2026-03-09T15:43:13.418 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T15:43:13.422 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T15:43:13.444 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/533
branch: squid
description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests}
email: null
first_in_suite: false
flavor: default
job_id: '533'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      client:
        debug ms: 1
      global:
        mon election default strategy: 1
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on pool no app: false
      osd:
        debug ms: 1
        debug osd: 20
        osd class default list: '*'
        osd class load list: '*'
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - reached quota
    - but it is still running
    - overall HEALTH_
    - \(POOL_FULL\)
    - \(SMALLER_PGP_NUM\)
    - \(CACHE_POOL_NO_HIT_SET\)
    - \(CACHE_POOL_NEAR_FULL\)
    - \(POOL_APP_NOT_ENABLED\)
    - \(PG_AVAILABILITY\)
    - \(PG_DEGRADED\)
    - CEPHADM_STRAY_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: cephadm-package
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_packages:
    - cephadm
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm01.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIVxKNcv/GIG4sYTxHt073b7EbtVGYHVT4MUCvWyDF6vBzOq8YMccNmBOlsDzG2cpNCpduuSbsgVSUKUHOwgqCI=
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNuQwmZganNm2yjfSat3+XXX2chIr55BA2LJ+PX7csW3uuveGuADsUCUry2aGGU6gCTCU3WvZsCk92lMn+UjcGQ=
tasks:
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- workunit:
    clients:
      client.0:
      - rados/test.sh
      - rados/test_pool_quota.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T15:43:13.444 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T15:43:13.445 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T15:43:13.445 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T15:43:13.445 INFO:teuthology.task.internal:Checking packages...
2026-03-09T15:43:13.445 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T15:43:13.445 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T15:43:13.445 INFO:teuthology.packaging:ref: None
2026-03-09T15:43:13.445 INFO:teuthology.packaging:tag: None
2026-03-09T15:43:13.445 INFO:teuthology.packaging:branch: squid
2026-03-09T15:43:13.445 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T15:43:13.445 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-09T15:43:14.078 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-09T15:43:14.079 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T15:43:14.080 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T15:43:14.080 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T15:43:14.081 INFO:teuthology.task.internal:Saving configuration
2026-03-09T15:43:14.086 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T15:43:14.087 INFO:teuthology.task.internal.check_lock:Checking locks...
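
The check_packages step above resolves the branch and sha1 into ready builds by querying Shaman with the exact URL shown in the log. A minimal sketch of the same lookup, assuming the third-party requests library; treating the response as a JSON list of build records is an assumption, not something this log shows:

# Sketch: ask Shaman whether ready builds exist for a given ref/flavor/distro.
# The query parameters mirror the URL in the log above; response handling is assumed.
import requests

SHAMAN = "https://shaman.ceph.com/api/search"

def find_builds(ref="squid", flavor="default", distro="ubuntu/22.04/x86_64"):
    params = {
        "status": "ready",
        "project": "ceph",
        "flavor": flavor,
        "distros": distro,
        "ref": ref,
    }
    resp = requests.get(SHAMAN, params=params, timeout=30)
    resp.raise_for_status()
    builds = resp.json()  # assumed: a JSON list of build records
    if not builds:
        raise RuntimeError(f"no ready builds for ref={ref} flavor={flavor}")
    return builds

if __name__ == "__main__":
    print(len(find_builds()), "ready build(s) found")
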
2026-03-09T15:43:14.094 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm01.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/533', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 15:42:03.966398', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:01', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIVxKNcv/GIG4sYTxHt073b7EbtVGYHVT4MUCvWyDF6vBzOq8YMccNmBOlsDzG2cpNCpduuSbsgVSUKUHOwgqCI='} 2026-03-09T15:43:14.099 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm09.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/533', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 15:42:03.965866', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:09', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNuQwmZganNm2yjfSat3+XXX2chIr55BA2LJ+PX7csW3uuveGuADsUCUry2aGGU6gCTCU3WvZsCk92lMn+UjcGQ='} 2026-03-09T15:43:14.099 INFO:teuthology.run_tasks:Running task internal.add_remotes... 2026-03-09T15:43:14.100 INFO:teuthology.task.internal:roles: ubuntu@vm01.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a'] 2026-03-09T15:43:14.100 INFO:teuthology.task.internal:roles: ubuntu@vm09.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a'] 2026-03-09T15:43:14.100 INFO:teuthology.run_tasks:Running task console_log... 2026-03-09T15:43:14.106 DEBUG:teuthology.task.console_log:vm01 does not support IPMI; excluding 2026-03-09T15:43:14.112 DEBUG:teuthology.task.console_log:vm09 does not support IPMI; excluding 2026-03-09T15:43:14.112 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f8cbee7e170>, signals=[15]) 2026-03-09T15:43:14.112 INFO:teuthology.run_tasks:Running task internal.connect... 2026-03-09T15:43:14.113 INFO:teuthology.task.internal:Opening connections... 2026-03-09T15:43:14.113 DEBUG:teuthology.task.internal:connecting to ubuntu@vm01.local 2026-03-09T15:43:14.113 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60} 2026-03-09T15:43:14.175 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local 2026-03-09T15:43:14.176 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60} 2026-03-09T15:43:14.236 INFO:teuthology.run_tasks:Running task internal.push_inventory... 
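
check_lock and add_remotes above verify each target's lock record and then pair one role list with each host. A small illustrative helper, not teuthology's implementation: the role lists are copied from the log, and the status dicts are trimmed to just the fields the check needs (name, up, locked, locked_by).

# Sketch: fail fast unless every target is up and locked by this run's owner,
# then map hostname -> role list as add_remotes reports above.
ROLES = [
    ["mon.a", "mon.c", "mgr.y", "osd.0", "osd.1", "osd.2", "osd.3",
     "client.0", "ceph.rgw.foo.a", "node-exporter.a", "alertmanager.a"],
    ["mon.b", "mgr.x", "osd.4", "osd.5", "osd.6", "osd.7", "client.1",
     "prometheus.a", "grafana.a", "node-exporter.b", "ceph.iscsi.iscsi.a"],
]

def assign_roles(statuses, roles=ROLES, owner="kyr"):
    """Map hostname -> role list, refusing any node that is not usable."""
    if len(statuses) != len(roles):
        raise ValueError("need exactly one role list per locked target")
    remotes = {}
    for status, role_list in zip(statuses, roles):
        ok = status["up"] and status["locked"] and status["locked_by"] == owner
        if not ok:
            raise RuntimeError(f"{status['name']} is not locked/up for this job")
        remotes[status["name"]] = role_list
    return remotes

# assign_roles([{"name": "vm01.local", "up": True, "locked": True, "locked_by": "kyr"},
#               {"name": "vm09.local", "up": True, "locked": True, "locked_by": "kyr"}])
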
2026-03-09T15:43:14.237 DEBUG:teuthology.orchestra.run.vm01:> uname -m 2026-03-09T15:43:14.263 INFO:teuthology.orchestra.run.vm01.stdout:x86_64 2026-03-09T15:43:14.263 DEBUG:teuthology.orchestra.run.vm01:> cat /etc/os-release 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS" 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:NAME="Ubuntu" 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_ID="22.04" 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)" 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_CODENAME=jammy 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:ID=ubuntu 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:ID_LIKE=debian 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:HOME_URL="https://www.ubuntu.com/" 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:SUPPORT_URL="https://help.ubuntu.com/" 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" 2026-03-09T15:43:14.308 INFO:teuthology.orchestra.run.vm01.stdout:UBUNTU_CODENAME=jammy 2026-03-09T15:43:14.309 INFO:teuthology.lock.ops:Updating vm01.local on lock server 2026-03-09T15:43:14.322 DEBUG:teuthology.orchestra.run.vm09:> uname -m 2026-03-09T15:43:14.325 INFO:teuthology.orchestra.run.vm09.stdout:x86_64 2026-03-09T15:43:14.325 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS" 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:NAME="Ubuntu" 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="22.04" 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)" 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_CODENAME=jammy 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:ID=ubuntu 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE=debian 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://www.ubuntu.com/" 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:SUPPORT_URL="https://help.ubuntu.com/" 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" 2026-03-09T15:43:14.368 INFO:teuthology.orchestra.run.vm09.stdout:UBUNTU_CODENAME=jammy 2026-03-09T15:43:14.368 INFO:teuthology.lock.ops:Updating vm09.local on lock server 2026-03-09T15:43:14.373 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles... 2026-03-09T15:43:14.375 INFO:teuthology.run_tasks:Running task internal.check_conflict... 2026-03-09T15:43:14.376 INFO:teuthology.task.internal:Checking for old test directory... 2026-03-09T15:43:14.376 DEBUG:teuthology.orchestra.run.vm01:> test '!' -e /home/ubuntu/cephtest 2026-03-09T15:43:14.377 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest 2026-03-09T15:43:14.412 INFO:teuthology.run_tasks:Running task internal.check_ceph_data... 
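
push_inventory above derives each machine's inventory from `uname -m` and /etc/os-release. A minimal sketch of that parsing; the function name and the shape of the returned dict are illustrative only.

# Sketch: collect arch and OS identity the way push_inventory does above.
import subprocess

def host_inventory():
    arch = subprocess.run(["uname", "-m"], capture_output=True,
                          text=True, check=True).stdout.strip()
    info = {}
    with open("/etc/os-release") as f:
        for line in f:
            line = line.strip()
            if line and "=" in line:
                key, _, value = line.partition("=")
                info[key] = value.strip('"')
    return {"arch": arch, "os_type": info.get("ID"), "os_version": info.get("VERSION_ID")}

if __name__ == "__main__":
    print(host_inventory())  # e.g. {'arch': 'x86_64', 'os_type': 'ubuntu', 'os_version': '22.04'}
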
2026-03-09T15:43:14.413 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph... 2026-03-09T15:43:14.413 DEBUG:teuthology.orchestra.run.vm01:> test -z $(ls -A /var/lib/ceph) 2026-03-09T15:43:14.422 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph) 2026-03-09T15:43:14.424 INFO:teuthology.orchestra.run.vm01.stderr:ls: cannot access '/var/lib/ceph': No such file or directory 2026-03-09T15:43:14.457 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory 2026-03-09T15:43:14.457 INFO:teuthology.run_tasks:Running task internal.vm_setup... 2026-03-09T15:43:14.465 DEBUG:teuthology.orchestra.run.vm01:> test -e /ceph-qa-ready 2026-03-09T15:43:14.468 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T15:43:14.713 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready 2026-03-09T15:43:14.716 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T15:43:14.955 INFO:teuthology.run_tasks:Running task internal.base... 2026-03-09T15:43:14.957 INFO:teuthology.task.internal:Creating test directory... 2026-03-09T15:43:14.957 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest 2026-03-09T15:43:14.958 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest 2026-03-09T15:43:14.962 INFO:teuthology.run_tasks:Running task internal.archive_upload... 2026-03-09T15:43:14.963 INFO:teuthology.run_tasks:Running task internal.archive... 2026-03-09T15:43:14.964 INFO:teuthology.task.internal:Creating archive directory... 2026-03-09T15:43:14.964 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive 2026-03-09T15:43:15.002 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive 2026-03-09T15:43:15.010 INFO:teuthology.run_tasks:Running task internal.coredump... 2026-03-09T15:43:15.012 INFO:teuthology.task.internal:Enabling coredump saving... 
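
The check_ceph_data step above runs `test -z $(ls -A /var/lib/ceph)`, which also succeeds when the directory is missing (ls fails and the substitution is empty), which is exactly what happens on both nodes here. A sketch that states that intent explicitly; the helper is illustrative, not teuthology's code.

# Sketch: refuse to reuse a node whose /var/lib/ceph already holds data;
# an absent directory counts as clean, matching the shell check above.
import os

def ceph_data_dir_is_clean(path="/var/lib/ceph"):
    if not os.path.isdir(path):
        return True
    return len(os.listdir(path)) == 0

if __name__ == "__main__":
    assert ceph_data_dir_is_clean(), "/var/lib/ceph is not empty; refusing to reuse node"
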
2026-03-09T15:43:15.012 DEBUG:teuthology.orchestra.run.vm01:> test -f /run/.containerenv -o -f /.dockerenv 2026-03-09T15:43:15.048 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T15:43:15.048 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv 2026-03-09T15:43:15.053 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T15:43:15.053 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf 2026-03-09T15:43:15.090 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf 2026-03-09T15:43:15.098 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-09T15:43:15.100 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-09T15:43:15.103 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-09T15:43:15.105 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-09T15:43:15.106 INFO:teuthology.run_tasks:Running task internal.sudo... 2026-03-09T15:43:15.107 INFO:teuthology.task.internal:Configuring sudo... 2026-03-09T15:43:15.107 DEBUG:teuthology.orchestra.run.vm01:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers 2026-03-09T15:43:15.146 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers 2026-03-09T15:43:15.156 INFO:teuthology.run_tasks:Running task internal.syslog... 2026-03-09T15:43:15.158 INFO:teuthology.task.internal.syslog:Starting syslog monitoring... 
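
The coredump step above creates a per-run archive directory and points kernel.core_pattern at it (%t is the dump time, %p the PID), then persists the setting via /etc/sysctl.conf. A sketch of the same sequence; paths are copied from the log and root privileges are assumed.

# Sketch: what the coredump task runs on each node above.
import subprocess

def enable_coredumps(archive="/home/ubuntu/cephtest/archive/coredump"):
    pattern = f"kernel.core_pattern={archive}/%t.%p.core"
    subprocess.run(["install", "-d", "-m0755", "--", archive], check=True)
    subprocess.run(["sudo", "sysctl", "-w", pattern], check=True)
    # persist across reboots, as the task does with `tee -a /etc/sysctl.conf`
    subprocess.run(f"echo {pattern} | sudo tee -a /etc/sysctl.conf",
                   shell=True, check=True)
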
2026-03-09T15:43:15.158 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog 2026-03-09T15:43:15.197 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog 2026-03-09T15:43:15.201 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T15:43:15.245 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T15:43:15.288 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:43:15.289 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf 2026-03-09T15:43:15.336 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T15:43:15.339 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T15:43:15.383 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:43:15.411 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf 2026-03-09T15:43:15.436 DEBUG:teuthology.orchestra.run.vm01:> sudo service rsyslog restart 2026-03-09T15:43:15.449 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart 2026-03-09T15:43:15.492 INFO:teuthology.run_tasks:Running task internal.timer... 2026-03-09T15:43:15.503 INFO:teuthology.task.internal:Starting timer... 2026-03-09T15:43:15.503 INFO:teuthology.run_tasks:Running task pcp... 2026-03-09T15:43:15.518 INFO:teuthology.run_tasks:Running task selinux... 2026-03-09T15:43:15.531 INFO:teuthology.task.selinux:Excluding vm01: VMs are not yet supported 2026-03-09T15:43:15.531 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported 2026-03-09T15:43:15.531 DEBUG:teuthology.task.selinux:Getting current SELinux state 2026-03-09T15:43:15.531 DEBUG:teuthology.task.selinux:Existing SELinux modes: {} 2026-03-09T15:43:15.531 INFO:teuthology.task.selinux:Putting SELinux into permissive mode 2026-03-09T15:43:15.531 INFO:teuthology.run_tasks:Running task ansible.cephlab... 
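
The syslog step above stages kern.log and misc.log under the archive, writes an rsyslog drop-in with `sudo dd of=/etc/rsyslog.d/80-cephtest.conf`, and restarts rsyslog. The drop-in's contents are not captured in this log, so the two rules below are only a hypothetical example of routing kernel and non-kernel messages into those files.

# Sketch: stage per-run log files and install an rsyslog drop-in (rule text is hypothetical).
import subprocess

ARCHIVE = "/home/ubuntu/cephtest/archive/syslog"
RULES = f"""kern.* -{ARCHIVE}/kern.log;RSYSLOG_FileFormat
*.*;kern.none -{ARCHIVE}/misc.log;RSYSLOG_FileFormat
"""

def start_syslog_capture():
    subprocess.run(["mkdir", "-p", "-m0755", "--", ARCHIVE], check=True)
    for name in ("kern.log", "misc.log"):
        subprocess.run(["install", "-m", "666", "/dev/null", f"{ARCHIVE}/{name}"], check=True)
    subprocess.run(["sudo", "dd", "of=/etc/rsyslog.d/80-cephtest.conf"],
                   input=RULES, text=True, check=True)
    subprocess.run(["sudo", "service", "rsyslog", "restart"], check=True)
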
2026-03-09T15:43:15.533 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}} 2026-03-09T15:43:15.533 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git 2026-03-09T15:43:15.535 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin 2026-03-09T15:43:16.163 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main 2026-03-09T15:43:16.169 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}] 2026-03-09T15:43:16.169 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryea2yh4j0 --limit vm01.local,vm09.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs 2026-03-09T15:46:14.094 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm01.local'), Remote(name='ubuntu@vm09.local')] 2026-03-09T15:46:14.094 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm01.local' 2026-03-09T15:46:14.095 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60} 2026-03-09T15:46:14.159 DEBUG:teuthology.orchestra.run.vm01:> true 2026-03-09T15:46:14.389 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm01.local' 2026-03-09T15:46:14.389 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local' 2026-03-09T15:46:14.389 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60} 2026-03-09T15:46:14.451 DEBUG:teuthology.orchestra.run.vm09:> true 2026-03-09T15:46:14.685 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local' 2026-03-09T15:46:14.685 INFO:teuthology.run_tasks:Running task clock... 2026-03-09T15:46:14.687 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew... 
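
The ansible.cephlab step above runs the recorded ansible-playbook command against both targets. A sketch that rebuilds that invocation; the inventory file is generated per run, so the path used here is only an example.

# Sketch: reproduce the ansible-playbook command line shown in the log above.
import json
import subprocess

def run_cephlab(limit, skip_tags, inventory="/tmp/teuth_ansible_inventory_example"):
    extra_vars = {"ansible_ssh_user": "ubuntu", "timezone": "UTC"}
    cmd = [
        "ansible-playbook", "-v",
        "--extra-vars", json.dumps(extra_vars),
        "-i", inventory,
        "--limit", ",".join(limit),
        "/home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml",
        "--skip-tags", skip_tags,
    ]
    subprocess.run(cmd, check=True)

# run_cephlab(["vm01.local", "vm09.local"],
#             "nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,"
#             "ntp-client,resolvconf,cpan,nfs")
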
2026-03-09T15:46:14.687 INFO:teuthology.orchestra.run:Running command with timeout 360 2026-03-09T15:46:14.688 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T15:46:14.689 INFO:teuthology.orchestra.run:Running command with timeout 360 2026-03-09T15:46:14.689 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T15:46:14.705 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting 2026-03-09T15:46:14.705 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: Command line: ntpd -gq 2026-03-09T15:46:14.705 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: ---------------------------------------------------- 2026-03-09T15:46:14.705 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: ntp-4 is maintained by Network Time Foundation, 2026-03-09T15:46:14.705 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: Inc. (NTF), a non-profit 501(c)(3) public-benefit 2026-03-09T15:46:14.705 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: corporation. 
Support and training for ntp-4 are 2026-03-09T15:46:14.705 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: available at https://www.nwtime.org/support 2026-03-09T15:46:14.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: ---------------------------------------------------- 2026-03-09T15:46:14.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: proto: precision = 0.030 usec (-25) 2026-03-09T15:46:14.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: basedate set to 2022-02-04 2026-03-09T15:46:14.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: gps base set to 2022-02-06 (week 2196) 2026-03-09T15:46:14.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature 2026-03-09T15:46:14.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37 2026-03-09T15:46:14.706 INFO:teuthology.orchestra.run.vm01.stderr: 9 Mar 15:46:14 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago 2026-03-09T15:46:14.707 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen and drop on 0 v6wildcard [::]:123 2026-03-09T15:46:14.707 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen and drop on 1 v4wildcard 0.0.0.0:123 2026-03-09T15:46:14.707 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen normally on 2 lo 127.0.0.1:123 2026-03-09T15:46:14.707 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen normally on 3 ens3 192.168.123.101:123 2026-03-09T15:46:14.708 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen normally on 4 lo [::1]:123 2026-03-09T15:46:14.708 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:1%2]:123 2026-03-09T15:46:14.708 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:14 ntpd[16111]: Listening on routing socket on fd #22 for interface updates 2026-03-09T15:46:14.744 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting 2026-03-09T15:46:14.744 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: Command line: ntpd -gq 2026-03-09T15:46:14.744 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: ---------------------------------------------------- 2026-03-09T15:46:14.744 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: ntp-4 is maintained by Network Time Foundation, 2026-03-09T15:46:14.744 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: Inc. (NTF), a non-profit 501(c)(3) public-benefit 2026-03-09T15:46:14.744 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: corporation. 
Support and training for ntp-4 are 2026-03-09T15:46:14.744 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: available at https://www.nwtime.org/support 2026-03-09T15:46:14.744 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: ---------------------------------------------------- 2026-03-09T15:46:14.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: proto: precision = 0.029 usec (-25) 2026-03-09T15:46:14.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: basedate set to 2022-02-04 2026-03-09T15:46:14.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: gps base set to 2022-02-06 (week 2196) 2026-03-09T15:46:14.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature 2026-03-09T15:46:14.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37 2026-03-09T15:46:14.745 INFO:teuthology.orchestra.run.vm09.stderr: 9 Mar 15:46:14 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago 2026-03-09T15:46:14.746 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen and drop on 0 v6wildcard [::]:123 2026-03-09T15:46:14.746 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen and drop on 1 v4wildcard 0.0.0.0:123 2026-03-09T15:46:14.746 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen normally on 2 lo 127.0.0.1:123 2026-03-09T15:46:14.746 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen normally on 3 ens3 192.168.123.109:123 2026-03-09T15:46:14.746 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen normally on 4 lo [::1]:123 2026-03-09T15:46:14.746 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:9%2]:123 2026-03-09T15:46:14.746 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:14 ntpd[16111]: Listening on routing socket on fd #22 for interface updates 2026-03-09T15:46:15.707 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:15 ntpd[16111]: Soliciting pool server 77.90.0.148 2026-03-09T15:46:15.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:15 ntpd[16111]: Soliciting pool server 77.90.0.148 2026-03-09T15:46:16.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:16 ntpd[16111]: Soliciting pool server 46.224.156.215 2026-03-09T15:46:16.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:16 ntpd[16111]: Soliciting pool server 213.239.234.28 2026-03-09T15:46:16.744 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:16 ntpd[16111]: Soliciting pool server 46.224.156.215 2026-03-09T15:46:16.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:16 ntpd[16111]: Soliciting pool server 213.239.234.28 2026-03-09T15:46:17.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:17 ntpd[16111]: Soliciting pool server 213.209.109.45 2026-03-09T15:46:17.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:17 ntpd[16111]: Soliciting pool server 141.144.246.224 2026-03-09T15:46:17.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:17 ntpd[16111]: Soliciting pool server 176.9.157.155 2026-03-09T15:46:17.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:17 ntpd[16111]: 
Soliciting pool server 141.144.246.224 2026-03-09T15:46:17.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:17 ntpd[16111]: Soliciting pool server 176.9.157.155 2026-03-09T15:46:18.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:18 ntpd[16111]: Soliciting pool server 78.47.56.71 2026-03-09T15:46:18.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:18 ntpd[16111]: Soliciting pool server 5.45.97.204 2026-03-09T15:46:18.707 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:18 ntpd[16111]: Soliciting pool server 212.18.3.18 2026-03-09T15:46:18.707 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:18 ntpd[16111]: Soliciting pool server 5.189.151.39 2026-03-09T15:46:18.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:18 ntpd[16111]: Soliciting pool server 78.47.56.71 2026-03-09T15:46:18.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:18 ntpd[16111]: Soliciting pool server 212.18.3.18 2026-03-09T15:46:18.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:18 ntpd[16111]: Soliciting pool server 5.189.151.39 2026-03-09T15:46:19.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:19 ntpd[16111]: Soliciting pool server 202.61.201.34 2026-03-09T15:46:19.707 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:19 ntpd[16111]: Soliciting pool server 81.3.27.46 2026-03-09T15:46:19.707 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:19 ntpd[16111]: Soliciting pool server 104.167.24.26 2026-03-09T15:46:19.707 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:19 ntpd[16111]: Soliciting pool server 185.125.190.56 2026-03-09T15:46:19.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:19 ntpd[16111]: Soliciting pool server 202.61.201.34 2026-03-09T15:46:19.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:19 ntpd[16111]: Soliciting pool server 81.3.27.46 2026-03-09T15:46:19.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:19 ntpd[16111]: Soliciting pool server 185.125.190.56 2026-03-09T15:46:20.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:20 ntpd[16111]: Soliciting pool server 185.125.190.58 2026-03-09T15:46:20.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:20 ntpd[16111]: Soliciting pool server 91.98.23.146 2026-03-09T15:46:20.706 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:20 ntpd[16111]: Soliciting pool server 51.75.67.47 2026-03-09T15:46:20.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:20 ntpd[16111]: Soliciting pool server 185.125.190.58 2026-03-09T15:46:20.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:20 ntpd[16111]: Soliciting pool server 91.98.23.146 2026-03-09T15:46:20.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:20 ntpd[16111]: Soliciting pool server 51.75.67.47 2026-03-09T15:46:21.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:21 ntpd[16111]: Soliciting pool server 185.125.190.57 2026-03-09T15:46:21.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:21 ntpd[16111]: Soliciting pool server 85.214.83.151 2026-03-09T15:46:21.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:21 ntpd[16111]: Soliciting pool server 2a03:4000:4f:9c:5852:52ff:fe47:3970 2026-03-09T15:46:22.745 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:22 ntpd[16111]: Soliciting pool server 91.189.91.157 2026-03-09T15:46:24.768 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 15:46:24 ntpd[16111]: ntpd: time slew +0.000173 s 2026-03-09T15:46:24.768 INFO:teuthology.orchestra.run.vm09.stdout:ntpd: time slew 
+0.000173s 2026-03-09T15:46:24.789 INFO:teuthology.orchestra.run.vm09.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T15:46:24.789 INFO:teuthology.orchestra.run.vm09.stdout:============================================================================== 2026-03-09T15:46:24.789 INFO:teuthology.orchestra.run.vm09.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T15:46:24.789 INFO:teuthology.orchestra.run.vm09.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T15:46:24.789 INFO:teuthology.orchestra.run.vm09.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T15:46:24.789 INFO:teuthology.orchestra.run.vm09.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T15:46:24.789 INFO:teuthology.orchestra.run.vm09.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T15:46:25.731 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 15:46:25 ntpd[16111]: ntpd: time slew +0.000391 s 2026-03-09T15:46:25.731 INFO:teuthology.orchestra.run.vm01.stdout:ntpd: time slew +0.000391s 2026-03-09T15:46:25.751 INFO:teuthology.orchestra.run.vm01.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T15:46:25.751 INFO:teuthology.orchestra.run.vm01.stdout:============================================================================== 2026-03-09T15:46:25.751 INFO:teuthology.orchestra.run.vm01.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T15:46:25.752 INFO:teuthology.orchestra.run.vm01.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T15:46:25.752 INFO:teuthology.orchestra.run.vm01.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T15:46:25.752 INFO:teuthology.orchestra.run.vm01.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T15:46:25.752 INFO:teuthology.orchestra.run.vm01.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T15:46:25.752 INFO:teuthology.run_tasks:Running task install... 
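
The clock task that just finished (see the command at the start of the task above) stops whichever time daemon is installed, forces a one-shot correction with `ntpd -gq` or `chronyc makestep`, restarts the daemon, and prints peer status. A sketch of running that same command chain with the task's 360-second timeout; the command text is copied from the log.

# Sketch: the clock task's sync-and-report sequence, wrapped with the logged timeout.
import subprocess

SYNC_CMD = (
    "sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || "
    "sudo systemctl stop chronyd.service ; "
    "sudo ntpd -gq || sudo chronyc makestep ; "
    "sudo systemctl start ntp.service || sudo systemctl start ntpd.service || "
    "sudo systemctl start chronyd.service ; "
    "PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true"
)

def sync_clock():
    subprocess.run(SYNC_CMD, shell=True, check=True, timeout=360)
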
2026-03-09T15:46:25.754 DEBUG:teuthology.task.install:project ceph 2026-03-09T15:46:25.754 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_packages': ['cephadm'], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}} 2026-03-09T15:46:25.754 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}} 2026-03-09T15:46:25.754 INFO:teuthology.task.install:Using flavor: default 2026-03-09T15:46:25.756 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']} 2026-03-09T15:46:25.756 INFO:teuthology.task.install:extra packages: [] 2026-03-09T15:46:25.756 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-key list | grep Ceph 2026-03-09T15:46:25.756 DEBUG:teuthology.orchestra.run.vm09:> sudo apt-key list | grep Ceph 2026-03-09T15:46:25.797 INFO:teuthology.orchestra.run.vm09.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)). 2026-03-09T15:46:25.820 INFO:teuthology.orchestra.run.vm09.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-09T15:46:25.820 INFO:teuthology.orchestra.run.vm09.stdout:uid [ unknown] Ceph.com (release key) 2026-03-09T15:46:25.820 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-09T15:46:25.820 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64 2026-03-09T15:46:25.820 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:46:25.882 INFO:teuthology.orchestra.run.vm01.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)). 
2026-03-09T15:46:25.883 INFO:teuthology.orchestra.run.vm01.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-09T15:46:25.883 INFO:teuthology.orchestra.run.vm01.stdout:uid [ unknown] Ceph.com (release key) 2026-03-09T15:46:25.883 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-09T15:46:25.883 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64 2026-03-09T15:46:25.883 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:46:26.466 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/ 2026-03-09T15:46:26.466 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T15:46:26.480 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/ 2026-03-09T15:46:26.480 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T15:46:26.951 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:46:26.951 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-09T15:46:26.959 DEBUG:teuthology.orchestra.run.vm09:> sudo apt-get update 2026-03-09T15:46:26.993 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:46:26.993 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-09T15:46:27.000 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-get update 2026-03-09T15:46:27.132 INFO:teuthology.orchestra.run.vm09.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T15:46:27.275 INFO:teuthology.orchestra.run.vm09.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T15:46:27.296 INFO:teuthology.orchestra.run.vm01.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T15:46:27.297 INFO:teuthology.orchestra.run.vm01.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T15:46:27.311 INFO:teuthology.orchestra.run.vm09.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T15:46:27.331 INFO:teuthology.orchestra.run.vm01.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T15:46:27.347 INFO:teuthology.orchestra.run.vm09.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T15:46:27.366 INFO:teuthology.orchestra.run.vm01.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T15:46:27.566 INFO:teuthology.orchestra.run.vm01.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease 2026-03-09T15:46:27.579 INFO:teuthology.orchestra.run.vm09.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease 2026-03-09T15:46:27.679 INFO:teuthology.orchestra.run.vm01.stdout:Get:6 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B] 2026-03-09T15:46:27.695 INFO:teuthology.orchestra.run.vm09.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B] 2026-03-09T15:46:27.791 INFO:teuthology.orchestra.run.vm01.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-09T15:46:27.811 INFO:teuthology.orchestra.run.vm09.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-09T15:46:27.904 INFO:teuthology.orchestra.run.vm01.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB] 2026-03-09T15:46:27.929 INFO:teuthology.orchestra.run.vm09.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB] 2026-03-09T15:46:27.981 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 25.8 kB in 1s (31.2 kB/s) 2026-03-09T15:46:28.008 INFO:teuthology.orchestra.run.vm09.stdout:Fetched 25.8 kB in 1s (28.7 kB/s) 2026-03-09T15:46:28.708 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T15:46:28.722 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy 2026-03-09T15:46:28.723 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T15:46:28.736 DEBUG:teuthology.orchestra.run.vm09:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy 2026-03-09T15:46:28.757 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T15:46:28.771 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 
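
The install step above pins every project package to the single build version resolved from chacra (19.2.3-678-ge911bdeb-1jammy). A sketch of how such a pinned apt-get argument list can be assembled; the helper is illustrative and omits the deprecated --force-yes flag seen in the log.

# Sketch: build the pinned `pkg=version` argument list used in the apt-get call above.
DEB_PACKAGES = [
    "ceph", "cephadm", "ceph-mds", "ceph-mgr", "ceph-common", "ceph-fuse",
    "ceph-test", "ceph-volume", "radosgw", "python3-rados", "python3-rgw",
    "python3-cephfs", "python3-rbd", "libcephfs2", "libcephfs-dev",
    "librados2", "librbd1", "rbd-fuse",
]

def pinned_install_args(version="19.2.3-678-ge911bdeb-1jammy", packages=DEB_PACKAGES):
    """Return apt-get arguments that pin every project package to one build."""
    base = [
        "sudo", "DEBIAN_FRONTEND=noninteractive", "apt-get", "-y",
        "-o", "Dpkg::Options::=--force-confdef",
        "-o", "Dpkg::Options::=--force-confold",
        "install",
    ]
    return base + [f"{pkg}={version}" for pkg in packages]
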
2026-03-09T15:46:28.937 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T15:46:28.937 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T15:46:28.976 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T15:46:28.977 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T15:46:29.174 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T15:46:29.175 INFO:teuthology.orchestra.run.vm09.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T15:46:29.176 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T15:46:29.176 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T15:46:29.177 INFO:teuthology.orchestra.run.vm09.stdout:The following additional packages will be installed: 2026-03-09T15:46:29.177 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-09T15:46:29.177 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-09T15:46:29.178 INFO:teuthology.orchestra.run.vm09.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T15:46:29.178 INFO:teuthology.orchestra.run.vm09.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-09T15:46:29.178 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-pytest python3-repoze.lru 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml python3-waitress python3-wcwidth python3-webob 
2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T15:46:29.179 INFO:teuthology.orchestra.run.vm09.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-09T15:46:29.180 INFO:teuthology.orchestra.run.vm09.stdout:Suggested packages: 2026-03-09T15:46:29.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc 2026-03-09T15:46:29.180 INFO:teuthology.orchestra.run.vm09.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi 2026-03-09T15:46:29.180 INFO:teuthology.orchestra.run.vm09.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion 2026-03-09T15:46:29.180 INFO:teuthology.orchestra.run.vm09.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap 2026-03-09T15:46:29.180 INFO:teuthology.orchestra.run.vm09.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc 2026-03-09T15:46:29.180 INFO:teuthology.orchestra.run.vm09.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol 2026-03-09T15:46:29.180 INFO:teuthology.orchestra.run.vm09.stdout: smart-notifier mailx | mailutils 2026-03-09T15:46:29.181 INFO:teuthology.orchestra.run.vm09.stdout:Recommended packages: 2026-03-09T15:46:29.181 INFO:teuthology.orchestra.run.vm09.stdout: btrfs-tools 2026-03-09T15:46:29.223 INFO:teuthology.orchestra.run.vm09.stdout:The following NEW packages will be installed: 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-pluggy 
python3-portend 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools 2026-03-09T15:46:29.224 INFO:teuthology.orchestra.run.vm09.stdout: socat unzip xmlstarlet zip 2026-03-09T15:46:29.225 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be upgraded: 2026-03-09T15:46:29.225 INFO:teuthology.orchestra.run.vm09.stdout: librados2 librbd1 2026-03-09T15:46:29.232 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T15:46:29.233 INFO:teuthology.orchestra.run.vm01.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T15:46:29.233 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T15:46:29.233 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T15:46:29.234 INFO:teuthology.orchestra.run.vm01.stdout:The following additional packages will be installed: 2026-03-09T15:46:29.234 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-09T15:46:29.234 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-09T15:46:29.234 INFO:teuthology.orchestra.run.vm01.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T15:46:29.235 INFO:teuthology.orchestra.run.vm01.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-09T15:46:29.235 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph 2026-03-09T15:46:29.235 INFO:teuthology.orchestra.run.vm01.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-pytest python3-repoze.lru 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml python3-waitress python3-wcwidth python3-webob 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T15:46:29.236 INFO:teuthology.orchestra.run.vm01.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-09T15:46:29.237 INFO:teuthology.orchestra.run.vm01.stdout:Suggested packages: 2026-03-09T15:46:29.237 INFO:teuthology.orchestra.run.vm01.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc 2026-03-09T15:46:29.237 INFO:teuthology.orchestra.run.vm01.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi 2026-03-09T15:46:29.237 INFO:teuthology.orchestra.run.vm01.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion 2026-03-09T15:46:29.237 INFO:teuthology.orchestra.run.vm01.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc 
python3-dap 2026-03-09T15:46:29.237 INFO:teuthology.orchestra.run.vm01.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc 2026-03-09T15:46:29.237 INFO:teuthology.orchestra.run.vm01.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol 2026-03-09T15:46:29.237 INFO:teuthology.orchestra.run.vm01.stdout: smart-notifier mailx | mailutils 2026-03-09T15:46:29.237 INFO:teuthology.orchestra.run.vm01.stdout:Recommended packages: 2026-03-09T15:46:29.237 INFO:teuthology.orchestra.run.vm01.stdout: btrfs-tools 2026-03-09T15:46:29.279 INFO:teuthology.orchestra.run.vm01.stdout:The following NEW packages will be installed: 2026-03-09T15:46:29.279 INFO:teuthology.orchestra.run.vm01.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-09T15:46:29.279 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-09T15:46:29.279 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-09T15:46:29.279 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 2026-03-09T15:46:29.279 INFO:teuthology.orchestra.run.vm01.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-09T15:46:29.279 INFO:teuthology.orchestra.run.vm01.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev 2026-03-09T15:46:29.279 INFO:teuthology.orchestra.run.vm01.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth 2026-03-09T15:46:29.280 
INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools 2026-03-09T15:46:29.280 INFO:teuthology.orchestra.run.vm01.stdout: socat unzip xmlstarlet zip 2026-03-09T15:46:29.281 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be upgraded: 2026-03-09T15:46:29.281 INFO:teuthology.orchestra.run.vm01.stdout: librados2 librbd1 2026-03-09T15:46:29.375 INFO:teuthology.orchestra.run.vm01.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T15:46:29.375 INFO:teuthology.orchestra.run.vm01.stdout:Need to get 178 MB of archives. 2026-03-09T15:46:29.375 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 782 MB of additional disk space will be used. 2026-03-09T15:46:29.375 INFO:teuthology.orchestra.run.vm01.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-09T15:46:29.414 INFO:teuthology.orchestra.run.vm01.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-09T15:46:29.415 INFO:teuthology.orchestra.run.vm01.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-09T15:46:29.423 INFO:teuthology.orchestra.run.vm01.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-09T15:46:29.451 INFO:teuthology.orchestra.run.vm01.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-09T15:46:29.452 INFO:teuthology.orchestra.run.vm01.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-09T15:46:29.472 INFO:teuthology.orchestra.run.vm01.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-09T15:46:29.484 INFO:teuthology.orchestra.run.vm01.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-09T15:46:29.484 INFO:teuthology.orchestra.run.vm01.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-09T15:46:29.485 INFO:teuthology.orchestra.run.vm01.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-09T15:46:29.485 INFO:teuthology.orchestra.run.vm01.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-09T15:46:29.489 INFO:teuthology.orchestra.run.vm01.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T15:46:29.490 INFO:teuthology.orchestra.run.vm01.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T15:46:29.491 INFO:teuthology.orchestra.run.vm01.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T15:46:29.491 INFO:teuthology.orchestra.run.vm01.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-09T15:46:29.492 INFO:teuthology.orchestra.run.vm01.stdout:Get:16 
https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-09T15:46:29.493 INFO:teuthology.orchestra.run.vm01.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T15:46:29.494 INFO:teuthology.orchestra.run.vm01.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T15:46:29.495 INFO:teuthology.orchestra.run.vm01.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T15:46:29.500 INFO:teuthology.orchestra.run.vm01.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T15:46:29.500 INFO:teuthology.orchestra.run.vm01.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T15:46:29.501 INFO:teuthology.orchestra.run.vm01.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T15:46:29.502 INFO:teuthology.orchestra.run.vm01.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T15:46:29.502 INFO:teuthology.orchestra.run.vm01.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T15:46:29.507 INFO:teuthology.orchestra.run.vm01.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T15:46:29.507 INFO:teuthology.orchestra.run.vm01.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T15:46:29.508 INFO:teuthology.orchestra.run.vm01.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T15:46:29.508 INFO:teuthology.orchestra.run.vm01.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T15:46:29.510 INFO:teuthology.orchestra.run.vm01.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T15:46:29.514 INFO:teuthology.orchestra.run.vm01.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T15:46:29.515 INFO:teuthology.orchestra.run.vm01.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T15:46:29.515 INFO:teuthology.orchestra.run.vm01.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T15:46:29.516 INFO:teuthology.orchestra.run.vm01.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T15:46:29.516 INFO:teuthology.orchestra.run.vm01.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T15:46:29.521 INFO:teuthology.orchestra.run.vm01.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T15:46:29.522 INFO:teuthology.orchestra.run.vm01.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T15:46:29.522 INFO:teuthology.orchestra.run.vm01.stdout:Get:37 
https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T15:46:29.526 INFO:teuthology.orchestra.run.vm01.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T15:46:29.526 INFO:teuthology.orchestra.run.vm01.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T15:46:29.529 INFO:teuthology.orchestra.run.vm01.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T15:46:29.530 INFO:teuthology.orchestra.run.vm01.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T15:46:29.531 INFO:teuthology.orchestra.run.vm01.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T15:46:29.533 INFO:teuthology.orchestra.run.vm01.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T15:46:29.534 INFO:teuthology.orchestra.run.vm01.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T15:46:29.536 INFO:teuthology.orchestra.run.vm01.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T15:46:29.537 INFO:teuthology.orchestra.run.vm01.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T15:46:29.537 INFO:teuthology.orchestra.run.vm01.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T15:46:29.576 INFO:teuthology.orchestra.run.vm01.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T15:46:29.577 INFO:teuthology.orchestra.run.vm01.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T15:46:29.578 INFO:teuthology.orchestra.run.vm01.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T15:46:29.586 INFO:teuthology.orchestra.run.vm01.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T15:46:29.586 INFO:teuthology.orchestra.run.vm01.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T15:46:29.587 INFO:teuthology.orchestra.run.vm01.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T15:46:29.587 INFO:teuthology.orchestra.run.vm01.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T15:46:29.588 INFO:teuthology.orchestra.run.vm01.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T15:46:29.588 INFO:teuthology.orchestra.run.vm01.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T15:46:29.590 INFO:teuthology.orchestra.run.vm01.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T15:46:29.594 
INFO:teuthology.orchestra.run.vm01.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T15:46:29.595 INFO:teuthology.orchestra.run.vm01.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T15:46:29.598 INFO:teuthology.orchestra.run.vm01.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T15:46:29.603 INFO:teuthology.orchestra.run.vm01.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T15:46:29.605 INFO:teuthology.orchestra.run.vm01.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T15:46:29.605 INFO:teuthology.orchestra.run.vm01.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T15:46:29.606 INFO:teuthology.orchestra.run.vm01.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T15:46:29.625 INFO:teuthology.orchestra.run.vm01.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T15:46:29.625 INFO:teuthology.orchestra.run.vm01.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T15:46:29.627 INFO:teuthology.orchestra.run.vm01.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T15:46:29.628 INFO:teuthology.orchestra.run.vm01.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T15:46:29.628 INFO:teuthology.orchestra.run.vm01.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T15:46:29.629 INFO:teuthology.orchestra.run.vm01.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T15:46:29.629 INFO:teuthology.orchestra.run.vm01.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T15:46:29.630 INFO:teuthology.orchestra.run.vm01.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T15:46:29.659 INFO:teuthology.orchestra.run.vm01.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T15:46:29.659 INFO:teuthology.orchestra.run.vm01.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T15:46:29.659 INFO:teuthology.orchestra.run.vm01.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T15:46:29.667 INFO:teuthology.orchestra.run.vm01.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T15:46:29.668 INFO:teuthology.orchestra.run.vm01.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T15:46:29.695 INFO:teuthology.orchestra.run.vm01.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 
2026-03-09T15:46:29.826 INFO:teuthology.orchestra.run.vm09.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T15:46:29.826 INFO:teuthology.orchestra.run.vm09.stdout:Need to get 178 MB of archives. 2026-03-09T15:46:29.826 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 782 MB of additional disk space will be used. 2026-03-09T15:46:29.826 INFO:teuthology.orchestra.run.vm09.stdout:Get:1 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-09T15:46:29.835 INFO:teuthology.orchestra.run.vm09.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-09T15:46:29.864 INFO:teuthology.orchestra.run.vm01.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-09T15:46:30.636 INFO:teuthology.orchestra.run.vm09.stdout:Get:3 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T15:46:30.662 INFO:teuthology.orchestra.run.vm09.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-09T15:46:30.674 INFO:teuthology.orchestra.run.vm01.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T15:46:30.678 INFO:teuthology.orchestra.run.vm09.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-09T15:46:30.759 INFO:teuthology.orchestra.run.vm09.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T15:46:30.771 INFO:teuthology.orchestra.run.vm09.stdout:Get:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T15:46:30.776 INFO:teuthology.orchestra.run.vm09.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T15:46:30.776 INFO:teuthology.orchestra.run.vm09.stdout:Get:9 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T15:46:30.780 INFO:teuthology.orchestra.run.vm09.stdout:Get:10 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T15:46:30.781 INFO:teuthology.orchestra.run.vm09.stdout:Get:11 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T15:46:30.787 INFO:teuthology.orchestra.run.vm09.stdout:Get:12 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T15:46:30.807 INFO:teuthology.orchestra.run.vm01.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T15:46:30.822 INFO:teuthology.orchestra.run.vm01.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T15:46:30.828 INFO:teuthology.orchestra.run.vm01.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T15:46:30.829 INFO:teuthology.orchestra.run.vm01.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T15:46:30.832 INFO:teuthology.orchestra.run.vm01.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T15:46:30.833 INFO:teuthology.orchestra.run.vm01.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T15:46:30.839 INFO:teuthology.orchestra.run.vm01.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T15:46:30.903 INFO:teuthology.orchestra.run.vm09.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-09T15:46:31.100 INFO:teuthology.orchestra.run.vm09.stdout:Get:14 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T15:46:31.100 INFO:teuthology.orchestra.run.vm09.stdout:Get:15 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T15:46:31.111 INFO:teuthology.orchestra.run.vm09.stdout:Get:16 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T15:46:31.151 INFO:teuthology.orchestra.run.vm01.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T15:46:31.153 INFO:teuthology.orchestra.run.vm01.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T15:46:31.159 INFO:teuthology.orchestra.run.vm01.stdout:Get:90 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T15:46:31.506 INFO:teuthology.orchestra.run.vm09.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-09T15:46:31.541 INFO:teuthology.orchestra.run.vm09.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-09T15:46:31.885 INFO:teuthology.orchestra.run.vm09.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-09T15:46:31.915 INFO:teuthology.orchestra.run.vm09.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-09T15:46:31.916 INFO:teuthology.orchestra.run.vm09.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-09T15:46:31.916 INFO:teuthology.orchestra.run.vm09.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-09T15:46:31.916 INFO:teuthology.orchestra.run.vm09.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-09T15:46:31.938 INFO:teuthology.orchestra.run.vm09.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T15:46:31.939 INFO:teuthology.orchestra.run.vm09.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T15:46:31.940 INFO:teuthology.orchestra.run.vm09.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T15:46:31.941 INFO:teuthology.orchestra.run.vm09.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-09T15:46:32.089 INFO:teuthology.orchestra.run.vm09.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-09T15:46:32.253 INFO:teuthology.orchestra.run.vm01.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T15:46:32.265 INFO:teuthology.orchestra.run.vm09.stdout:Get:29 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T15:46:32.414 INFO:teuthology.orchestra.run.vm09.stdout:Get:30 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T15:46:32.415 INFO:teuthology.orchestra.run.vm09.stdout:Get:31 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T15:46:32.416 INFO:teuthology.orchestra.run.vm09.stdout:Get:32 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T15:46:32.454 
INFO:teuthology.orchestra.run.vm09.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T15:46:32.460 INFO:teuthology.orchestra.run.vm09.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T15:46:32.460 INFO:teuthology.orchestra.run.vm09.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T15:46:32.461 INFO:teuthology.orchestra.run.vm09.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T15:46:32.461 INFO:teuthology.orchestra.run.vm09.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T15:46:32.461 INFO:teuthology.orchestra.run.vm09.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T15:46:32.461 INFO:teuthology.orchestra.run.vm09.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T15:46:32.461 INFO:teuthology.orchestra.run.vm09.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T15:46:32.462 INFO:teuthology.orchestra.run.vm09.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T15:46:32.466 INFO:teuthology.orchestra.run.vm01.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T15:46:32.467 INFO:teuthology.orchestra.run.vm01.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T15:46:32.469 INFO:teuthology.orchestra.run.vm01.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T15:46:32.500 INFO:teuthology.orchestra.run.vm09.stdout:Get:42 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T15:46:32.557 INFO:teuthology.orchestra.run.vm01.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T15:46:32.656 INFO:teuthology.orchestra.run.vm09.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T15:46:32.657 INFO:teuthology.orchestra.run.vm09.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T15:46:32.657 INFO:teuthology.orchestra.run.vm09.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T15:46:32.736 INFO:teuthology.orchestra.run.vm09.stdout:Get:46 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T15:46:32.780 
INFO:teuthology.orchestra.run.vm01.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T15:46:32.891 INFO:teuthology.orchestra.run.vm09.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T15:46:32.891 INFO:teuthology.orchestra.run.vm09.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T15:46:32.891 INFO:teuthology.orchestra.run.vm09.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T15:46:32.892 INFO:teuthology.orchestra.run.vm09.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T15:46:32.892 INFO:teuthology.orchestra.run.vm09.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T15:46:32.892 INFO:teuthology.orchestra.run.vm09.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T15:46:32.892 INFO:teuthology.orchestra.run.vm09.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T15:46:32.968 INFO:teuthology.orchestra.run.vm09.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T15:46:32.969 INFO:teuthology.orchestra.run.vm09.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T15:46:33.199 INFO:teuthology.orchestra.run.vm09.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T15:46:33.200 INFO:teuthology.orchestra.run.vm09.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T15:46:33.202 INFO:teuthology.orchestra.run.vm09.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T15:46:33.211 INFO:teuthology.orchestra.run.vm09.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T15:46:33.237 INFO:teuthology.orchestra.run.vm09.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T15:46:33.399 INFO:teuthology.orchestra.run.vm09.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T15:46:33.399 INFO:teuthology.orchestra.run.vm09.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T15:46:33.494 INFO:teuthology.orchestra.run.vm09.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T15:46:33.494 INFO:teuthology.orchestra.run.vm09.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T15:46:33.494 INFO:teuthology.orchestra.run.vm09.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T15:46:33.640 
INFO:teuthology.orchestra.run.vm01.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T15:46:33.640 INFO:teuthology.orchestra.run.vm01.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T15:46:33.657 INFO:teuthology.orchestra.run.vm01.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T15:46:33.718 INFO:teuthology.orchestra.run.vm09.stdout:Get:66 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T15:46:33.718 INFO:teuthology.orchestra.run.vm09.stdout:Get:67 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T15:46:33.741 INFO:teuthology.orchestra.run.vm09.stdout:Get:68 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T15:46:33.765 INFO:teuthology.orchestra.run.vm01.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T15:46:33.779 INFO:teuthology.orchestra.run.vm01.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T15:46:33.781 INFO:teuthology.orchestra.run.vm01.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T15:46:33.841 INFO:teuthology.orchestra.run.vm09.stdout:Get:69 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T15:46:33.860 INFO:teuthology.orchestra.run.vm09.stdout:Get:70 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T15:46:33.861 INFO:teuthology.orchestra.run.vm09.stdout:Get:71 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T15:46:33.886 INFO:teuthology.orchestra.run.vm01.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T15:46:33.963 INFO:teuthology.orchestra.run.vm09.stdout:Get:72 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy 
[8625 kB] 2026-03-09T15:46:34.136 INFO:teuthology.orchestra.run.vm09.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T15:46:34.287 INFO:teuthology.orchestra.run.vm09.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T15:46:34.287 INFO:teuthology.orchestra.run.vm09.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T15:46:34.331 INFO:teuthology.orchestra.run.vm01.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T15:46:34.331 INFO:teuthology.orchestra.run.vm01.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T15:46:34.490 INFO:teuthology.orchestra.run.vm09.stdout:Get:76 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T15:46:34.491 INFO:teuthology.orchestra.run.vm09.stdout:Get:77 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T15:46:34.899 INFO:teuthology.orchestra.run.vm09.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T15:46:34.899 INFO:teuthology.orchestra.run.vm09.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T15:46:34.933 INFO:teuthology.orchestra.run.vm09.stdout:Get:80 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T15:46:34.950 INFO:teuthology.orchestra.run.vm09.stdout:Get:81 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T15:46:34.950 INFO:teuthology.orchestra.run.vm09.stdout:Get:82 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T15:46:34.950 INFO:teuthology.orchestra.run.vm09.stdout:Get:83 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T15:46:35.060 INFO:teuthology.orchestra.run.vm09.stdout:Get:84 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T15:46:35.112 INFO:teuthology.orchestra.run.vm09.stdout:Get:85 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T15:46:35.168 INFO:teuthology.orchestra.run.vm09.stdout:Get:86 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T15:46:35.168 INFO:teuthology.orchestra.run.vm09.stdout:Get:87 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T15:46:35.245 INFO:teuthology.orchestra.run.vm09.stdout:Get:88 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T15:46:35.418 INFO:teuthology.orchestra.run.vm09.stdout:Get:89 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 
3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T15:46:35.432 INFO:teuthology.orchestra.run.vm09.stdout:Get:90 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T15:46:35.450 INFO:teuthology.orchestra.run.vm09.stdout:Get:91 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T15:46:35.540 INFO:teuthology.orchestra.run.vm09.stdout:Get:92 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T15:46:35.618 INFO:teuthology.orchestra.run.vm09.stdout:Get:93 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T15:46:35.899 INFO:teuthology.orchestra.run.vm09.stdout:Get:94 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T15:46:35.899 INFO:teuthology.orchestra.run.vm09.stdout:Get:95 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T15:46:35.900 INFO:teuthology.orchestra.run.vm09.stdout:Get:96 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T15:46:35.900 INFO:teuthology.orchestra.run.vm09.stdout:Get:97 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T15:46:35.905 INFO:teuthology.orchestra.run.vm09.stdout:Get:98 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T15:46:35.906 INFO:teuthology.orchestra.run.vm09.stdout:Get:99 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T15:46:36.064 INFO:teuthology.orchestra.run.vm09.stdout:Get:100 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T15:46:36.065 INFO:teuthology.orchestra.run.vm09.stdout:Get:101 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T15:46:36.065 INFO:teuthology.orchestra.run.vm09.stdout:Get:102 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T15:46:36.194 INFO:teuthology.orchestra.run.vm09.stdout:Get:103 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T15:46:36.230 INFO:teuthology.orchestra.run.vm09.stdout:Get:104 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T15:46:36.953 INFO:teuthology.orchestra.run.vm09.stdout:Get:105 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T15:46:37.956 INFO:teuthology.orchestra.run.vm01.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-09T15:46:38.042 INFO:teuthology.orchestra.run.vm01.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-09T15:46:38.043 INFO:teuthology.orchestra.run.vm01.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-09T15:46:38.156 
INFO:teuthology.orchestra.run.vm09.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-09T15:46:38.157 INFO:teuthology.orchestra.run.vm09.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-09T15:46:38.157 INFO:teuthology.orchestra.run.vm09.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-09T15:46:39.164 INFO:teuthology.orchestra.run.vm01.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-09T15:46:39.286 INFO:teuthology.orchestra.run.vm09.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-09T15:46:39.476 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 178 MB in 10s (18.0 MB/s) 2026-03-09T15:46:39.586 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-09T15:46:39.603 INFO:teuthology.orchestra.run.vm09.stdout:Fetched 178 MB in 10s (17.7 MB/s) 2026-03-09T15:46:39.614 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 111717 files and directories currently installed.) 2026-03-09T15:46:39.616 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-09T15:46:39.618 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T15:46:39.619 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-09T15:46:39.638 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-09T15:46:39.641 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-09T15:46:39.642 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T15:46:39.646 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 111717 files and directories currently installed.)
2026-03-09T15:46:39.647 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-09T15:46:39.649 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T15:46:39.657 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-09T15:46:39.662 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-09T15:46:39.663 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T15:46:39.672 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-09T15:46:39.677 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-09T15:46:39.678 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T15:46:39.682 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-09T15:46:39.687 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T15:46:39.690 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T15:46:39.696 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-09T15:46:39.702 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-09T15:46:39.703 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T15:46:39.730 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-09T15:46:39.734 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-09T15:46:39.735 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T15:46:39.736 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T15:46:39.738 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T15:46:39.743 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T15:46:39.754 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-09T15:46:39.760 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T15:46:39.761 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T15:46:39.788 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-09T15:46:39.792 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-09T15:46:39.793 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-09T15:46:39.794 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T15:46:39.798 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T15:46:39.799 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T15:46:39.819 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:39.821 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-09T15:46:39.822 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T15:46:39.827 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T15:46:39.829 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T15:46:39.862 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-09T15:46:39.867 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-09T15:46:39.880 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T15:46:39.902 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:39.905 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T15:46:39.907 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:39.910 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T15:46:39.995 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libnbd0. 2026-03-09T15:46:39.999 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:39.999 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-09T15:46:40.000 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-09T15:46:40.002 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T15:46:40.021 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs2. 2026-03-09T15:46:40.021 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.021 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.091 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rados. 2026-03-09T15:46:40.095 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libnbd0. 2026-03-09T15:46:40.096 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-09T15:46:40.096 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.101 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-09T15:46:40.102 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-09T15:46:40.118 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-09T15:46:40.118 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libcephfs2. 2026-03-09T15:46:40.124 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.124 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.125 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:40.126 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.141 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cephfs. 2026-03-09T15:46:40.148 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.149 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.152 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-rados. 2026-03-09T15:46:40.157 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.158 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.168 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-09T15:46:40.175 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:40.176 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.178 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-09T15:46:40.183 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:40.185 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.197 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-09T15:46:40.199 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-cephfs. 2026-03-09T15:46:40.202 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-09T15:46:40.203 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T15:46:40.206 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.207 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T15:46:40.222 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-prettytable. 2026-03-09T15:46:40.226 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-09T15:46:40.229 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-09T15:46:40.229 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-09T15:46:40.231 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:40.232 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.246 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rbd. 2026-03-09T15:46:40.251 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.251 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-09T15:46:40.251 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.256 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-09T15:46:40.259 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T15:46:40.274 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-09T15:46:40.278 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-prettytable. 2026-03-09T15:46:40.280 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-09T15:46:40.281 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T15:46:40.283 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-09T15:46:40.285 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-09T15:46:40.303 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-rbd. 2026-03-09T15:46:40.304 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-09T15:46:40.309 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.310 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-09T15:46:40.310 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.311 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T15:46:40.360 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-09T15:46:40.362 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-09T15:46:40.366 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-09T15:46:40.367 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 
2026-03-09T15:46:40.368 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-09T15:46:40.369 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T15:46:40.392 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-09T15:46:40.395 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package lua5.1. 2026-03-09T15:46:40.397 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-09T15:46:40.397 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-09T15:46:40.398 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-09T15:46:40.399 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T15:46:40.418 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-09T15:46:40.419 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package lua-any. 2026-03-09T15:46:40.424 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-09T15:46:40.424 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T15:46:40.425 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-09T15:46:40.426 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-09T15:46:40.441 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package zip. 2026-03-09T15:46:40.446 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package lua5.1. 2026-03-09T15:46:40.446 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-09T15:46:40.448 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking zip (3.0-12build2) ... 2026-03-09T15:46:40.452 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-09T15:46:40.453 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-09T15:46:40.465 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package unzip. 2026-03-09T15:46:40.470 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-09T15:46:40.472 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-09T15:46:40.474 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package lua-any. 2026-03-09T15:46:40.481 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-09T15:46:40.482 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-09T15:46:40.494 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package luarocks. 2026-03-09T15:46:40.497 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package zip. 2026-03-09T15:46:40.501 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-09T15:46:40.502 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 
2026-03-09T15:46:40.503 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-09T15:46:40.504 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking zip (3.0-12build2) ... 2026-03-09T15:46:40.524 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package unzip. 2026-03-09T15:46:40.529 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-09T15:46:40.531 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-09T15:46:40.552 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package luarocks. 2026-03-09T15:46:40.555 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package librgw2. 2026-03-09T15:46:40.559 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-09T15:46:40.560 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-09T15:46:40.561 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.562 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.612 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package librgw2. 2026-03-09T15:46:40.619 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.620 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.685 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rgw. 2026-03-09T15:46:40.690 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.691 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.741 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-rgw. 2026-03-09T15:46:40.743 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-09T15:46:40.744 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-09T15:46:40.745 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T15:46:40.746 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.747 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.761 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libradosstriper1. 2026-03-09T15:46:40.765 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-09T15:46:40.766 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.767 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.771 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 
2026-03-09T15:46:40.772 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T15:46:40.798 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-common. 2026-03-09T15:46:40.804 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.805 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libradosstriper1. 2026-03-09T15:46:40.805 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.811 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.813 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:40.862 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-common. 2026-03-09T15:46:40.868 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:40.869 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:41.342 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-base. 2026-03-09T15:46:41.347 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:41.353 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:41.358 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-base. 2026-03-09T15:46:41.365 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:41.377 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:41.622 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-09T15:46:41.627 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-09T15:46:41.679 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-09T15:46:41.682 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-09T15:46:41.687 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-09T15:46:41.688 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-09T15:46:41.719 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cheroot. 2026-03-09T15:46:41.721 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-09T15:46:41.722 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-cheroot. 2026-03-09T15:46:41.727 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-09T15:46:41.727 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 
2026-03-09T15:46:41.728 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T15:46:41.768 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-09T15:46:41.770 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-09T15:46:41.773 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-09T15:46:41.774 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-09T15:46:41.776 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-09T15:46:41.776 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-09T15:46:41.811 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-09T15:46:41.811 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-09T15:46:41.813 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-09T15:46:41.816 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-09T15:46:41.817 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-09T15:46:41.824 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-09T15:46:41.843 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-09T15:46:41.848 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-09T15:46:41.853 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-09T15:46:41.858 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-09T15:46:41.861 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-09T15:46:41.863 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-09T15:46:41.895 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-tempora. 2026-03-09T15:46:41.898 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-09T15:46:41.900 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-09T15:46:41.902 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-tempora. 2026-03-09T15:46:41.908 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-09T15:46:41.934 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-09T15:46:41.963 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-portend. 2026-03-09T15:46:41.966 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-09T15:46:41.969 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-portend (3.0.0-1) ... 
2026-03-09T15:46:41.971 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-portend. 2026-03-09T15:46:41.978 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-09T15:46:41.979 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-09T15:46:42.007 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-09T15:46:42.007 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-09T15:46:42.011 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-09T15:46:42.012 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-09T15:46:42.016 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-09T15:46:42.016 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-09T15:46:42.051 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-09T15:46:42.057 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-09T15:46:42.057 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-09T15:46:42.059 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-09T15:46:42.064 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-09T15:46:42.069 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-09T15:46:42.110 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-natsort. 2026-03-09T15:46:42.112 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-natsort. 2026-03-09T15:46:42.117 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-09T15:46:42.117 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-09T15:46:42.119 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-09T15:46:42.120 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-09T15:46:42.144 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-logutils. 2026-03-09T15:46:42.147 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-logutils. 2026-03-09T15:46:42.151 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-09T15:46:42.152 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-09T15:46:42.152 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-09T15:46:42.154 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-09T15:46:42.173 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-mako. 2026-03-09T15:46:42.174 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-mako. 
2026-03-09T15:46:42.180 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-09T15:46:42.180 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T15:46:42.181 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-09T15:46:42.183 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T15:46:42.206 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-09T15:46:42.212 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-09T15:46:42.212 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-09T15:46:42.213 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-09T15:46:42.219 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-09T15:46:42.220 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-09T15:46:42.228 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-09T15:46:42.235 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-09T15:46:42.236 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-09T15:46:42.239 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-09T15:46:42.246 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-09T15:46:42.248 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-09T15:46:42.253 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-webob. 2026-03-09T15:46:42.259 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-09T15:46:42.260 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T15:46:42.266 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-webob. 2026-03-09T15:46:42.273 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-09T15:46:42.274 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T15:46:42.285 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-waitress. 2026-03-09T15:46:42.291 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-09T15:46:42.293 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T15:46:42.299 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-waitress. 2026-03-09T15:46:42.304 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 
2026-03-09T15:46:42.306 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T15:46:42.310 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-tempita. 2026-03-09T15:46:42.316 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-09T15:46:42.317 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T15:46:42.324 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-tempita. 2026-03-09T15:46:42.330 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-09T15:46:42.331 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T15:46:42.333 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-paste. 2026-03-09T15:46:42.340 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-09T15:46:42.340 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T15:46:42.349 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-paste. 2026-03-09T15:46:42.354 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-09T15:46:42.406 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T15:46:42.438 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-09T15:46:42.441 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-09T15:46:42.444 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-09T15:46:42.445 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T15:46:42.447 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-09T15:46:42.448 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T15:46:42.464 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-09T15:46:42.465 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-09T15:46:42.471 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-09T15:46:42.471 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-09T15:46:42.471 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-09T15:46:42.472 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-09T15:46:42.490 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-webtest. 2026-03-09T15:46:42.493 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-webtest. 2026-03-09T15:46:42.495 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 
2026-03-09T15:46:42.496 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-09T15:46:42.500 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-09T15:46:42.501 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-09T15:46:42.513 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pecan. 2026-03-09T15:46:42.519 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-09T15:46:42.520 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T15:46:42.524 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pecan. 2026-03-09T15:46:42.530 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-09T15:46:42.531 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T15:46:42.555 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-09T15:46:42.561 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-09T15:46:42.562 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T15:46:42.568 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-09T15:46:42.575 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-09T15:46:42.576 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T15:46:42.590 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-09T15:46:42.596 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:42.597 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:42.605 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-09T15:46:42.613 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:42.614 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:42.643 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-09T15:46:42.650 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:42.651 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:42.657 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-09T15:46:42.662 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:42.663 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T15:46:42.671 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr. 2026-03-09T15:46:42.678 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:42.679 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:42.683 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr. 2026-03-09T15:46:42.689 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:42.691 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:42.718 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mon. 2026-03-09T15:46:42.724 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:42.725 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:42.726 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mon. 2026-03-09T15:46:42.735 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:42.736 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:42.858 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T15:46:42.860 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T15:46:42.865 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-09T15:46:42.866 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T15:46:42.867 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-09T15:46:42.868 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T15:46:42.966 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-osd. 2026-03-09T15:46:42.969 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-osd. 2026-03-09T15:46:42.972 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:42.973 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:42.975 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:42.976 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.372 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph. 2026-03-09T15:46:43.378 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:43.412 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.427 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph. 
2026-03-09T15:46:43.434 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:43.435 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.438 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-fuse. 2026-03-09T15:46:43.444 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:43.445 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.453 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-fuse. 2026-03-09T15:46:43.460 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:43.461 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.485 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mds. 2026-03-09T15:46:43.491 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:43.492 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.499 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mds. 2026-03-09T15:46:43.505 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:43.506 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.559 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package cephadm. 2026-03-09T15:46:43.561 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package cephadm. 2026-03-09T15:46:43.567 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:43.567 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:43.568 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.568 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.589 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T15:46:43.593 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T15:46:43.595 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T15:46:43.596 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T15:46:43.600 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T15:46:43.601 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T15:46:43.627 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-09T15:46:43.631 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr-cephadm. 
2026-03-09T15:46:43.632 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:43.634 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.637 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:43.639 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.664 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T15:46:43.668 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T15:46:43.670 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-09T15:46:43.671 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-09T15:46:43.675 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-09T15:46:43.676 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-09T15:46:43.690 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-routes. 2026-03-09T15:46:43.696 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-routes. 2026-03-09T15:46:43.697 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T15:46:43.698 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T15:46:43.703 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T15:46:43.704 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T15:46:43.727 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-09T15:46:43.733 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-09T15:46:43.733 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:43.734 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:43.738 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:43.740 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:44.222 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-09T15:46:44.222 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-09T15:46:44.225 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-09T15:46:44.226 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T15:46:44.228 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 
2026-03-09T15:46:44.229 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T15:46:44.306 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-joblib. 2026-03-09T15:46:44.307 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-joblib. 2026-03-09T15:46:44.312 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-09T15:46:44.313 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T15:46:44.313 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-09T15:46:44.314 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T15:46:44.351 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-09T15:46:44.352 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-09T15:46:44.358 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-09T15:46:44.359 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-09T15:46:44.359 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-09T15:46:44.361 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-09T15:46:44.379 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-sklearn. 2026-03-09T15:46:44.381 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-sklearn. 2026-03-09T15:46:44.385 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T15:46:44.386 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T15:46:44.388 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T15:46:44.389 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T15:46:44.536 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-09T15:46:44.537 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-09T15:46:44.543 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:44.544 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:44.544 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:44.545 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:44.935 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cachetools. 2026-03-09T15:46:44.938 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-cachetools. 
2026-03-09T15:46:44.938 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-09T15:46:44.939 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T15:46:44.941 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-09T15:46:44.942 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T15:46:44.956 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-rsa. 2026-03-09T15:46:44.959 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rsa. 2026-03-09T15:46:44.962 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T15:46:44.963 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T15:46:44.966 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T15:46:44.967 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T15:46:44.983 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T15:46:44.989 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T15:46:44.990 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T15:46:44.990 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T15:46:44.996 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T15:46:44.997 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T15:46:45.010 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T15:46:45.016 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-09T15:46:45.017 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T15:46:45.022 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T15:46:45.029 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-09T15:46:45.029 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T15:46:45.036 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-websocket. 2026-03-09T15:46:45.042 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T15:46:45.043 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-09T15:46:45.049 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-websocket. 2026-03-09T15:46:45.055 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T15:46:45.056 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-websocket (1.2.3-1) ... 
2026-03-09T15:46:45.066 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-09T15:46:45.072 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T15:46:45.079 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-09T15:46:45.085 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T15:46:45.085 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T15:46:45.099 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T15:46:45.267 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-09T15:46:45.270 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-09T15:46:45.273 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:45.273 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:45.274 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:45.274 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:45.289 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-09T15:46:45.290 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-09T15:46:45.294 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-09T15:46:45.296 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-09T15:46:45.296 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T15:46:45.297 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T15:46:45.316 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-09T15:46:45.318 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-09T15:46:45.321 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T15:46:45.322 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T15:46:45.323 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T15:46:45.324 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T15:46:45.338 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package jq. 2026-03-09T15:46:45.342 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package jq. 2026-03-09T15:46:45.344 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T15:46:45.345 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 
2026-03-09T15:46:45.348 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T15:46:45.349 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-09T15:46:45.364 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package socat. 2026-03-09T15:46:45.367 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package socat. 2026-03-09T15:46:45.370 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-09T15:46:45.371 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-09T15:46:45.373 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-09T15:46:45.374 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-09T15:46:45.401 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package xmlstarlet. 2026-03-09T15:46:45.401 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package xmlstarlet. 2026-03-09T15:46:45.407 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-09T15:46:45.407 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-09T15:46:45.408 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-09T15:46:45.409 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-09T15:46:45.458 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-test. 2026-03-09T15:46:45.459 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-test. 2026-03-09T15:46:45.463 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:45.465 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:45.465 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:45.466 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:46.543 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-volume. 2026-03-09T15:46:46.548 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-volume. 2026-03-09T15:46:46.550 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:46.551 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:46.554 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T15:46:46.555 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:46.582 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-09T15:46:46.588 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libcephfs-dev. 
2026-03-09T15:46:46.588 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:46.589 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:46.594 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:46.594 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:46.606 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-09T15:46:46.612 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-09T15:46:46.612 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-09T15:46:46.613 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T15:46:46.617 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-09T15:46:46.619 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T15:46:46.639 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-09T15:46:46.644 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-09T15:46:46.646 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-09T15:46:46.647 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-09T15:46:46.652 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-09T15:46:46.652 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-09T15:46:46.667 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package nvme-cli. 2026-03-09T15:46:46.673 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-09T15:46:46.674 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package nvme-cli. 2026-03-09T15:46:46.674 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T15:46:46.678 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-09T15:46:46.679 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T15:46:46.715 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package pkg-config. 2026-03-09T15:46:46.721 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-09T15:46:46.722 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T15:46:46.723 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package pkg-config. 2026-03-09T15:46:46.730 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-09T15:46:46.731 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 
2026-03-09T15:46:46.741 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-09T15:46:46.747 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T15:46:46.748 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T15:46:46.750 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-09T15:46:46.756 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T15:46:46.757 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T15:46:46.801 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-09T15:46:46.802 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-09T15:46:46.808 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-09T15:46:46.808 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-09T15:46:46.809 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-09T15:46:46.809 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-09T15:46:46.825 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pastescript. 2026-03-09T15:46:46.828 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pastescript. 2026-03-09T15:46:46.831 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-09T15:46:46.832 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-09T15:46:46.834 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-09T15:46:46.836 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-09T15:46:46.892 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pluggy. 2026-03-09T15:46:46.897 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pluggy. 2026-03-09T15:46:46.898 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-09T15:46:46.899 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-09T15:46:46.903 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-09T15:46:46.904 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-09T15:46:46.920 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-psutil. 2026-03-09T15:46:46.921 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-psutil. 2026-03-09T15:46:46.925 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-09T15:46:46.926 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 
2026-03-09T15:46:46.927 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-09T15:46:46.928 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-09T15:46:46.949 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-py. 2026-03-09T15:46:46.950 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-py. 2026-03-09T15:46:46.955 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-09T15:46:46.956 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-09T15:46:46.956 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-09T15:46:46.957 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-09T15:46:46.981 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pygments. 2026-03-09T15:46:46.984 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pygments. 2026-03-09T15:46:46.987 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-09T15:46:46.988 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T15:46:46.990 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-09T15:46:46.991 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T15:46:47.058 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-09T15:46:47.060 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-09T15:46:47.064 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-09T15:46:47.065 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-09T15:46:47.067 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-09T15:46:47.068 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-09T15:46:47.083 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-toml. 2026-03-09T15:46:47.084 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-toml. 2026-03-09T15:46:47.089 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-09T15:46:47.090 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-09T15:46:47.090 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-09T15:46:47.091 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-09T15:46:47.107 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pytest. 2026-03-09T15:46:47.107 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pytest. 2026-03-09T15:46:47.113 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 
2026-03-09T15:46:47.113 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-09T15:46:47.114 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T15:46:47.114 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T15:46:47.144 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-simplejson. 2026-03-09T15:46:47.144 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-simplejson. 2026-03-09T15:46:47.149 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-09T15:46:47.149 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-09T15:46:47.150 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-09T15:46:47.150 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-09T15:46:47.169 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-09T15:46:47.169 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-09T15:46:47.173 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-09T15:46:47.174 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-09T15:46:47.175 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-09T15:46:47.176 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-09T15:46:47.305 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package radosgw. 2026-03-09T15:46:47.306 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package radosgw. 2026-03-09T15:46:47.311 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:47.312 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:47.312 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:47.314 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:47.545 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package rbd-fuse. 2026-03-09T15:46:47.550 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package rbd-fuse. 2026-03-09T15:46:47.551 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:47.552 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:47.554 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T15:46:47.555 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:47.571 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package smartmontools. 
2026-03-09T15:46:47.576 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package smartmontools. 2026-03-09T15:46:47.577 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-09T15:46:47.582 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-09T15:46:47.586 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T15:46:47.590 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T15:46:47.637 INFO:teuthology.orchestra.run.vm01.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T15:46:47.640 INFO:teuthology.orchestra.run.vm09.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T15:46:47.895 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-09T15:46:47.895 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-09T15:46:47.900 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-09T15:46:47.901 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-09T15:46:48.264 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-09T15:46:48.293 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-09T15:46:48.337 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T15:46:48.340 INFO:teuthology.orchestra.run.vm09.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T15:46:48.367 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T15:46:48.370 INFO:teuthology.orchestra.run.vm01.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T15:46:48.415 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-09T15:46:48.436 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-09T15:46:48.643 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-09T15:46:48.684 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-09T15:46:49.014 INFO:teuthology.orchestra.run.vm01.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-09T15:46:49.019 INFO:teuthology.orchestra.run.vm01.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-09T15:46:49.021 INFO:teuthology.orchestra.run.vm01.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T15:46:49.033 INFO:teuthology.orchestra.run.vm09.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-09T15:46:49.040 INFO:teuthology.orchestra.run.vm09.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-09T15:46:49.041 INFO:teuthology.orchestra.run.vm09.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:49.097 INFO:teuthology.orchestra.run.vm09.stdout:Adding system user cephadm....done 2026-03-09T15:46:49.100 INFO:teuthology.orchestra.run.vm01.stdout:Adding system user cephadm....done 2026-03-09T15:46:49.106 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T15:46:49.111 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T15:46:49.187 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-09T15:46:49.191 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-09T15:46:49.252 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T15:46:49.255 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-09T15:46:49.263 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T15:46:49.265 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-09T15:46:49.394 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-09T15:46:49.394 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-09T15:46:49.466 INFO:teuthology.orchestra.run.vm09.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T15:46:49.469 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-09T15:46:49.470 INFO:teuthology.orchestra.run.vm01.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T15:46:49.473 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-09T15:46:49.595 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T15:46:49.596 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T15:46:49.723 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-09T15:46:49.725 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-09T15:46:49.796 INFO:teuthology.orchestra.run.vm09.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-09T15:46:49.798 INFO:teuthology.orchestra.run.vm01.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-09T15:46:49.805 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-09T15:46:49.807 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-09T15:46:49.883 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-09T15:46:49.883 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-09T15:46:49.953 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:49.957 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T15:46:50.034 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T15:46:50.036 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T15:46:50.037 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-09T15:46:50.039 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-09T15:46:50.040 INFO:teuthology.orchestra.run.vm09.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T15:46:50.042 INFO:teuthology.orchestra.run.vm01.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T15:46:50.043 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T15:46:50.045 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T15:46:50.046 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T15:46:50.048 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T15:46:50.049 INFO:teuthology.orchestra.run.vm09.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-09T15:46:50.051 INFO:teuthology.orchestra.run.vm01.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-09T15:46:50.054 INFO:teuthology.orchestra.run.vm09.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-09T15:46:50.057 INFO:teuthology.orchestra.run.vm01.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-09T15:46:50.057 INFO:teuthology.orchestra.run.vm09.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-09T15:46:50.059 INFO:teuthology.orchestra.run.vm01.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-09T15:46:50.060 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T15:46:50.062 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T15:46:50.062 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-09T15:46:50.065 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-09T15:46:50.192 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-09T15:46:50.204 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-09T15:46:50.267 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T15:46:50.282 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T15:46:50.343 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-09T15:46:50.361 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-09T15:46:50.427 INFO:teuthology.orchestra.run.vm09.stdout:Setting up zip (3.0-12build2) ... 2026-03-09T15:46:50.429 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T15:46:50.455 INFO:teuthology.orchestra.run.vm01.stdout:Setting up zip (3.0-12build2) ... 
2026-03-09T15:46:50.458 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T15:46:50.735 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T15:46:50.758 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T15:46:50.810 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T15:46:50.813 INFO:teuthology.orchestra.run.vm09.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-09T15:46:50.815 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T15:46:50.831 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T15:46:50.834 INFO:teuthology.orchestra.run.vm01.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-09T15:46:50.836 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T15:46:50.910 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T15:46:50.935 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T15:46:51.057 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T15:46:51.084 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T15:46:51.191 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T15:46:51.227 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T15:46:51.281 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T15:46:51.331 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T15:46:51.405 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-09T15:46:51.460 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-09T15:46:51.480 INFO:teuthology.orchestra.run.vm09.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-09T15:46:51.483 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:51.532 INFO:teuthology.orchestra.run.vm01.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-09T15:46:51.535 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:51.579 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T15:46:51.634 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T15:46:52.165 INFO:teuthology.orchestra.run.vm09.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T15:46:52.192 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T15:46:52.197 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-09T15:46:52.228 INFO:teuthology.orchestra.run.vm01.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T15:46:52.251 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-09T15:46:52.257 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-09T15:46:52.278 INFO:teuthology.orchestra.run.vm09.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T15:46:52.280 INFO:teuthology.orchestra.run.vm09.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-09T15:46:52.283 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-09T15:46:52.334 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T15:46:52.337 INFO:teuthology.orchestra.run.vm01.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-09T15:46:52.340 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-09T15:46:52.356 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-09T15:46:52.411 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-09T15:46:52.423 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T15:46:52.426 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-09T15:46:52.479 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T15:46:52.482 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-09T15:46:52.508 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-09T15:46:52.555 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-09T15:46:52.580 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-09T15:46:52.626 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-09T15:46:52.710 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-09T15:46:52.712 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-09T15:46:52.780 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-09T15:46:52.785 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-09T15:46:52.850 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-09T15:46:52.858 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-09T15:46:52.972 INFO:teuthology.orchestra.run.vm09.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T15:46:52.972 INFO:teuthology.orchestra.run.vm01.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T15:46:52.975 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-09T15:46:52.975 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-09T15:46:53.055 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T15:46:53.057 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T15:46:53.059 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T15:46:53.061 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 
2026-03-09T15:46:53.131 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T15:46:53.136 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T15:46:53.215 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T15:46:53.226 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T15:46:53.311 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-09T15:46:53.324 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-09T15:46:53.386 INFO:teuthology.orchestra.run.vm01.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T15:46:53.388 INFO:teuthology.orchestra.run.vm01.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-09T15:46:53.391 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T15:46:53.394 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T15:46:53.401 INFO:teuthology.orchestra.run.vm09.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T15:46:53.404 INFO:teuthology.orchestra.run.vm09.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-09T15:46:53.407 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T15:46:53.409 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T15:46:53.569 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-09T15:46:53.569 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-09T15:46:53.688 INFO:teuthology.orchestra.run.vm01.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-09T15:46:53.688 INFO:teuthology.orchestra.run.vm09.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-09T15:46:53.691 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-09T15:46:53.691 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-09T15:46:53.766 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T15:46:53.767 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T15:46:53.769 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-09T15:46:53.771 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-09T15:46:53.849 INFO:teuthology.orchestra.run.vm09.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-09T15:46:53.852 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-09T15:46:53.853 INFO:teuthology.orchestra.run.vm01.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-09T15:46:53.855 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-09T15:46:53.931 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-09T15:46:53.936 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-09T15:46:54.065 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pastescript (2.0.2-4) ... 
2026-03-09T15:46:54.078 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-09T15:46:54.152 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T15:46:54.170 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T15:46:54.270 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T15:46:54.272 INFO:teuthology.orchestra.run.vm09.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:54.275 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:54.278 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T15:46:54.289 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T15:46:54.291 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:54.294 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:54.297 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T15:46:54.886 INFO:teuthology.orchestra.run.vm09.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-09T15:46:54.907 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:54.910 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:54.914 INFO:teuthology.orchestra.run.vm09.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:54.967 INFO:teuthology.orchestra.run.vm01.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-09T15:46:54.968 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:54.977 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.011 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.013 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.016 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.019 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.022 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.044 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T15:46:55.045 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T15:46:55.091 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T15:46:55.091 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 
2026-03-09T15:46:55.440 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.441 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.445 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.446 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.448 INFO:teuthology.orchestra.run.vm09.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.449 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.451 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.452 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.455 INFO:teuthology.orchestra.run.vm09.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.456 INFO:teuthology.orchestra.run.vm01.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.460 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.461 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.464 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.464 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.471 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.472 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:55.513 INFO:teuthology.orchestra.run.vm01.stdout:Adding group ceph....done 2026-03-09T15:46:55.513 INFO:teuthology.orchestra.run.vm09.stdout:Adding group ceph....done 2026-03-09T15:46:55.550 INFO:teuthology.orchestra.run.vm01.stdout:Adding system user ceph....done 2026-03-09T15:46:55.552 INFO:teuthology.orchestra.run.vm09.stdout:Adding system user ceph....done 2026-03-09T15:46:55.560 INFO:teuthology.orchestra.run.vm01.stdout:Setting system user ceph properties....done 2026-03-09T15:46:55.563 INFO:teuthology.orchestra.run.vm09.stdout:Setting system user ceph properties....done 2026-03-09T15:46:55.565 INFO:teuthology.orchestra.run.vm01.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-09T15:46:55.568 INFO:teuthology.orchestra.run.vm09.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-09T15:46:55.631 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-09T15:46:55.635 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-09T15:46:55.864 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-09T15:46:55.868 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 
2026-03-09T15:46:56.230 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:56.233 INFO:teuthology.orchestra.run.vm09.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:56.257 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:56.260 INFO:teuthology.orchestra.run.vm01.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:56.478 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T15:46:56.479 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T15:46:56.515 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T15:46:56.516 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T15:46:56.847 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:56.873 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:56.934 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-09T15:46:56.957 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-09T15:46:57.287 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:57.341 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:57.358 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T15:46:57.358 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T15:46:57.408 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T15:46:57.408 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T15:46:57.773 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:57.793 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:57.840 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T15:46:57.840 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 
2026-03-09T15:46:57.952 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T15:46:57.952 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T15:46:58.240 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:58.326 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T15:46:58.326 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T15:46:58.339 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:58.429 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T15:46:58.429 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T15:46:58.669 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:58.672 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:58.689 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:58.755 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T15:46:58.755 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T15:46:58.840 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:58.842 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:58.861 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:58.925 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T15:46:58.925 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T15:46:59.169 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:59.181 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:59.185 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:59.197 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:59.315 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T15:46:59.322 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 
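The "Created symlink ... ceph.target.wants/..." entries above are the Debian package postinst scripts enabling the per-daemon systemd targets (mon, mgr, mds, osd, radosgw, fuse) under ceph.target. As a rough illustration only (not teuthology code; the unit list below is simply inferred from the symlinks logged above), the resulting enablement state could be inspected on a node with something like:

    import subprocess

    # Unit list inferred from the symlinks in the log above; purely illustrative.
    CEPH_TARGETS = [
        "ceph.target", "ceph-mon.target", "ceph-mgr.target", "ceph-mds.target",
        "ceph-osd.target", "ceph-radosgw.target", "ceph-fuse.target",
    ]

    def enabled_state(unit):
        # `systemctl is-enabled` prints the state and may exit non-zero
        # (e.g. for disabled units), so avoid check=True here.
        result = subprocess.run(["systemctl", "is-enabled", unit],
                                capture_output=True, text=True)
        return (result.stdout or result.stderr).strip()

    for unit in CEPH_TARGETS:
        print(unit, enabled_state(unit))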
2026-03-09T15:46:59.324 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:59.337 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T15:46:59.338 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:59.341 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:59.354 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T15:46:59.416 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-09T15:46:59.481 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T15:46:59.489 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T15:46:59.506 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T15:46:59.594 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-09T15:46:59.895 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:46:59.895 INFO:teuthology.orchestra.run.vm09.stdout:Running kernel seems to be up-to-date. 2026-03-09T15:46:59.895 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:46:59.895 INFO:teuthology.orchestra.run.vm09.stdout:Services to be restarted: 2026-03-09T15:46:59.902 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart packagekit.service 2026-03-09T15:46:59.905 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:46:59.906 INFO:teuthology.orchestra.run.vm09.stdout:Service restarts being deferred: 2026-03-09T15:46:59.906 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T15:46:59.906 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart unattended-upgrades.service 2026-03-09T15:46:59.906 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:46:59.906 INFO:teuthology.orchestra.run.vm09.stdout:No containers need to be restarted. 2026-03-09T15:46:59.906 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:46:59.906 INFO:teuthology.orchestra.run.vm09.stdout:No user sessions are running outdated binaries. 2026-03-09T15:46:59.906 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:46:59.906 INFO:teuthology.orchestra.run.vm09.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T15:46:59.980 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:46:59.981 INFO:teuthology.orchestra.run.vm01.stdout:Running kernel seems to be up-to-date. 
2026-03-09T15:46:59.981 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:46:59.981 INFO:teuthology.orchestra.run.vm01.stdout:Services to be restarted: 2026-03-09T15:46:59.986 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart packagekit.service 2026-03-09T15:46:59.989 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:46:59.989 INFO:teuthology.orchestra.run.vm01.stdout:Service restarts being deferred: 2026-03-09T15:46:59.989 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T15:46:59.989 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart unattended-upgrades.service 2026-03-09T15:46:59.989 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:46:59.989 INFO:teuthology.orchestra.run.vm01.stdout:No containers need to be restarted. 2026-03-09T15:46:59.989 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:46:59.989 INFO:teuthology.orchestra.run.vm01.stdout:No user sessions are running outdated binaries. 2026-03-09T15:46:59.989 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:46:59.989 INFO:teuthology.orchestra.run.vm01.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T15:47:00.830 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T15:47:00.832 DEBUG:teuthology.orchestra.run.vm09:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-09T15:47:00.909 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T15:47:01.109 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T15:47:01.112 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-09T15:47:01.127 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T15:47:01.128 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T15:47:01.193 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T15:47:01.293 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T15:47:01.293 INFO:teuthology.orchestra.run.vm09.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T15:47:01.293 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T15:47:01.293 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T15:47:01.303 INFO:teuthology.orchestra.run.vm09.stdout:The following NEW packages will be installed: 2026-03-09T15:47:01.303 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath python3-xmltodict 2026-03-09T15:47:01.390 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T15:47:01.390 INFO:teuthology.orchestra.run.vm09.stdout:Need to get 34.3 kB of archives. 2026-03-09T15:47:01.390 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 146 kB of additional disk space will be used. 
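The DEBUG lines above record the exact command used for the suite's extra_system_packages (python3-xmltodict, python3-jmespath): a non-interactive apt-get that keeps existing configuration files via dpkg's --force-confdef/--force-confold options; the accompanying "W: --force-yes is deprecated" warning is apt pointing at its finer-grained --allow-* replacements. A minimal sketch of issuing the same style of command from Python (illustrative only, not the teuthology implementation; the deprecated --force-yes flag from the logged command is deliberately omitted here):

    import subprocess

    def apt_install(packages):
        # Non-interactive apt-get mirroring the command in the log:
        # keep existing config files on upgrade via --force-confdef/--force-confold.
        cmd = [
            "sudo", "DEBIAN_FRONTEND=noninteractive", "apt-get", "-y",
            "-o", "Dpkg::Options::=--force-confdef",
            "-o", "Dpkg::Options::=--force-confold",
            "install", *packages,
        ]
        subprocess.run(cmd, check=True)

    apt_install(["python3-xmltodict", "python3-jmespath"])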
2026-03-09T15:47:01.390 INFO:teuthology.orchestra.run.vm09.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-09T15:47:01.406 INFO:teuthology.orchestra.run.vm09.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-09T15:47:01.423 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T15:47:01.424 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T15:47:01.623 INFO:teuthology.orchestra.run.vm09.stdout:Fetched 34.3 kB in 0s (345 kB/s) 2026-03-09T15:47:01.667 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T15:47:01.668 INFO:teuthology.orchestra.run.vm01.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T15:47:01.669 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T15:47:01.669 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T15:47:01.685 INFO:teuthology.orchestra.run.vm01.stdout:The following NEW packages will be installed: 2026-03-09T15:47:01.685 INFO:teuthology.orchestra.run.vm01.stdout: python3-jmespath python3-xmltodict 2026-03-09T15:47:01.806 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-jmespath. 2026-03-09T15:47:01.840 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-09T15:47:01.843 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-09T15:47:01.844 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-09T15:47:01.863 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-09T15:47:01.870 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-09T15:47:01.870 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-09T15:47:01.900 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-09T15:47:01.970 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-09T15:47:02.153 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T15:47:02.153 INFO:teuthology.orchestra.run.vm01.stdout:Need to get 34.3 kB of archives. 2026-03-09T15:47:02.153 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 146 kB of additional disk space will be used. 
2026-03-09T15:47:02.153 INFO:teuthology.orchestra.run.vm01.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-09T15:47:02.319 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:47:02.319 INFO:teuthology.orchestra.run.vm09.stdout:Running kernel seems to be up-to-date. 2026-03-09T15:47:02.319 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:47:02.319 INFO:teuthology.orchestra.run.vm09.stdout:Services to be restarted: 2026-03-09T15:47:02.328 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart packagekit.service 2026-03-09T15:47:02.332 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:47:02.332 INFO:teuthology.orchestra.run.vm09.stdout:Service restarts being deferred: 2026-03-09T15:47:02.332 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T15:47:02.332 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart unattended-upgrades.service 2026-03-09T15:47:02.332 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:47:02.332 INFO:teuthology.orchestra.run.vm09.stdout:No containers need to be restarted. 2026-03-09T15:47:02.332 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:47:02.332 INFO:teuthology.orchestra.run.vm09.stdout:No user sessions are running outdated binaries. 2026-03-09T15:47:02.332 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:47:02.332 INFO:teuthology.orchestra.run.vm09.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T15:47:02.378 INFO:teuthology.orchestra.run.vm01.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-09T15:47:02.598 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 34.3 kB in 1s (49.1 kB/s) 2026-03-09T15:47:02.786 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jmespath. 2026-03-09T15:47:02.823 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-09T15:47:02.826 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-09T15:47:02.827 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-09T15:47:02.845 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-09T15:47:02.852 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-09T15:47:02.853 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-09T15:47:02.885 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-09T15:47:02.993 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jmespath (0.10.0-1) ... 
2026-03-09T15:47:03.381 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:47:03.382 INFO:teuthology.orchestra.run.vm01.stdout:Running kernel seems to be up-to-date. 2026-03-09T15:47:03.382 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:47:03.382 INFO:teuthology.orchestra.run.vm01.stdout:Services to be restarted: 2026-03-09T15:47:03.389 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart packagekit.service 2026-03-09T15:47:03.393 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:47:03.393 INFO:teuthology.orchestra.run.vm01.stdout:Service restarts being deferred: 2026-03-09T15:47:03.393 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T15:47:03.393 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart unattended-upgrades.service 2026-03-09T15:47:03.393 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:47:03.393 INFO:teuthology.orchestra.run.vm01.stdout:No containers need to be restarted. 2026-03-09T15:47:03.393 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:47:03.393 INFO:teuthology.orchestra.run.vm01.stdout:No user sessions are running outdated binaries. 2026-03-09T15:47:03.393 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:47:03.393 INFO:teuthology.orchestra.run.vm01.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T15:47:03.423 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T15:47:03.428 DEBUG:teuthology.parallel:result is None 2026-03-09T15:47:04.479 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T15:47:04.483 DEBUG:teuthology.parallel:result is None 2026-03-09T15:47:04.483 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:47:05.091 DEBUG:teuthology.orchestra.run.vm01:> dpkg-query -W -f '${Version}' ceph 2026-03-09T15:47:05.100 INFO:teuthology.orchestra.run.vm01.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-09T15:47:05.100 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T15:47:05.100 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-09T15:47:05.101 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:47:05.759 DEBUG:teuthology.orchestra.run.vm09:> dpkg-query -W -f '${Version}' ceph 2026-03-09T15:47:05.768 INFO:teuthology.orchestra.run.vm09.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-09T15:47:05.768 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T15:47:05.768 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-09T15:47:05.769 INFO:teuthology.task.install.util:Shipping valgrind.supp... 
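As an aside, the version check at 15:47:05 reduces to comparing the output of `dpkg-query -W -f '${Version}' ceph` against the short git hash derived from the build sha1. A minimal sketch of that comparison, run locally rather than through teuthology's remote runner (the helper names are illustrative, not teuthology's):

    import subprocess

    def installed_ceph_version():
        # Same query the install task issues on each node,
        # e.g. "19.2.3-678-ge911bdeb-1jammy" in this run.
        out = subprocess.run(["dpkg-query", "-W", "-f", "${Version}", "ceph"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def matches_build(version, sha1):
        # The version string in this run carries the short hash as "-g<first 8 chars of sha1>".
        return f"-g{sha1[:8]}" in version

    sha1 = "e911bdebe5c8faa3800735d1568fcdca65db60df"
    print(matches_build(installed_ceph_version(), sha1))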
2026-03-09T15:47:05.769 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:47:05.769 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T15:47:05.777 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:47:05.777 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T15:47:05.817 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-09T15:47:05.817 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:47:05.817 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T15:47:05.826 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T15:47:05.874 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:47:05.874 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T15:47:05.882 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T15:47:05.934 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-09T15:47:05.934 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:47:05.934 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T15:47:05.941 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T15:47:05.990 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:47:05.990 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T15:47:05.997 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T15:47:06.045 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-09T15:47:06.045 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:47:06.045 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T15:47:06.053 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T15:47:06.102 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:47:06.102 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T15:47:06.109 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T15:47:06.157 INFO:teuthology.run_tasks:Running task cephadm... 
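The "Shipping ..." steps above write each helper onto the remote host by piping it into `sudo dd of=<path>` and then marking it executable. A rough stand-in for that pattern over plain ssh (host and paths are placeholders; teuthology's own remote runner is not used here):

    import subprocess

    def ship_file(host, local_path, remote_path, mode="a=rx"):
        # Stream the local file into "sudo dd of=<remote_path>" on the target,
        # then chmod it, mirroring the dd/chmod pair seen in the log.
        with open(local_path, "rb") as src:
            subprocess.run(["ssh", host, f"sudo dd of={remote_path}"],
                           stdin=src, check=True)
        subprocess.run(["ssh", host, f"sudo chmod {mode} -- {remote_path}"], check=True)

    # ship_file("vm01.local", "./daemon-helper", "/usr/bin/daemon-helper")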
2026-03-09T15:47:06.204 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'client': {'debug ms': 1}, 'global': {'mon election default strategy': 1, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon warn on pool no app': False}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd class default list': '*', 'osd class load list': '*', 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'reached quota', 'but it is still running', 'overall HEALTH_', '\\(POOL_FULL\\)', '\\(SMALLER_PGP_NUM\\)', '\\(CACHE_POOL_NO_HIT_SET\\)', '\\(CACHE_POOL_NEAR_FULL\\)', '\\(POOL_APP_NOT_ENABLED\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', 'CEPHADM_STRAY_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'cephadm-package'} 2026-03-09T15:47:06.204 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:47:06.204 INFO:tasks.cephadm:Cluster fsid is 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:47:06.204 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-09T15:47:06.204 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.101', 'mon.c': '[v2:192.168.123.101:3301,v1:192.168.123.101:6790]', 'mon.b': '192.168.123.109'} 2026-03-09T15:47:06.204 INFO:tasks.cephadm:First mon is mon.a on vm01 2026-03-09T15:47:06.204 INFO:tasks.cephadm:First mgr is y 2026-03-09T15:47:06.204 INFO:tasks.cephadm:Normalizing hostnames... 2026-03-09T15:47:06.204 DEBUG:teuthology.orchestra.run.vm01:> sudo hostname $(hostname -s) 2026-03-09T15:47:06.212 DEBUG:teuthology.orchestra.run.vm09:> sudo hostname $(hostname -s) 2026-03-09T15:47:06.224 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-09T15:47:06.224 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T15:47:06.256 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T15:47:06.348 INFO:teuthology.orchestra.run.vm01.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T15:47:06.360 INFO:teuthology.orchestra.run.vm09.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
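The image pull above is issued once per host and the commands run concurrently. A minimal approximation with a thread pool; the ssh wrapper is a stand-in for teuthology.orchestra, while the image and hostnames are taken from the log:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    HOSTS = ["vm01.local", "vm09.local"]

    def pull(host):
        # "cephadm pull" fetches the image and prints a short JSON summary on stdout.
        res = subprocess.run(["ssh", host, f"sudo cephadm --image {IMAGE} pull"],
                             capture_output=True, text=True, check=True)
        return res.stdout

    with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
        for host, out in zip(HOSTS, pool.map(pull, HOSTS)):
            print(host, out.splitlines()[0] if out else "")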
2026-03-09T15:47:46.467 INFO:teuthology.orchestra.run.vm01.stdout:{ 2026-03-09T15:47:46.467 INFO:teuthology.orchestra.run.vm01.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T15:47:46.468 INFO:teuthology.orchestra.run.vm01.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-09T15:47:46.468 INFO:teuthology.orchestra.run.vm01.stdout: "repo_digests": [ 2026-03-09T15:47:46.468 INFO:teuthology.orchestra.run.vm01.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T15:47:46.468 INFO:teuthology.orchestra.run.vm01.stdout: ] 2026-03-09T15:47:46.468 INFO:teuthology.orchestra.run.vm01.stdout:} 2026-03-09T15:47:58.678 INFO:teuthology.orchestra.run.vm09.stdout:{ 2026-03-09T15:47:58.678 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T15:47:58.679 INFO:teuthology.orchestra.run.vm09.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-09T15:47:58.679 INFO:teuthology.orchestra.run.vm09.stdout: "repo_digests": [ 2026-03-09T15:47:58.679 INFO:teuthology.orchestra.run.vm09.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T15:47:58.679 INFO:teuthology.orchestra.run.vm09.stdout: ] 2026-03-09T15:47:58.679 INFO:teuthology.orchestra.run.vm09.stdout:} 2026-03-09T15:47:58.696 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /etc/ceph 2026-03-09T15:47:58.705 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph 2026-03-09T15:47:58.714 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 777 /etc/ceph 2026-03-09T15:47:58.755 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /etc/ceph 2026-03-09T15:47:58.764 INFO:tasks.cephadm:Writing seed config... 
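Both hosts report the same ceph_version, image_id and repo digest in the JSON above; a quick consistency check over those per-host reports could look like this (the sample strings are trimmed copies of the output above):

    import json

    reports = [
        '{"image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c"}',
        '{"image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c"}',
    ]
    ids = {json.loads(r)["image_id"] for r in reports}
    assert len(ids) == 1, f"hosts pulled different images: {ids}"
    print("all hosts on image", ids.pop())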
2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [client] debug ms = 1 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [global] mon election default strategy = 1 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [global] ms type = async 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [mon] mon warn on pool no app = False 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [osd] osd class default list = * 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [osd] osd class load list = * 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-09T15:47:58.765 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True 2026-03-09T15:47:58.766 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:47:58.766 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-09T15:47:58.803 DEBUG:tasks.cephadm:Final config: [global] # make logging friendly to teuthology log_to_file = true log_to_stderr = false log to journald = false mon cluster log to file = true mon cluster log file level = debug mon clock drift allowed = 1.000 # replicate across OSDs, not hosts osd crush chooseleaf type = 0 #osd pool default size = 2 osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd # enable some debugging auth debug = true ms die on old message = true ms die on bug = true debug asserts on shutdown = true # adjust warnings mon max pg per osd = 10000# >= luminous mon pg warn max object skew = 0 mon osd allow primary affinity = true mon osd allow pg remap = true mon warn on legacy crush tunables = false mon warn on crush straw calc version zero = false mon warn on no sortbitwise = false mon warn on osd down out interval zero = false mon warn on too few osds = false mon_warn_on_pool_pg_num_not_power_of_two = false # disable pg_autoscaler by default for new pools osd_pool_default_pg_autoscale_mode = off # tests delete pools mon allow pool delete = true fsid = 397fadc0-1bcf-11f1-8481-edc1430c2c03 mon election default strategy = 1 ms type = async [osd] osd scrub load threshold = 5.0 osd scrub max interval = 600 osd mclock profile = high_recovery_ops osd recover clone overlap = true osd recovery max chunk = 1048576 osd deep scrub update digest min age = 30 osd map max advance = 10 osd memory target autotune = true # debugging osd debug shutdown = true osd debug op order = true osd debug verify stray on activate = true osd debug pg log writeout = true osd debug verify cached snaps = true osd debug verify missing on start = true osd debug misdirected ops = true osd op queue = debug_random osd op queue cut off = debug_random osd shutdown pgref assert = True bdev debug aio = true osd sloppy crc = true debug ms = 1 debug osd = 20 osd class default list = * osd class load list = * osd mclock iops capacity threshold hdd = 49000 [mgr] mon reweight min pgs per osd = 4 mon reweight min bytes per osd = 10 mgr/telemetry/nag = false debug 
mgr = 20 debug ms = 1 [mon] mon data avail warn = 5 mon mgr mkfs grace = 240 mon reweight min pgs per osd = 4 mon osd reporter subtree level = osd mon osd prime pg temp = true mon reweight min bytes per osd = 10 # rotate auth tickets quickly to exercise renewal paths auth mon ticket ttl = 660# 11m auth service ticket ttl = 240# 4m # don't complain about global id reclaim mon_warn_on_insecure_global_id_reclaim = false mon_warn_on_insecure_global_id_reclaim_allowed = false debug mon = 20 debug ms = 1 debug paxos = 20 mon warn on pool no app = False [client.rgw] rgw cache enabled = true rgw enable ops log = true rgw enable usage log = true [client] debug ms = 1 2026-03-09T15:47:58.803 DEBUG:teuthology.orchestra.run.vm01:mon.a> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.a.service 2026-03-09T15:47:58.844 DEBUG:teuthology.orchestra.run.vm01:mgr.y> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.y.service 2026-03-09T15:47:58.888 INFO:tasks.cephadm:Bootstrapping... 2026-03-09T15:47:58.888 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.101 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-09T15:47:59.023 INFO:teuthology.orchestra.run.vm01.stdout:-------------------------------------------------------------------------------- 2026-03-09T15:47:59.023 INFO:teuthology.orchestra.run.vm01.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '397fadc0-1bcf-11f1-8481-edc1430c2c03', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.101', '--skip-admin-label'] 2026-03-09T15:47:59.023 INFO:teuthology.orchestra.run.vm01.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-09T15:47:59.023 INFO:teuthology.orchestra.run.vm01.stdout:Verifying podman|docker is present... 2026-03-09T15:47:59.023 INFO:teuthology.orchestra.run.vm01.stdout:Verifying lvm2 is present... 2026-03-09T15:47:59.023 INFO:teuthology.orchestra.run.vm01.stdout:Verifying time synchronization is in place... 
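The "override:" lines and the "Final config" dump above are the job's per-section conf overrides merged into a teuthology-friendly template and written to /home/ubuntu/cephtest/seed.ceph.conf. A much-simplified sketch of turning such an override mapping into INI text (this is not the cephadm task's actual code path):

    import configparser, io

    overrides = {
        "global": {"mon election default strategy": "1", "ms type": "async"},
        "mon":    {"debug mon": "20", "debug ms": "1", "debug paxos": "20"},
        "osd":    {"debug osd": "20", "osd shutdown pgref assert": "true"},
    }

    conf = configparser.ConfigParser()
    conf.read_dict(overrides)

    buf = io.StringIO()
    conf.write(buf)   # renders "[section]" headers and "key = value" lines
    print(buf.getvalue())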
2026-03-09T15:47:59.027 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T15:47:59.027 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T15:47:59.030 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T15:47:59.030 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive 2026-03-09T15:47:59.032 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-09T15:47:59.032 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-09T15:47:59.035 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-09T15:47:59.035 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive 2026-03-09T15:47:59.037 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-09T15:47:59.037 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout masked 2026-03-09T15:47:59.039 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-09T15:47:59.039 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive 2026-03-09T15:47:59.042 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-09T15:47:59.042 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-09T15:47:59.044 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-09T15:47:59.044 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive 2026-03-09T15:47:59.047 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled 2026-03-09T15:47:59.050 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active 2026-03-09T15:47:59.050 INFO:teuthology.orchestra.run.vm01.stdout:Unit ntp.service is enabled and running 2026-03-09T15:47:59.050 INFO:teuthology.orchestra.run.vm01.stdout:Repeating the final host check... 
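The block of systemctl probes above is cephadm walking a list of candidate time-sync units and accepting the first that is both enabled and active (ntp.service on this host). Roughly the same loop in standalone form; the unit list is copied from the log, the helper itself is illustrative:

    import subprocess

    CANDIDATES = ["chrony.service", "chronyd.service",
                  "systemd-timesyncd.service", "ntpd.service", "ntp.service"]

    def unit_state(verb, unit):
        # systemctl exits non-zero for disabled/inactive units; capture instead of raising.
        res = subprocess.run(["systemctl", verb, unit], capture_output=True, text=True)
        return res.stdout.strip()

    def find_time_sync_unit():
        for unit in CANDIDATES:
            if (unit_state("is-enabled", unit) == "enabled"
                    and unit_state("is-active", unit) == "active"):
                return unit
        return None

    print(find_time_sync_unit())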
2026-03-09T15:47:59.050 INFO:teuthology.orchestra.run.vm01.stdout:docker (/usr/bin/docker) is present 2026-03-09T15:47:59.050 INFO:teuthology.orchestra.run.vm01.stdout:systemctl is present 2026-03-09T15:47:59.050 INFO:teuthology.orchestra.run.vm01.stdout:lvcreate is present 2026-03-09T15:47:59.052 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T15:47:59.052 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T15:47:59.054 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T15:47:59.054 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive 2026-03-09T15:47:59.057 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-09T15:47:59.057 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-09T15:47:59.059 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-09T15:47:59.059 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive 2026-03-09T15:47:59.061 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-09T15:47:59.061 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout masked 2026-03-09T15:47:59.063 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-09T15:47:59.063 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive 2026-03-09T15:47:59.065 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-09T15:47:59.065 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-09T15:47:59.067 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-09T15:47:59.067 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive 2026-03-09T15:47:59.070 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled 2026-03-09T15:47:59.074 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active 2026-03-09T15:47:59.074 INFO:teuthology.orchestra.run.vm01.stdout:Unit ntp.service is enabled and running 2026-03-09T15:47:59.074 INFO:teuthology.orchestra.run.vm01.stdout:Host looks OK 2026-03-09T15:47:59.074 INFO:teuthology.orchestra.run.vm01.stdout:Cluster fsid: 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:47:59.074 INFO:teuthology.orchestra.run.vm01.stdout:Acquiring lock 139951483062832 on /run/cephadm/397fadc0-1bcf-11f1-8481-edc1430c2c03.lock 2026-03-09T15:47:59.074 INFO:teuthology.orchestra.run.vm01.stdout:Lock 139951483062832 acquired on /run/cephadm/397fadc0-1bcf-11f1-8481-edc1430c2c03.lock 2026-03-09T15:47:59.074 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 3300 ... 2026-03-09T15:47:59.074 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 6789 ... 
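The bootstrap invocation issued at 15:47:58 is assembled from the job's fsid, mon/mgr ids and seed paths. A small builder reproducing that argv; every flag below appears verbatim in the command shown above:

    def bootstrap_cmd(image, fsid, mon_id, mgr_id, mon_ip,
                      seed_conf="/home/ubuntu/cephtest/seed.ceph.conf"):
        # Mirrors the flags shown in the log; returns an argv list for subprocess.
        return [
            "sudo", "cephadm", "--image", image, "-v", "bootstrap",
            "--fsid", fsid,
            "--config", seed_conf,
            "--output-config", "/etc/ceph/ceph.conf",
            "--output-keyring", "/etc/ceph/ceph.client.admin.keyring",
            "--output-pub-ssh-key", "/home/ubuntu/cephtest/ceph.pub",
            "--mon-id", mon_id, "--mgr-id", mgr_id,
            "--orphan-initial-daemons", "--skip-monitoring-stack",
            "--mon-ip", mon_ip, "--skip-admin-label",
        ]

    print(" ".join(bootstrap_cmd(
        "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
        "397fadc0-1bcf-11f1-8481-edc1430c2c03", "a", "y", "192.168.123.101")))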
2026-03-09T15:47:59.074 INFO:teuthology.orchestra.run.vm01.stdout:Base mon IP(s) is [192.168.123.101:3300, 192.168.123.101:6789], mon addrv is [v2:192.168.123.101:3300,v1:192.168.123.101:6789] 2026-03-09T15:47:59.075 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.101 metric 100 2026-03-09T15:47:59.075 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 2026-03-09T15:47:59.075 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.101 metric 100 2026-03-09T15:47:59.075 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.101 metric 100 2026-03-09T15:47:59.077 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-09T15:47:59.077 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:1/64 scope link 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24` 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24` 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.1/32` 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.1/32` 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32'] 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-09T15:47:59.079 INFO:teuthology.orchestra.run.vm01.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
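The public-network inference above is, at its core, a membership test of the mon IP against the networks reported by `ip route`; the Python stdlib expresses that check directly (addresses copied from the log; cephadm's own inference is more involved, which is why the /32 route entries also show up in its list):

    import ipaddress

    mon_ip = ipaddress.ip_address("192.168.123.101")
    routes = ["192.168.123.0/24", "172.17.0.0/16", "192.168.123.1/32"]

    matches = [net for net in routes
               if mon_ip in ipaddress.ip_network(net, strict=False)]
    print(matches)   # the candidate mon public network(s)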
2026-03-09T15:48:00.040 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph 2026-03-09T15:48:00.040 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:48:00.040 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:48:00.040 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:48:00.231 INFO:teuthology.orchestra.run.vm01.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T15:48:00.231 INFO:teuthology.orchestra.run.vm01.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T15:48:00.231 INFO:teuthology.orchestra.run.vm01.stdout:Extracting ceph user uid/gid from container image... 2026-03-09T15:48:00.395 INFO:teuthology.orchestra.run.vm01.stdout:stat: stdout 167 167 2026-03-09T15:48:00.395 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial keys... 2026-03-09T15:48:00.518 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQCw665pQRjgHBAAHDJtHTD0EFbEEgyLhMLDiA== 2026-03-09T15:48:00.658 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQCw665pcy5IJBAAzlzm5LJS/loLah9KIQIR6Q== 2026-03-09T15:48:00.765 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQCw665pNHvZKxAAC0SH02dAme9FN0rJqxH/Cw== 2026-03-09T15:48:00.765 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial monmap... 2026-03-09T15:48:00.871 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T15:48:00.871 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-09T15:48:00.871 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:00.871 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T15:48:00.871 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool for a [v2:192.168.123.101:3300,v1:192.168.123.101:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T15:48:00.871 INFO:teuthology.orchestra.run.vm01.stdout:setting min_mon_release = quincy 2026-03-09T15:48:00.871 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: set fsid to 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:00.871 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T15:48:00.871 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:00.872 INFO:teuthology.orchestra.run.vm01.stdout:Creating mon... 
2026-03-09T15:48:01.004 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.950+0000 7f8c3f004d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.950+0000 7f8c3f004d80 1 imported monmap: 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr epoch 0 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy) 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.950+0000 7f8c3f004d80 0 /usr/bin/ceph-mon: set fsid to 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Git sha 0 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: DB SUMMARY 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: DB Session ID: YLFQYSQDYW1LSQMSS5YC 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files: 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: 
stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.create_if_missing: 1 2026-03-09T15:48:01.005 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.env: 0x561a6ecf4dc0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.info_log: 0x561aa89b2da0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.statistics: (nil) 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.use_fsync: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: 
Options.use_direct_reads: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.db_log_dir: 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.wal_dir: 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T15:48:01.007 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.write_buffer_manager: 0x561aa89a95e0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: 
Options.wal_recovery_mode: 2 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.unordered_write: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.row_cache: None 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.wal_filter: None 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.wal_compression: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T15:48:01.008 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: 
Options.max_open_files: -1 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Compression algorithms supported: 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: kZSTD supported: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T15:48:01.008 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.954+0000 7f8c3f004d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 2026-03-09T15:48:01.009 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.merge_operator: 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compaction_filter: None 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561aa89a5520) 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr index_type: 0 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1 2026-03-09T15:48:01.009 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000 2026-03-09T15:48:01.010 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr checksum: 4 2026-03-09T15:48:01.010 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0 2026-03-09T15:48:01.010 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x561aa89cb350 2026-03-09T15:48:01.010 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_cache_options: 2026-03-09T15:48:01.013 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil) 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil) 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_size: 4096 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr format_version: 5 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_align: 0 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compression: NoCompression 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.num_levels: 7 2026-03-09T15:48:01.013 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T15:48:01.014 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T15:48:01.014 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 
rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T15:48:01.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 066ff4c9-ecb1-4e6d-a714-f8b15163b3e0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.958+0000 7f8c3f004d80 4 rocksdb: [db/version_set.cc:5047] 
Creating manifest 5 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.962+0000 7f8c3f004d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561aa89cce00 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.962+0000 7f8c3f004d80 4 rocksdb: DB pointer 0x561aa8ab0000 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.962+0000 7f8c3678e640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.962+0000 7f8c3678e640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr ** DB Stats ** 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-09T15:48:01.015 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T15:48:01.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T15:48:01.016 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x561aa89cb350#8 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0 2026-03-09T15:48:01.016 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%) 2026-03-09T15:48:01.016 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.016 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] ** 2026-03-09T15:48:01.016 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T15:48:01.016 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.962+0000 7f8c3f004d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work 2026-03-09T15:48:01.016 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.962+0000 
7f8c3f004d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete 2026-03-09T15:48:01.016 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T15:48:00.962+0000 7f8c3f004d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a 2026-03-09T15:48:01.016 INFO:teuthology.orchestra.run.vm01.stdout:create mon.a on 2026-03-09T15:48:01.174 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target. 2026-03-09T15:48:01.345 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-09T15:48:01.555 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03.target → /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03.target. 2026-03-09T15:48:01.555 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03.target → /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03.target. 2026-03-09T15:48:01.730 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.a 2026-03-09T15:48:01.730 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.a.service: Unit ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.a.service not loaded. 2026-03-09T15:48:01.908 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03.target.wants/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.a.service → /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service. 2026-03-09T15:48:01.915 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-09T15:48:01.915 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T15:48:01.915 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon to start... 2026-03-09T15:48:01.915 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon... 
2026-03-09T15:48:02.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:02 vm01 bash[20260]: cluster 2026-03-09T15:48:02.064502+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout cluster: 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout id: 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout services: 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.241903s) 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout data: 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pgs: 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.066+0000 7fbbf0f8f640 1 Processor -- start 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.066+0000 7fbbf0f8f640 1 -- start start 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.066+0000 7fbbf0f8f640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec074550 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:02.357 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.066+0000 7fbbf0f8f640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fbbec074b20 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.066+0000 7fbbeb7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec074550 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.066+0000 7fbbeb7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec074550 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41054/0 (socket says 192.168.123.101:41054) 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 
2026-03-09T15:48:02.066+0000 7fbbeb7fe640 1 -- 192.168.123.101:0/433761219 learned_addr learned my addr 192.168.123.101:0/433761219 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.066+0000 7fbbeb7fe640 1 -- 192.168.123.101:0/433761219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 msgr2=0x7fbbec074550 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_bulk peer close file descriptor 12 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.066+0000 7fbbeb7fe640 1 -- 192.168.123.101:0/433761219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 msgr2=0x7fbbec074550 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_until read failed 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.066+0000 7fbbeb7fe640 1 --2- 192.168.123.101:0/433761219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec074550 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.066+0000 7fbbeb7fe640 1 --2- 192.168.123.101:0/433761219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec074550 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbeb7fe640 1 --2- 192.168.123.101:0/433761219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec074550 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbeb7fe640 1 -- 192.168.123.101:0/433761219 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbbec074ca0 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbeb7fe640 1 --2- 192.168.123.101:0/433761219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec074550 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7fbbdc00bb40 tx=0x7fbbdc033a40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=7665c1914b792a40 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbea7fc640 1 -- 192.168.123.101:0/433761219 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbbdc03d030 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbea7fc640 1 -- 192.168.123.101:0/433761219 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fbbdc004170 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbea7fc640 1 -- 192.168.123.101:0/433761219 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 
205+0+0 (secure 0 0 0) 0x7fbbdc004490 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/433761219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 msgr2=0x7fbbec074550 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbf0f8f640 1 --2- 192.168.123.101:0/433761219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec074550 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7fbbdc00bb40 tx=0x7fbbdc033a40 comp rx=0 tx=0).stop 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/433761219 shutdown_connections 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbf0f8f640 1 --2- 192.168.123.101:0/433761219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec074550 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/433761219 >> 192.168.123.101:0/433761219 conn(0x7fbbec06fa60 msgr2=0x7fbbec071ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/433761219 shutdown_connections 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/433761219 wait complete. 
2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbf0f8f640 1 Processor -- start 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.270+0000 7fbbf0f8f640 1 -- start start 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbf0f8f640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec089860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbf0f8f640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fbbec07b200 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbeb7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec089860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbeb7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec089860 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41074/0 (socket says 192.168.123.101:41074) 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbeb7fe640 1 -- 192.168.123.101:0/4042852829 learned_addr learned my addr 192.168.123.101:0/4042852829 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbeb7fe640 1 -- 192.168.123.101:0/4042852829 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbbec089da0 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbeb7fe640 1 --2- 192.168.123.101:0/4042852829 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec089860 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7fbbdc002410 tx=0x7fbbdc004830 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbe8ff9640 1 -- 192.168.123.101:0/4042852829 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbbdc039980 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbe8ff9640 1 -- 192.168.123.101:0/4042852829 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fbbdc039b40 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/4042852829 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fbbec08a030 con 0x7fbbec074150 2026-03-09T15:48:02.358 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbe8ff9640 1 -- 192.168.123.101:0/4042852829 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbbdc039e60 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/4042852829 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fbbec0865f0 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbe8ff9640 1 -- 192.168.123.101:0/4042852829 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7fbbdc04b070 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbe8ff9640 1 -- 192.168.123.101:0/4042852829 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fbbdc03e070 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.274+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/4042852829 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fbbec0745d0 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.278+0000 7fbbe8ff9640 1 -- 192.168.123.101:0/4042852829 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7fbbdc043e00 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.310+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/4042852829 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "status"} v 0) -- 0x7fbbec07fad0 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.310+0000 7fbbe8ff9640 1 -- 192.168.123.101:0/4042852829 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "status"}]=0 v0) ==== 54+0+317 (secure 0 0 0) 0x7fbbdc004140 con 0x7fbbec074150 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.310+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/4042852829 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 msgr2=0x7fbbec089860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.310+0000 7fbbf0f8f640 1 --2- 192.168.123.101:0/4042852829 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec089860 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7fbbdc002410 tx=0x7fbbdc004830 comp rx=0 tx=0).stop 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.310+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/4042852829 shutdown_connections 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.310+0000 7fbbf0f8f640 1 --2- 192.168.123.101:0/4042852829 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbbec074150 0x7fbbec089860 unknown :-1 s=CLOSED pgs=2 cs=0 
l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.310+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/4042852829 >> 192.168.123.101:0/4042852829 conn(0x7fbbec06fa60 msgr2=0x7fbbec071ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.310+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/4042852829 shutdown_connections 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.310+0000 7fbbf0f8f640 1 -- 192.168.123.101:0/4042852829 wait complete. 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:mon is available 2026-03-09T15:48:02.358 INFO:teuthology.orchestra.run.vm01.stdout:Assimilating anything we can from ceph.conf... 2026-03-09T15:48:02.567 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:02.567 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T15:48:02.567 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:02.567 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T15:48:02.567 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789] 2026-03-09T15:48:02.567 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T15:48:02.567 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T15:48:02.567 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T15:48:02.567 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T15:48:02.567 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 Processor -- start 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 -- start start 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac07abb0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fb1ac07b0f0 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b1fee640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac07abb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b1fee640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac07abb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41076/0 (socket says 192.168.123.101:41076) 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b1fee640 1 -- 192.168.123.101:0/2096022334 learned_addr learned my addr 192.168.123.101:0/2096022334 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b1fee640 1 -- 192.168.123.101:0/2096022334 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb1ac07b270 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b1fee640 1 --2- 192.168.123.101:0/2096022334 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac07abb0 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fb1a0009920 tx=0x7fb1a002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=243c3b09cb9f95b6 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b0fec640 1 -- 192.168.123.101:0/2096022334 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb1a003c070 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b0fec640 1 -- 192.168.123.101:0/2096022334 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fb1a002fae0 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 -- 192.168.123.101:0/2096022334 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 msgr2=0x7fb1ac07abb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 --2- 192.168.123.101:0/2096022334 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac07abb0 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fb1a0009920 tx=0x7fb1a002ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 -- 192.168.123.101:0/2096022334 shutdown_connections 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 --2- 192.168.123.101:0/2096022334 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac07abb0 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 -- 192.168.123.101:0/2096022334 >> 192.168.123.101:0/2096022334 conn(0x7fb1ac101d60 msgr2=0x7fb1ac104180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 -- 192.168.123.101:0/2096022334 shutdown_connections 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 -- 192.168.123.101:0/2096022334 wait complete. 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.466+0000 7fb1b4279640 1 Processor -- start 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb1b4279640 1 -- start start 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb1b4279640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac1a2300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb1b4279640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fb1ac07b7a0 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb1b1fee640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac1a2300 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb1b1fee640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac1a2300 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41086/0 (socket says 192.168.123.101:41086) 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb1b1fee640 1 -- 192.168.123.101:0/1243280167 learned_addr learned my addr 192.168.123.101:0/1243280167 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb1b1fee640 1 -- 192.168.123.101:0/1243280167 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb1ac1a2840 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb1b1fee640 1 --2- 192.168.123.101:0/1243280167 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac1a2300 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fb1a002f450 tx=0x7fb1a00047c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb19affd640 1 -- 192.168.123.101:0/1243280167 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 
0x7fb1a0047020 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb1b4279640 1 -- 192.168.123.101:0/1243280167 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fb1ac1a2ad0 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb19affd640 1 -- 192.168.123.101:0/1243280167 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fb1a0042660 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb1b4279640 1 -- 192.168.123.101:0/1243280167 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fb1ac1a2f30 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb19affd640 1 -- 192.168.123.101:0/1243280167 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb1a003c040 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb198ff9640 1 -- 192.168.123.101:0/1243280167 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb16c005180 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb19affd640 1 -- 192.168.123.101:0/1243280167 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7fb1a0054050 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.470+0000 7fb19affd640 1 -- 192.168.123.101:0/1243280167 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fb1a0043830 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.474+0000 7fb19affd640 1 -- 192.168.123.101:0/1243280167 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7fb1a002fae0 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.502+0000 7fb198ff9640 1 -- 192.168.123.101:0/1243280167 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7fb16c003c00 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.510+0000 7fb19affd640 1 -- 192.168.123.101:0/1243280167 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v2) ==== 70+0+380 (secure 0 0 0) 0x7fb1a0042cf0 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.510+0000 7fb19affd640 1 -- 192.168.123.101:0/1243280167 <== mon.0 v2:192.168.123.101:3300/0 8 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fb1a005a020 con 0x7fb1ac07c750 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.510+0000 7fb198ff9640 1 -- 192.168.123.101:0/1243280167 >> 
[v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 msgr2=0x7fb1ac1a2300 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.510+0000 7fb198ff9640 1 --2- 192.168.123.101:0/1243280167 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac1a2300 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fb1a002f450 tx=0x7fb1a00047c0 comp rx=0 tx=0).stop 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.510+0000 7fb198ff9640 1 -- 192.168.123.101:0/1243280167 shutdown_connections 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.510+0000 7fb198ff9640 1 --2- 192.168.123.101:0/1243280167 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb1ac07c750 0x7fb1ac1a2300 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.510+0000 7fb198ff9640 1 -- 192.168.123.101:0/1243280167 >> 192.168.123.101:0/1243280167 conn(0x7fb1ac101d60 msgr2=0x7fb1ac104150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.510+0000 7fb198ff9640 1 -- 192.168.123.101:0/1243280167 shutdown_connections 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.510+0000 7fb198ff9640 1 -- 192.168.123.101:0/1243280167 wait complete. 2026-03-09T15:48:02.568 INFO:teuthology.orchestra.run.vm01.stdout:Generating new minimal ceph.conf... 
2026-03-09T15:48:02.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.670+0000 7ff466d56640 1 Processor -- start 2026-03-09T15:48:02.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.670+0000 7ff466d56640 1 -- start start 2026-03-09T15:48:02.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.670+0000 7ff466d56640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff460108a60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:02.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.670+0000 7ff466d56640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7ff460109030 con 0x7ff460108660 2026-03-09T15:48:02.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.670+0000 7ff465d54640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff460108a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:02.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.670+0000 7ff465d54640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff460108a60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41088/0 (socket says 192.168.123.101:41088) 2026-03-09T15:48:02.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.670+0000 7ff465d54640 1 -- 192.168.123.101:0/3933475500 learned_addr learned my addr 192.168.123.101:0/3933475500 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:02.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff465d54640 1 -- 192.168.123.101:0/3933475500 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff460109860 con 0x7ff460108660 2026-03-09T15:48:02.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff465d54640 1 --2- 192.168.123.101:0/3933475500 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff460108a60 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7ff454009920 tx=0x7ff45402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ef31142b41855b5 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:02.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff464d52640 1 -- 192.168.123.101:0/3933475500 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff45403c070 con 0x7ff460108660 2026-03-09T15:48:02.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff464d52640 1 -- 192.168.123.101:0/3933475500 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7ff454037440 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 -- 192.168.123.101:0/3933475500 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 msgr2=0x7ff460108a60 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 --2- 192.168.123.101:0/3933475500 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff460108a60 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7ff454009920 tx=0x7ff45402ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 -- 192.168.123.101:0/3933475500 shutdown_connections 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 --2- 192.168.123.101:0/3933475500 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff460108a60 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 -- 192.168.123.101:0/3933475500 >> 192.168.123.101:0/3933475500 conn(0x7ff46007bcc0 msgr2=0x7ff46007c0f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 -- 192.168.123.101:0/3933475500 shutdown_connections 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 -- 192.168.123.101:0/3933475500 wait complete. 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 Processor -- start 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 -- start start 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff46019e520 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7ff46010a560 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff465d54640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff46019e520 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff465d54640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff46019e520 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41102/0 (socket says 192.168.123.101:41102) 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff465d54640 1 -- 192.168.123.101:0/2602988301 learned_addr learned my addr 192.168.123.101:0/2602988301 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: 
stderr 2026-03-09T15:48:02.674+0000 7ff465d54640 1 -- 192.168.123.101:0/2602988301 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff46019ea60 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff465d54640 1 --2- 192.168.123.101:0/2602988301 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff46019e520 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7ff454037b80 tx=0x7ff454037bb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff44effd640 1 -- 192.168.123.101:0/2602988301 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff45403c070 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff44effd640 1 -- 192.168.123.101:0/2602988301 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7ff454045070 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 -- 192.168.123.101:0/2602988301 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff46019ecf0 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff466d56640 1 -- 192.168.123.101:0/2602988301 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff4601a19e0 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.674+0000 7ff44effd640 1 -- 192.168.123.101:0/2602988301 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff454040a10 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.678+0000 7ff44effd640 1 -- 192.168.123.101:0/2602988301 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7ff454040cc0 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.678+0000 7ff466d56640 1 -- 192.168.123.101:0/2602988301 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff46010ca90 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.678+0000 7ff44effd640 1 -- 192.168.123.101:0/2602988301 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7ff454035810 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.678+0000 7ff44effd640 1 -- 192.168.123.101:0/2602988301 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7ff454035a10 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.710+0000 7ff466d56640 1 -- 192.168.123.101:0/2602988301 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- 
mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7ff4601a1e40 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.710+0000 7ff44effd640 1 -- 192.168.123.101:0/2602988301 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v2) ==== 76+0+181 (secure 0 0 0) 0x7ff454035bf0 con 0x7ff460108660 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.710+0000 7ff466d56640 1 -- 192.168.123.101:0/2602988301 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 msgr2=0x7ff46019e520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.710+0000 7ff466d56640 1 --2- 192.168.123.101:0/2602988301 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff46019e520 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7ff454037b80 tx=0x7ff454037bb0 comp rx=0 tx=0).stop 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.710+0000 7ff466d56640 1 -- 192.168.123.101:0/2602988301 shutdown_connections 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.710+0000 7ff466d56640 1 --2- 192.168.123.101:0/2602988301 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff460108660 0x7ff46019e520 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.710+0000 7ff466d56640 1 -- 192.168.123.101:0/2602988301 >> 192.168.123.101:0/2602988301 conn(0x7ff46007bcc0 msgr2=0x7ff460106170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.710+0000 7ff466d56640 1 -- 192.168.123.101:0/2602988301 shutdown_connections 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:02.710+0000 7ff466d56640 1 -- 192.168.123.101:0/2602988301 wait complete. 2026-03-09T15:48:02.757 INFO:teuthology.orchestra.run.vm01.stdout:Restarting the monitor... 2026-03-09T15:48:02.863 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:02 vm01 systemd[1]: Stopping Ceph mon.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 
2026-03-09T15:48:02.863 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:02 vm01 bash[20260]: debug 2026-03-09T15:48:02.790+0000 7ffaa51c2640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T15:48:02.863 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:02 vm01 bash[20260]: debug 2026-03-09T15:48:02.790+0000 7ffaa51c2640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-09T15:48:02.922 INFO:teuthology.orchestra.run.vm01.stdout:Setting public_network to 192.168.123.1/32,192.168.123.0/24 in mon config section 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:02 vm01 bash[20643]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-mon-a 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:02 vm01 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.a.service: Deactivated successfully. 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:02 vm01 systemd[1]: Stopped Ceph mon.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:02 vm01 systemd[1]: Started Ceph mon.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.034+0000 7fa0ece59d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.034+0000 7fa0ece59d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 8 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.034+0000 7fa0ece59d80 0 pidfile_write: ignore empty --pid-file 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 0 load: jerasure load: lrc 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Git sha 0 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: DB SUMMARY 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: DB Session ID: LC2VMK8TQ70UH5OOG35G 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: CURRENT file: CURRENT 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T15:48:03.123 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-09T15:48:03.123 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 77697 ; 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.env: 0x55bc4e7efdc0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.info_log: 0x55bc542cc700 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.statistics: (nil) 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.use_fsync: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.db_log_dir: 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.wal_dir: 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.write_buffer_manager: 0x55bc542d1900 2026-03-09T15:48:03.124 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.unordered_write: 0 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T15:48:03.124 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.row_cache: None 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.wal_filter: None 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 
7fa0ece59d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.wal_compression: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T15:48:03.125 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_open_files: -1 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Compression algorithms supported: 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: kZSTD supported: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: 
kZlibCompression supported: 1 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T15:48:03.125 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.merge_operator: 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_filter: None 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bc542cc640) 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cache_index_and_filter_blocks: 1 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: pin_top_level_index_and_filter: 1 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: 
index_type: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: data_block_index_type: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: index_shortening: 1 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: checksum: 4 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: no_block_cache: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: block_cache: 0x55bc542f3350 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: block_cache_name: BinnedLRUCache 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: block_cache_options: 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: capacity : 536870912 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: num_shard_bits : 4 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: strict_capacity_limit : 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: high_pri_pool_ratio: 0.000 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: block_cache_compressed: (nil) 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: persistent_cache: (nil) 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: block_size: 4096 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: block_size_deviation: 10 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: block_restart_interval: 16 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: index_block_restart_interval: 1 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: metadata_block_size: 4096 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: partition_filters: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: use_delta_encoding: 1 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: filter_policy: bloomfilter 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: whole_key_filtering: 1 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: verify_compression: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: read_amp_bytes_per_bit: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: format_version: 5 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: enable_index_compression: 1 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: block_align: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: max_auto_readahead_size: 262144 
2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: prepopulate_block_cache: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: initial_auto_readahead_size: 8192 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: num_file_reads_for_auto_readahead: 2 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compression: NoCompression 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.num_levels: 7 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T15:48:03.126 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 
rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T15:48:03.127 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 
bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.038+0000 7fa0ece59d80 
4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.042+0000 7fa0ece59d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.042+0000 7fa0ece59d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.042+0000 7fa0ece59d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 066ff4c9-ecb1-4e6d-a714-f8b15163b3e0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.042+0000 7fa0ece59d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773071283045438, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.042+0000 7fa0ece59d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.050+0000 7fa0ece59d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773071283055453, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 74603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 235, "table_properties": {"data_size": 72803, "index_size": 189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 10270, "raw_average_key_size": 49, "raw_value_size": 67021, "raw_average_value_size": 325, "num_data_blocks": 8, "num_entries": 206, "num_filter_entries": 206, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773071283, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "066ff4c9-ecb1-4e6d-a714-f8b15163b3e0", "db_session_id": "LC2VMK8TQ70UH5OOG35G", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.050+0000 7fa0ece59d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773071283055528, "job": 1, "event": "recovery_finished"} 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.050+0000 7fa0ece59d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-09T15:48:03.128 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bc542f4e00 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 4 rocksdb: DB pointer 0x55bc5440a000 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] at bind addrs [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0e2c23640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0e2c23640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: ** DB Stats ** 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: ** Compaction Stats [default] ** 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T15:48:03.128 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: L0 2/0 74.71 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 7.3 0.01 0.00 1 0.010 0 0 0.0 0.0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Sum 2/0 74.71 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 7.3 0.01 0.00 1 0.010 0 0 0.0 0.0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 7.3 0.01 0.00 1 0.010 0 0 0.0 0.0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: ** Compaction Stats [default] ** 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 7.3 0.01 0.00 1 0.010 0 0 0.0 0.0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T15:48:03.128 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Cumulative compaction: 0.00 GB write, 4.89 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Interval compaction: 0.00 GB write, 4.89 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: Block cache BinnedLRUCache@0x55bc542f3350#8 capacity: 512.00 MB usage: 26.75 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 
bash[20728]: Block cache entry stats(count,size,portion): DataBlock(3,25.61 KB,0.0048846%) FilterBlock(2,0.77 KB,0.000146031%) IndexBlock(2,0.38 KB,7.15256e-05%) Misc(1,0.00 KB,0%) 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: ** File Read Latency Histogram By Level [default] ** 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 1 mon.a@-1(???) e1 preinit fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 5 mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 5 mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 0 mon.a@-1(???).mds e1 new map 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 0 mon.a@-1(???).mds e1 print_map 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: e1 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: btime 2026-03-09T15:48:02:072481+0000 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: legacy client fscid: -1 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: No filesystems configured 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 1 mon.a@-1(???).paxosservice(auth 1..2) refresh 
upgraded, format 0 -> 3 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 4 mon.a@-1(???).mgr e0 loading version 1 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 4 mon.a@-1(???).mgr e1 active server: (0) 2026-03-09T15:48:03.129 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: debug 2026-03-09T15:48:03.054+0000 7fa0ece59d80 4 mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.074+0000 7f6c1b577640 1 Processor -- start 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.074+0000 7f6c1b577640 1 -- start start 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.074+0000 7f6c1b577640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 0x7f6c1c07abb0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.074+0000 7f6c1b577640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f6c1c07b0f0 con 0x7f6c1c07c750 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1a575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 0x7f6c1c07abb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1a575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 0x7f6c1c07abb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41114/0 (socket says 192.168.123.101:41114) 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1a575640 1 -- 192.168.123.101:0/1271487289 learned_addr learned my addr 192.168.123.101:0/1271487289 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1a575640 1 -- 192.168.123.101:0/1271487289 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6c1c07b270 con 0x7f6c1c07c750 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1a575640 1 --2- 192.168.123.101:0/1271487289 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 0x7f6c1c07abb0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f6c10009920 tx=0x7f6c1002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=def03a35df3deac7 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c19573640 1 -- 192.168.123.101:0/1271487289 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 
0x7f6c1003c070 con 0x7f6c1c07c750 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c19573640 1 -- 192.168.123.101:0/1271487289 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f6c10037440 con 0x7f6c1c07c750 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c19573640 1 -- 192.168.123.101:0/1271487289 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6c10035340 con 0x7f6c1c07c750 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1b577640 1 -- 192.168.123.101:0/1271487289 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 msgr2=0x7f6c1c07abb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1b577640 1 --2- 192.168.123.101:0/1271487289 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 0x7f6c1c07abb0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f6c10009920 tx=0x7f6c1002ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1b577640 1 -- 192.168.123.101:0/1271487289 shutdown_connections 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1b577640 1 --2- 192.168.123.101:0/1271487289 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 0x7f6c1c07abb0 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:03.170 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1b577640 1 -- 192.168.123.101:0/1271487289 >> 192.168.123.101:0/1271487289 conn(0x7f6c1c101bc0 msgr2=0x7f6c1c103fe0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1b577640 1 -- 192.168.123.101:0/1271487289 shutdown_connections 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1b577640 1 -- 192.168.123.101:0/1271487289 wait complete. 
2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1b577640 1 Processor -- start 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1b577640 1 -- start start 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1b577640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 0x7f6c1c109640 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1b577640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f6c1c10c800 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1a575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 0x7f6c1c109640 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1a575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 0x7f6c1c109640 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41118/0 (socket says 192.168.123.101:41118) 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.078+0000 7f6c1a575640 1 -- 192.168.123.101:0/1084004242 learned_addr learned my addr 192.168.123.101:0/1084004242 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.082+0000 7f6c1a575640 1 -- 192.168.123.101:0/1084004242 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6c1c109b80 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.082+0000 7f6c1a575640 1 --2- 192.168.123.101:0/1084004242 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 0x7f6c1c109640 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f6c10009a50 tx=0x7f6c10036ec0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.082+0000 7f6c037fe640 1 -- 192.168.123.101:0/1084004242 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6c1003c030 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.082+0000 7f6c1b577640 1 -- 192.168.123.101:0/1084004242 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6c1c109e10 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.082+0000 7f6c1b577640 1 -- 192.168.123.101:0/1084004242 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6c1c106180 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.082+0000 7f6c037fe640 1 -- 192.168.123.101:0/1084004242 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f6c1003e070 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.082+0000 7f6c037fe640 1 -- 192.168.123.101:0/1084004242 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6c10042cb0 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.082+0000 7f6c1b577640 1 -- 192.168.123.101:0/1084004242 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6be8005180 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.082+0000 7f6c037fe640 1 -- 192.168.123.101:0/1084004242 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f6c1004c440 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.082+0000 7f6c037fe640 1 -- 192.168.123.101:0/1084004242 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f6c1003d070 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.082+0000 7f6c037fe640 1 -- 192.168.123.101:0/1084004242 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f6c1004b3e0 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.114+0000 7f6c1b577640 1 -- 192.168.123.101:0/1084004242 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command([{prefix=config set, name=public_network}] v 0) -- 0x7f6be8005470 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.122+0000 7f6c037fe640 1 -- 192.168.123.101:0/1084004242 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{prefix=config set, name=public_network}]=0 v3)=0 v3) ==== 144+0+0 (secure 0 0 0) 0x7f6c1004c8a0 con 0x7f6c1c07c750 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.122+0000 7f6c017fa640 1 -- 192.168.123.101:0/1084004242 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 msgr2=0x7f6c1c109640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.122+0000 7f6c017fa640 1 --2- 192.168.123.101:0/1084004242 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 0x7f6c1c109640 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f6c10009a50 tx=0x7f6c10036ec0 comp rx=0 tx=0).stop 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.122+0000 7f6c017fa640 1 -- 192.168.123.101:0/1084004242 shutdown_connections 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.122+0000 7f6c017fa640 1 --2- 192.168.123.101:0/1084004242 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6c1c07c750 
0x7f6c1c109640 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.122+0000 7f6c017fa640 1 -- 192.168.123.101:0/1084004242 >> 192.168.123.101:0/1084004242 conn(0x7f6c1c101bc0 msgr2=0x7f6c1c102710 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.122+0000 7f6c017fa640 1 -- 192.168.123.101:0/1084004242 shutdown_connections 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.122+0000 7f6c017fa640 1 -- 192.168.123.101:0/1084004242 wait complete. 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:Creating mgr... 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-09T15:48:03.171 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-09T15:48:03.370 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.y 2026-03-09T15:48:03.370 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.y.service: Unit ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.y.service not loaded. 2026-03-09T15:48:03.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065526+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T15:48:03.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065526+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T15:48:03.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065569+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T15:48:03.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065569+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T15:48:03.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065574+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:03.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065574+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:03.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065577+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:03.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065577+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:03.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065585+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:03.433 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065585+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065588+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065588+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065592+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065592+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065595+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065595+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065821+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065821+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065832+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.065832+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.066377+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 bash[20728]: cluster 2026-03-09T15:48:03.066377+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T15:48:03.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:48:03.563 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03.target.wants/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.y.service → /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service. 2026-03-09T15:48:03.574 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-09T15:48:03.574 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . 
firewalld.service is not available 2026-03-09T15:48:03.574 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-09T15:48:03.574 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-09T15:48:03.574 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr to start... 2026-03-09T15:48:03.575 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr... 2026-03-09T15:48:03.798 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:03 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:48:03.831 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:03.831 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T15:48:03.831 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "397fadc0-1bcf-11f1-8481-edc1430c2c03", 2026-03-09T15:48:03.831 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T15:48:03.831 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T15:48:03.831 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T15:48:03.831 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T15:48:03.831 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 
2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T15:48:03.833 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T15:48:02:072481+0000", 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T15:48:02.073365+0000", 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T15:48:03.834 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 Processor -- start 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 -- start start 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 0x7f79280745a0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f7928074b70 con 0x7f79280741a0 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792d6fa640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 0x7f79280745a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792d6fa640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 0x7f79280745a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41128/0 (socket says 192.168.123.101:41128) 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792d6fa640 1 -- 192.168.123.101:0/150504832 learned_addr learned my addr 192.168.123.101:0/150504832 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792d6fa640 1 -- 192.168.123.101:0/150504832 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7928074cf0 con 0x7f79280741a0 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792d6fa640 1 --2- 192.168.123.101:0/150504832 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 0x7f79280745a0 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7f7924009920 tx=0x7f792402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ab5871a3c9ba40b5 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f791bfff640 1 -- 192.168.123.101:0/150504832 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f792403c070 con 0x7f79280741a0 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f791bfff640 1 -- 192.168.123.101:0/150504832 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f7924037440 con 0x7f79280741a0 2026-03-09T15:48:03.834 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 -- 192.168.123.101:0/150504832 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 msgr2=0x7f79280745a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 --2- 192.168.123.101:0/150504832 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 0x7f79280745a0 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7f7924009920 tx=0x7f792402ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 -- 192.168.123.101:0/150504832 shutdown_connections 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 --2- 192.168.123.101:0/150504832 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 0x7f79280745a0 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 -- 192.168.123.101:0/150504832 >> 192.168.123.101:0/150504832 conn(0x7f792806fa30 msgr2=0x7f7928071e70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 -- 192.168.123.101:0/150504832 shutdown_connections 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 -- 192.168.123.101:0/150504832 wait complete. 
2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.730+0000 7f792e6fc640 1 Processor -- start 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f792e6fc640 1 -- start start 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f792e6fc640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 0x7f792811bda0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f792e6fc640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f7928110cc0 con 0x7f79280741a0 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f792d6fa640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 0x7f792811bda0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f792d6fa640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 0x7f792811bda0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41130/0 (socket says 192.168.123.101:41130) 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f792d6fa640 1 -- 192.168.123.101:0/3214342441 learned_addr learned my addr 192.168.123.101:0/3214342441 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f792d6fa640 1 -- 192.168.123.101:0/3214342441 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f792811c2e0 con 0x7f79280741a0 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f792d6fa640 1 --2- 192.168.123.101:0/3214342441 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 0x7f792811bda0 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7f7924037b80 tx=0x7f7924037bb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:03.834 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f791a7fc640 1 -- 192.168.123.101:0/3214342441 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f792403c070 con 0x7f79280741a0 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f791a7fc640 1 -- 192.168.123.101:0/3214342441 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f7924045070 con 0x7f79280741a0 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f791a7fc640 1 -- 192.168.123.101:0/3214342441 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7924040a10 con 0x7f79280741a0 2026-03-09T15:48:03.835 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f792e6fc640 1 -- 192.168.123.101:0/3214342441 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f792811c570 con 0x7f79280741a0 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f792e6fc640 1 -- 192.168.123.101:0/3214342441 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f792811d250 con 0x7f79280741a0 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f791a7fc640 1 -- 192.168.123.101:0/3214342441 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f7924035810 con 0x7f79280741a0 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f791a7fc640 1 -- 192.168.123.101:0/3214342441 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f79240497d0 con 0x7f79280741a0 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f792e6fc640 1 -- 192.168.123.101:0/3214342441 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f79280745a0 con 0x7f79280741a0 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.734+0000 7f791a7fc640 1 -- 192.168.123.101:0/3214342441 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f7924049a20 con 0x7f79280741a0 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.770+0000 7f792e6fc640 1 -- 192.168.123.101:0/3214342441 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f792811d5e0 con 0x7f79280741a0 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.778+0000 7f791a7fc640 1 -- 192.168.123.101:0/3214342441 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (secure 0 0 0) 0x7f7924049c00 con 0x7f79280741a0 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.782+0000 7f78f7fff640 1 -- 192.168.123.101:0/3214342441 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 msgr2=0x7f792811bda0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.782+0000 7f78f7fff640 1 --2- 192.168.123.101:0/3214342441 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f79280741a0 0x7f792811bda0 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7f7924037b80 tx=0x7f7924037bb0 comp rx=0 tx=0).stop 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.782+0000 7f78f7fff640 1 -- 192.168.123.101:0/3214342441 shutdown_connections 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.782+0000 7f78f7fff640 1 --2- 192.168.123.101:0/3214342441 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] 
conn(0x7f79280741a0 0x7f792811bda0 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.782+0000 7f78f7fff640 1 -- 192.168.123.101:0/3214342441 >> 192.168.123.101:0/3214342441 conn(0x7f792806fa30 msgr2=0x7f7928070760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.782+0000 7f78f7fff640 1 -- 192.168.123.101:0/3214342441 shutdown_connections 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:03.782+0000 7f78f7fff640 1 -- 192.168.123.101:0/3214342441 wait complete. 2026-03-09T15:48:03.835 INFO:teuthology.orchestra.run.vm01.stdout:mgr not available, waiting (1/15)... 2026-03-09T15:48:04.130 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:03 vm01 bash[21002]: debug 2026-03-09T15:48:03.982+0000 7f75c49eb140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T15:48:04.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:04 vm01 bash[20728]: audit 2026-03-09T15:48:03.126268+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.101:0/1084004242' entity='client.admin' 2026-03-09T15:48:04.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:04 vm01 bash[20728]: audit 2026-03-09T15:48:03.126268+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.101:0/1084004242' entity='client.admin' 2026-03-09T15:48:04.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:04 vm01 bash[20728]: audit 2026-03-09T15:48:03.777206+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.101:0/3214342441' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:04.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:04 vm01 bash[20728]: audit 2026-03-09T15:48:03.777206+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.101:0/3214342441' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:04.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:04 vm01 bash[21002]: debug 2026-03-09T15:48:04.298+0000 7f75c49eb140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T15:48:05.156 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:04 vm01 bash[21002]: debug 2026-03-09T15:48:04.770+0000 7f75c49eb140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T15:48:05.156 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:04 vm01 bash[21002]: debug 2026-03-09T15:48:04.858+0000 7f75c49eb140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T15:48:05.156 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:04 vm01 bash[21002]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T15:48:05.156 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:04 vm01 bash[21002]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T15:48:05.156 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:04 vm01 bash[21002]: from numpy import show_config as show_numpy_config 2026-03-09T15:48:05.156 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:04 vm01 bash[21002]: debug 2026-03-09T15:48:04.990+0000 7f75c49eb140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T15:48:05.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:05 vm01 bash[21002]: debug 2026-03-09T15:48:05.150+0000 7f75c49eb140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T15:48:05.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:05 vm01 bash[21002]: debug 2026-03-09T15:48:05.190+0000 7f75c49eb140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T15:48:05.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:05 vm01 bash[21002]: debug 2026-03-09T15:48:05.230+0000 7f75c49eb140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T15:48:05.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:05 vm01 bash[21002]: debug 2026-03-09T15:48:05.274+0000 7f75c49eb140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T15:48:05.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:05 vm01 bash[21002]: debug 2026-03-09T15:48:05.330+0000 7f75c49eb140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T15:48:06.048 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:05 vm01 bash[21002]: debug 2026-03-09T15:48:05.778+0000 7f75c49eb140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T15:48:06.048 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:05 vm01 bash[21002]: debug 2026-03-09T15:48:05.822+0000 7f75c49eb140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T15:48:06.048 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:05 vm01 bash[21002]: debug 2026-03-09T15:48:05.870+0000 7f75c49eb140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "397fadc0-1bcf-11f1-8481-edc1430c2c03", 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T15:48:06.103 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 2, 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T15:48:06.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T15:48:02:072481+0000", 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T15:48:06.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T15:48:06.106 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T15:48:02.073365+0000", 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f209abf0640 1 Processor -- start 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f209abf0640 1 -- start start 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f209abf0640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 0x7f208c005970 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f209abf0640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f208c005f40 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f2099bee640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 0x7f208c005970 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f2099bee640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 0x7f208c005970 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41160/0 (socket says 192.168.123.101:41160) 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f2099bee640 1 -- 192.168.123.101:0/2060189513 learned_addr learned my addr 192.168.123.101:0/2060189513 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f2099bee640 1 -- 192.168.123.101:0/2060189513 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) -- 0x7f208c0067c0 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f2099bee640 1 --2- 192.168.123.101:0/2060189513 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 0x7f208c005970 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f2090009b80 tx=0x7f209002f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ab71cbf24a448700 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f2098bec640 1 -- 192.168.123.101:0/2060189513 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f209003c070 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f2098bec640 1 -- 192.168.123.101:0/2060189513 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f2090037440 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f2098bec640 1 -- 192.168.123.101:0/2060189513 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2090035340 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f209abf0640 1 -- 192.168.123.101:0/2060189513 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 msgr2=0x7f208c005970 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.982+0000 7f209abf0640 1 --2- 192.168.123.101:0/2060189513 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 0x7f208c005970 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f2090009b80 tx=0x7f209002f190 comp rx=0 tx=0).stop 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 -- 192.168.123.101:0/2060189513 shutdown_connections 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 --2- 192.168.123.101:0/2060189513 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 0x7f208c005970 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 -- 192.168.123.101:0/2060189513 >> 192.168.123.101:0/2060189513 conn(0x7f208c09fc30 msgr2=0x7f208c0a2090 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 -- 192.168.123.101:0/2060189513 shutdown_connections 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 -- 192.168.123.101:0/2060189513 wait complete. 
2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 Processor -- start 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 -- start start 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 0x7f208c153fe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f208c0074c0 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f2099bee640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 0x7f208c153fe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f2099bee640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 0x7f208c153fe0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:41168/0 (socket says 192.168.123.101:41168) 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f2099bee640 1 -- 192.168.123.101:0/3143822304 learned_addr learned my addr 192.168.123.101:0/3143822304 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f2099bee640 1 -- 192.168.123.101:0/3143822304 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f208c154520 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f2099bee640 1 --2- 192.168.123.101:0/3143822304 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 0x7f208c153fe0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f2090037cb0 tx=0x7f20900365d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f2086ffd640 1 -- 192.168.123.101:0/3143822304 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f209003c040 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f2086ffd640 1 -- 192.168.123.101:0/3143822304 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f2090045070 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f2086ffd640 1 -- 192.168.123.101:0/3143822304 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2090040910 con 0x7f208c005570 2026-03-09T15:48:06.106 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 -- 192.168.123.101:0/3143822304 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f208c1547b0 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 -- 192.168.123.101:0/3143822304 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f208c1574a0 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f209abf0640 1 -- 192.168.123.101:0/3143822304 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f208c005970 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f2086ffd640 1 -- 192.168.123.101:0/3143822304 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f209003f070 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.986+0000 7f2086ffd640 1 -- 192.168.123.101:0/3143822304 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f2090049560 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:05.994+0000 7f2086ffd640 1 -- 192.168.123.101:0/3143822304 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f2090036910 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:06.030+0000 7f209abf0640 1 -- 192.168.123.101:0/3143822304 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f208c1579b0 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:06.030+0000 7f2086ffd640 1 -- 192.168.123.101:0/3143822304 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (secure 0 0 0) 0x7f2090036af0 con 0x7f208c005570 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:06.034+0000 7f2084ff9640 1 -- 192.168.123.101:0/3143822304 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 msgr2=0x7f208c153fe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:06.034+0000 7f2084ff9640 1 --2- 192.168.123.101:0/3143822304 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f208c005570 0x7f208c153fe0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f2090037cb0 tx=0x7f20900365d0 comp rx=0 tx=0).stop 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:06.034+0000 7f2084ff9640 1 -- 192.168.123.101:0/3143822304 shutdown_connections 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:06.034+0000 7f2084ff9640 1 --2- 192.168.123.101:0/3143822304 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] 
conn(0x7f208c005570 0x7f208c153fe0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:06.106 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:06.034+0000 7f2084ff9640 1 -- 192.168.123.101:0/3143822304 >> 192.168.123.101:0/3143822304 conn(0x7f208c09fc30 msgr2=0x7f208c0a05e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:06.107 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:06.034+0000 7f2084ff9640 1 -- 192.168.123.101:0/3143822304 shutdown_connections 2026-03-09T15:48:06.107 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:06.034+0000 7f2084ff9640 1 -- 192.168.123.101:0/3143822304 wait complete. 2026-03-09T15:48:06.107 INFO:teuthology.orchestra.run.vm01.stdout:mgr not available, waiting (2/15)... 2026-03-09T15:48:06.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:06 vm01 bash[21002]: debug 2026-03-09T15:48:06.042+0000 7f75c49eb140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T15:48:06.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:06 vm01 bash[21002]: debug 2026-03-09T15:48:06.110+0000 7f75c49eb140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T15:48:06.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:06 vm01 bash[21002]: debug 2026-03-09T15:48:06.154+0000 7f75c49eb140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T15:48:06.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:06 vm01 bash[21002]: debug 2026-03-09T15:48:06.266+0000 7f75c49eb140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:48:06.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:06 vm01 bash[20728]: audit 2026-03-09T15:48:06.033996+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.101:0/3143822304' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:06.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:06 vm01 bash[20728]: audit 2026-03-09T15:48:06.033996+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.101:0/3143822304' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:06.692 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:06 vm01 bash[21002]: debug 2026-03-09T15:48:06.430+0000 7f75c49eb140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T15:48:06.692 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:06 vm01 bash[21002]: debug 2026-03-09T15:48:06.606+0000 7f75c49eb140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T15:48:06.692 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:06 vm01 bash[21002]: debug 2026-03-09T15:48:06.642+0000 7f75c49eb140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T15:48:07.146 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:06 vm01 bash[21002]: debug 2026-03-09T15:48:06.686+0000 7f75c49eb140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T15:48:07.146 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:06 vm01 bash[21002]: debug 2026-03-09T15:48:06.870+0000 7f75c49eb140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:07 vm01 bash[21002]: debug 2026-03-09T15:48:07.138+0000 7f75c49eb140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: cluster 2026-03-09T15:48:07.145624+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: cluster 2026-03-09T15:48:07.145624+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: cluster 2026-03-09T15:48:07.152301+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00676112s) 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: cluster 2026-03-09T15:48:07.152301+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00676112s) 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.155235+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.155235+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.155286+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.155286+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.155336+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 
2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.155336+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.157201+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.157201+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.157308+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.157308+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: cluster 2026-03-09T15:48:07.164365+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:07.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: cluster 2026-03-09T15:48:07.164365+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:07.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.186896+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:07.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.186896+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:07.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.190851+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:07.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.190851+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:07.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.192973+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' 2026-03-09T15:48:07.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.192973+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14102 
192.168.123.101:0/1838944567' entity='mgr.y' 2026-03-09T15:48:07.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.197636+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' 2026-03-09T15:48:07.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:07 vm01 bash[20728]: audit 2026-03-09T15:48:07.197636+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' 2026-03-09T15:48:08.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:08 vm01 bash[20728]: audit 2026-03-09T15:48:07.207147+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' 2026-03-09T15:48:08.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:08 vm01 bash[20728]: audit 2026-03-09T15:48:07.207147+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' 2026-03-09T15:48:08.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:08 vm01 bash[20728]: cluster 2026-03-09T15:48:08.162817+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01729s) 2026-03-09T15:48:08.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:08 vm01 bash[20728]: cluster 2026-03-09T15:48:08.162817+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01729s) 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "397fadc0-1bcf-11f1-8481-edc1430c2c03", 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:08.537 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T15:48:02:072481+0000", 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T15:48:08.537 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 
}, 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T15:48:02.073365+0000", 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 Processor -- start 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 -- start start 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f9407abb0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f4f9407b0f0 con 0x7f4f9407c750 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9a42f640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f9407abb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9a42f640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f9407abb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43712/0 (socket says 192.168.123.101:43712) 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9a42f640 1 -- 192.168.123.101:0/1778047340 learned_addr learned my addr 192.168.123.101:0/1778047340 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9a42f640 1 -- 192.168.123.101:0/1778047340 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4f9407b270 con 0x7f4f9407c750 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9a42f640 1 --2- 192.168.123.101:0/1778047340 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f9407abb0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f4f84009b80 tx=0x7f4f8402f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=6e2f8ffb306ec8d2 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9942d640 1 -- 
192.168.123.101:0/1778047340 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4f8403c070 con 0x7f4f9407c750 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9942d640 1 -- 192.168.123.101:0/1778047340 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f4f84037440 con 0x7f4f9407c750 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1778047340 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 msgr2=0x7f4f9407abb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 --2- 192.168.123.101:0/1778047340 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f9407abb0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f4f84009b80 tx=0x7f4f8402f190 comp rx=0 tx=0).stop 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1778047340 shutdown_connections 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 --2- 192.168.123.101:0/1778047340 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f9407abb0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1778047340 >> 192.168.123.101:0/1778047340 conn(0x7f4f94101d20 msgr2=0x7f4f94104160 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1778047340 shutdown_connections 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1778047340 wait complete. 
2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 Processor -- start 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 -- start start 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f941a2b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.314+0000 7f4f9c6ba640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f4f94108490 con 0x7f4f9407c750 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f9a42f640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f941a2b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f9a42f640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f941a2b90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43720/0 (socket says 192.168.123.101:43720) 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f9a42f640 1 -- 192.168.123.101:0/1471219276 learned_addr learned my addr 192.168.123.101:0/1471219276 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f9a42f640 1 -- 192.168.123.101:0/1471219276 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4f941a30d0 con 0x7f4f9407c750 2026-03-09T15:48:08.538 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f9a42f640 1 --2- 192.168.123.101:0/1471219276 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f941a2b90 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f4f840039b0 tx=0x7f4f840039e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f837fe640 1 -- 192.168.123.101:0/1471219276 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4f8403c040 con 0x7f4f9407c750 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1471219276 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4f941a3360 con 0x7f4f9407c750 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1471219276 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f4f941a6050 con 0x7f4f9407c750 2026-03-09T15:48:08.539 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f837fe640 1 -- 192.168.123.101:0/1471219276 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f4f84045070 con 0x7f4f9407c750 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f837fe640 1 -- 192.168.123.101:0/1471219276 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4f84035bb0 con 0x7f4f9407c750 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f837fe640 1 -- 192.168.123.101:0/1471219276 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 3) ==== 50130+0+0 (secure 0 0 0) 0x7f4f84037440 con 0x7f4f9407c750 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f837fe640 1 --2- 192.168.123.101:0/1471219276 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f4f6803dad0 0x7f4f6803ff90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f837fe640 1 -- 192.168.123.101:0/1471219276 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f4f84077000 con 0x7f4f9407c750 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f99c2e640 1 --2- 192.168.123.101:0/1471219276 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f4f6803dad0 0x7f4f6803ff90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.318+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1471219276 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4f5c005180 con 0x7f4f9407c750 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.326+0000 7f4f837fe640 1 -- 192.168.123.101:0/1471219276 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f4f84047310 con 0x7f4f9407c750 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.326+0000 7f4f99c2e640 1 --2- 192.168.123.101:0/1471219276 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f4f6803dad0 0x7f4f6803ff90 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f4f88009a10 tx=0x7f4f88006eb0 comp rx=0 tx=0).ready entity=mgr.14102 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.486+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1471219276 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f4f5c005470 con 0x7f4f9407c750 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.486+0000 7f4f837fe640 1 -- 192.168.123.101:0/1471219276 <== mon.0 v2:192.168.123.101:3300/0 7 ==== 
mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1290 (secure 0 0 0) 0x7f4f8403c1e0 con 0x7f4f9407c750 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.490+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1471219276 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f4f6803dad0 msgr2=0x7f4f6803ff90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.490+0000 7f4f9c6ba640 1 --2- 192.168.123.101:0/1471219276 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f4f6803dad0 0x7f4f6803ff90 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f4f88009a10 tx=0x7f4f88006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.490+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1471219276 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 msgr2=0x7f4f941a2b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.490+0000 7f4f9c6ba640 1 --2- 192.168.123.101:0/1471219276 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f941a2b90 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f4f840039b0 tx=0x7f4f840039e0 comp rx=0 tx=0).stop 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.490+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1471219276 shutdown_connections 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.490+0000 7f4f9c6ba640 1 --2- 192.168.123.101:0/1471219276 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f4f6803dad0 0x7f4f6803ff90 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.490+0000 7f4f9c6ba640 1 --2- 192.168.123.101:0/1471219276 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4f9407c750 0x7f4f941a2b90 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.490+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1471219276 >> 192.168.123.101:0/1471219276 conn(0x7f4f94101d20 msgr2=0x7f4f941077c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.490+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1471219276 shutdown_connections 2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.490+0000 7f4f9c6ba640 1 -- 192.168.123.101:0/1471219276 wait complete. 
2026-03-09T15:48:08.539 INFO:teuthology.orchestra.run.vm01.stdout:mgr is available 2026-03-09T15:48:08.827 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:08.827 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T15:48:08.827 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789] 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.650+0000 7f1817797640 1 Processor -- start 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.650+0000 7f1817797640 1 -- start start 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.650+0000 7f1817797640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101071f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.650+0000 7f1817797640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f181007a6e0 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.650+0000 7f181550c640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101071f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.650+0000 7f181550c640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101071f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43736/0 (socket says 192.168.123.101:43736) 2026-03-09T15:48:08.828 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.650+0000 7f181550c640 1 -- 192.168.123.101:0/2223623400 learned_addr learned my addr 192.168.123.101:0/2223623400 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.650+0000 7f181550c640 1 -- 192.168.123.101:0/2223623400 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f1810107730 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f181550c640 1 --2- 192.168.123.101:0/2223623400 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101071f0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f1804009b80 tx=0x7f180402f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=7600f47919625743 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f17fffff640 1 -- 192.168.123.101:0/2223623400 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f180403c070 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f17fffff640 1 -- 192.168.123.101:0/2223623400 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f1804037440 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f1817797640 1 -- 192.168.123.101:0/2223623400 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 msgr2=0x7f18101071f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f1817797640 1 --2- 192.168.123.101:0/2223623400 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101071f0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f1804009b80 tx=0x7f180402f190 comp rx=0 tx=0).stop 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f1817797640 1 -- 192.168.123.101:0/2223623400 shutdown_connections 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f1817797640 1 --2- 192.168.123.101:0/2223623400 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101071f0 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f1817797640 1 -- 192.168.123.101:0/2223623400 >> 192.168.123.101:0/2223623400 conn(0x7f1810100c10 msgr2=0x7f1810103050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f1817797640 1 -- 192.168.123.101:0/2223623400 shutdown_connections 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f1817797640 1 -- 192.168.123.101:0/2223623400 wait complete. 
2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f1817797640 1 Processor -- start 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f1817797640 1 -- start start 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f1817797640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101a29c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f1817797640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f18101085b0 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f181550c640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101a29c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f181550c640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101a29c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43746/0 (socket says 192.168.123.101:43746) 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.654+0000 7f181550c640 1 -- 192.168.123.101:0/2712992155 learned_addr learned my addr 192.168.123.101:0/2712992155 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.658+0000 7f181550c640 1 -- 192.168.123.101:0/2712992155 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f18101a2f00 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.658+0000 7f181550c640 1 --2- 192.168.123.101:0/2712992155 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101a29c0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f1804006fd0 tx=0x7f18040045e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.658+0000 7f17fe7fc640 1 -- 192.168.123.101:0/2712992155 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f1804045070 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.658+0000 7f1817797640 1 -- 192.168.123.101:0/2712992155 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f18101a3190 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.658+0000 7f1817797640 1 -- 192.168.123.101:0/2712992155 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f18101a5e80 con 0x7f1810104de0 2026-03-09T15:48:08.828 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.658+0000 7f17fe7fc640 1 -- 192.168.123.101:0/2712992155 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f1804003c60 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.658+0000 7f17fe7fc640 1 -- 192.168.123.101:0/2712992155 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f180403c040 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.658+0000 7f1817797640 1 -- 192.168.123.101:0/2712992155 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f17dc005180 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.662+0000 7f17fe7fc640 1 -- 192.168.123.101:0/2712992155 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 3) ==== 50130+0+0 (secure 0 0 0) 0x7f1804003840 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.662+0000 7f17fe7fc640 1 --2- 192.168.123.101:0/2712992155 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f17d803db20 0x7f17d803ffe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.662+0000 7f1814d0b640 1 --2- 192.168.123.101:0/2712992155 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f17d803db20 0x7f17d803ffe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.662+0000 7f17fe7fc640 1 -- 192.168.123.101:0/2712992155 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f18040763d0 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.662+0000 7f1814d0b640 1 --2- 192.168.123.101:0/2712992155 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f17d803db20 0x7f17d803ffe0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f1800009a10 tx=0x7f1800006eb0 comp rx=0 tx=0).ready entity=mgr.14102 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.662+0000 7f17fe7fc640 1 -- 192.168.123.101:0/2712992155 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f1804076890 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.766+0000 7f1817797640 1 -- 192.168.123.101:0/2712992155 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7f17dc003c00 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.770+0000 7f17fe7fc640 1 -- 192.168.123.101:0/2712992155 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": 
"config assimilate-conf"}]=0 v3) ==== 70+0+380 (secure 0 0 0) 0x7f180403c1e0 con 0x7f1810104de0 2026-03-09T15:48:08.828 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.770+0000 7f1817797640 1 -- 192.168.123.101:0/2712992155 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f17d803db20 msgr2=0x7f17d803ffe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:08.829 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.770+0000 7f1817797640 1 --2- 192.168.123.101:0/2712992155 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f17d803db20 0x7f17d803ffe0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f1800009a10 tx=0x7f1800006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:08.829 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.770+0000 7f1817797640 1 -- 192.168.123.101:0/2712992155 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 msgr2=0x7f18101a29c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:08.829 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.770+0000 7f1817797640 1 --2- 192.168.123.101:0/2712992155 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101a29c0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f1804006fd0 tx=0x7f18040045e0 comp rx=0 tx=0).stop 2026-03-09T15:48:08.829 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.774+0000 7f1817797640 1 -- 192.168.123.101:0/2712992155 shutdown_connections 2026-03-09T15:48:08.829 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.774+0000 7f1817797640 1 --2- 192.168.123.101:0/2712992155 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f17d803db20 0x7f17d803ffe0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:08.829 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.774+0000 7f1817797640 1 --2- 192.168.123.101:0/2712992155 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1810104de0 0x7f18101a29c0 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:08.829 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.774+0000 7f1817797640 1 -- 192.168.123.101:0/2712992155 >> 192.168.123.101:0/2712992155 conn(0x7f1810100c10 msgr2=0x7f181010acf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:08.829 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.774+0000 7f1817797640 1 -- 192.168.123.101:0/2712992155 shutdown_connections 2026-03-09T15:48:08.829 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.774+0000 7f1817797640 1 -- 192.168.123.101:0/2712992155 wait complete. 2026-03-09T15:48:08.829 INFO:teuthology.orchestra.run.vm01.stdout:Enabling cephadm module... 
2026-03-09T15:48:09.295 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.958+0000 7f90ca5b9640 1 Processor -- start 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90ca5b9640 1 -- start start 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90ca5b9640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c407a2c0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90ca5b9640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f90c407a800 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90c3fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c407a2c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90c3fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c407a2c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43750/0 (socket says 192.168.123.101:43750) 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90c3fff640 1 -- 192.168.123.101:0/2119567079 learned_addr learned my addr 192.168.123.101:0/2119567079 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90c3fff640 1 -- 192.168.123.101:0/2119567079 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f90c407a980 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90c3fff640 1 --2- 192.168.123.101:0/2119567079 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c407a2c0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f90a8009920 tx=0x7f90a802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=3aa4e774561c44a1 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90c2ffd640 1 -- 192.168.123.101:0/2119567079 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f90a803c070 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90c2ffd640 1 -- 192.168.123.101:0/2119567079 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f90a8037440 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90c2ffd640 1 -- 192.168.123.101:0/2119567079 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f90a8035340 con 0x7f90c407be60 2026-03-09T15:48:09.296 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2119567079 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 msgr2=0x7f90c407a2c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90ca5b9640 1 --2- 192.168.123.101:0/2119567079 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c407a2c0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f90a8009920 tx=0x7f90a802ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2119567079 shutdown_connections 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90ca5b9640 1 --2- 192.168.123.101:0/2119567079 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c407a2c0 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2119567079 >> 192.168.123.101:0/2119567079 conn(0x7f90c4101d80 msgr2=0x7f90c41041a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2119567079 shutdown_connections 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.962+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2119567079 wait complete. 
2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90ca5b9640 1 Processor -- start 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90ca5b9640 1 -- start start 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90ca5b9640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c419e5f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90ca5b9640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f90c410ca00 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90c3fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c419e5f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90c3fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c419e5f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43762/0 (socket says 192.168.123.101:43762) 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90c3fff640 1 -- 192.168.123.101:0/2808265687 learned_addr learned my addr 192.168.123.101:0/2808265687 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90c3fff640 1 -- 192.168.123.101:0/2808265687 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f90c419eb30 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90c3fff640 1 --2- 192.168.123.101:0/2808265687 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c419e5f0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f90a8009a50 tx=0x7f90a8036ec0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90c17fa640 1 -- 192.168.123.101:0/2808265687 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f90a803c030 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90c17fa640 1 -- 192.168.123.101:0/2808265687 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f90a803e070 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90c17fa640 1 -- 192.168.123.101:0/2808265687 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f90a8042ca0 con 0x7f90c407be60 2026-03-09T15:48:09.296 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2808265687 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f90c419edc0 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2808265687 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f90c419f1e0 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90c17fa640 1 -- 192.168.123.101:0/2808265687 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 3) ==== 50130+0+0 (secure 0 0 0) 0x7f90a804c430 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90c17fa640 1 --2- 192.168.123.101:0/2808265687 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f909803dad0 0x7f909803ff90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.966+0000 7f90c17fa640 1 -- 192.168.123.101:0/2808265687 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f90a803d070 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.970+0000 7f90c37fe640 1 --2- 192.168.123.101:0/2808265687 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f909803dad0 0x7f909803ff90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.970+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2808265687 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9088005180 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.974+0000 7f90c37fe640 1 --2- 192.168.123.101:0/2808265687 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f909803dad0 0x7f909803ff90 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f90b00099c0 tx=0x7f90b0006eb0 comp rx=0 tx=0).ready entity=mgr.14102 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:08.974+0000 7f90c17fa640 1 -- 192.168.123.101:0/2808265687 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f90a8033c90 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.090+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2808265687 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) -- 0x7f9088005470 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.214+0000 7f90c17fa640 1 -- 192.168.123.101:0/2808265687 <== mon.0 v2:192.168.123.101:3300/0 7 ==== 
mon_command_ack([{"prefix": "mgr module enable", "module": "cephadm"}]=0 v4) ==== 86+0+0 (secure 0 0 0) 0x7f90a8035dc0 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.218+0000 7f90c17fa640 1 -- 192.168.123.101:0/2808265687 <== mon.0 v2:192.168.123.101:3300/0 8 ==== mgrmap(e 4) ==== 50247+0+0 (secure 0 0 0) 0x7f90a8077020 con 0x7f90c407be60 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.222+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2808265687 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f909803dad0 msgr2=0x7f909803ff90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.222+0000 7f90ca5b9640 1 --2- 192.168.123.101:0/2808265687 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f909803dad0 0x7f909803ff90 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f90b00099c0 tx=0x7f90b0006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.222+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2808265687 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 msgr2=0x7f90c419e5f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.222+0000 7f90ca5b9640 1 --2- 192.168.123.101:0/2808265687 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c419e5f0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f90a8009a50 tx=0x7f90a8036ec0 comp rx=0 tx=0).stop 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.222+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2808265687 shutdown_connections 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.222+0000 7f90ca5b9640 1 --2- 192.168.123.101:0/2808265687 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f909803dad0 0x7f909803ff90 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.222+0000 7f90ca5b9640 1 --2- 192.168.123.101:0/2808265687 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f90c407be60 0x7f90c419e5f0 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.222+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2808265687 >> 192.168.123.101:0/2808265687 conn(0x7f90c4101d80 msgr2=0x7f90c4102950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.222+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2808265687 shutdown_connections 2026-03-09T15:48:09.296 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.222+0000 7f90ca5b9640 1 -- 192.168.123.101:0/2808265687 wait complete. 2026-03-09T15:48:09.419 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:09 vm01 bash[20728]: audit 2026-03-09T15:48:08.491718+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 
192.168.123.101:0/1471219276' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:09.419 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:09 vm01 bash[20728]: audit 2026-03-09T15:48:08.491718+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.101:0/1471219276' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:09.419 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:09 vm01 bash[20728]: audit 2026-03-09T15:48:08.772597+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.101:0/2712992155' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T15:48:09.419 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:09 vm01 bash[20728]: audit 2026-03-09T15:48:08.772597+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.101:0/2712992155' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T15:48:09.419 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:09 vm01 bash[20728]: audit 2026-03-09T15:48:09.096742+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.101:0/2808265687' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T15:48:09.419 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:09 vm01 bash[20728]: audit 2026-03-09T15:48:09.096742+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.101:0/2808265687' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T15:48:09.419 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:09 vm01 bash[21002]: ignoring --setuser ceph since I am not root 2026-03-09T15:48:09.419 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:09 vm01 bash[21002]: ignoring --setgroup ceph since I am not root 2026-03-09T15:48:09.419 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:09 vm01 bash[21002]: debug 2026-03-09T15:48:09.354+0000 7f3e922d1140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T15:48:09.674 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:09 vm01 bash[21002]: debug 2026-03-09T15:48:09.410+0000 7f3e922d1140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T15:48:09.674 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:09 vm01 bash[21002]: debug 2026-03-09T15:48:09.554+0000 7f3e922d1140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 4, 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36faeda640 1 Processor -- start 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36faeda640 1 -- start start 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36faeda640 1 --2- >> 
[v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec0a4d20 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36faeda640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f36ec0a52f0 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36f9ed8640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec0a4d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36f9ed8640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec0a4d20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43774/0 (socket says 192.168.123.101:43774) 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36f9ed8640 1 -- 192.168.123.101:0/3130042148 learned_addr learned my addr 192.168.123.101:0/3130042148 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36f9ed8640 1 -- 192.168.123.101:0/3130042148 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f36ec0a5b20 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36f9ed8640 1 --2- 192.168.123.101:0/3130042148 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec0a4d20 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f36f000a9c0 tx=0x7f36f0033650 comp rx=0 tx=0).ready entity=mon.0 client_cookie=7bbdf7f9fad06370 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36f8ed6640 1 -- 192.168.123.101:0/3130042148 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f36f0037580 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36f8ed6640 1 -- 192.168.123.101:0/3130042148 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f36f0037b40 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36f8ed6640 1 -- 192.168.123.101:0/3130042148 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f36f003ca30 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36faeda640 1 -- 192.168.123.101:0/3130042148 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 msgr2=0x7f36ec0a4d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36faeda640 1 --2- 
192.168.123.101:0/3130042148 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec0a4d20 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f36f000a9c0 tx=0x7f36f0033650 comp rx=0 tx=0).stop 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36faeda640 1 -- 192.168.123.101:0/3130042148 shutdown_connections 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36faeda640 1 --2- 192.168.123.101:0/3130042148 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec0a4d20 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36faeda640 1 -- 192.168.123.101:0/3130042148 >> 192.168.123.101:0/3130042148 conn(0x7f36ec09fc30 msgr2=0x7f36ec0a2090 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36faeda640 1 -- 192.168.123.101:0/3130042148 shutdown_connections 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.518+0000 7f36faeda640 1 -- 192.168.123.101:0/3130042148 wait complete. 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.522+0000 7f36faeda640 1 Processor -- start 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.522+0000 7f36faeda640 1 -- start start 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.522+0000 7f36faeda640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec142c00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.522+0000 7f36faeda640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f36ec0a64c0 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.522+0000 7f36f9ed8640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec142c00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.522+0000 7f36f9ed8640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec142c00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43788/0 (socket says 192.168.123.101:43788) 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.522+0000 7f36f9ed8640 1 -- 192.168.123.101:0/836432097 learned_addr learned my addr 192.168.123.101:0/836432097 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.522+0000 7f36f9ed8640 1 -- 192.168.123.101:0/836432097 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) -- 0x7f36ec143140 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.522+0000 7f36f9ed8640 1 --2- 192.168.123.101:0/836432097 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec142c00 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f36f000aaf0 tx=0x7f36f0037680 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.526+0000 7f36e2ffd640 1 -- 192.168.123.101:0/836432097 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f36f0043070 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.526+0000 7f36faeda640 1 -- 192.168.123.101:0/836432097 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f36ec1433d0 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.526+0000 7f36faeda640 1 -- 192.168.123.101:0/836432097 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f36ec1460c0 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.526+0000 7f36e2ffd640 1 -- 192.168.123.101:0/836432097 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f36f0037870 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.526+0000 7f36e2ffd640 1 -- 192.168.123.101:0/836432097 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f36f0047400 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.526+0000 7f36e2ffd640 1 -- 192.168.123.101:0/836432097 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 4) ==== 50247+0+0 (secure 0 0 0) 0x7f36f003b070 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.526+0000 7f36e2ffd640 1 --2- 192.168.123.101:0/836432097 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f36d403dce0 0x7f36d40401a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.526+0000 7f36f96d7640 1 -- 192.168.123.101:0/836432097 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f36d403dce0 msgr2=0x7f36d40401a0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.101:6800/2299265276 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.526+0000 7f36f96d7640 1 --2- 192.168.123.101:0/836432097 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f36d403dce0 0x7f36d40401a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.526+0000 7f36e2ffd640 1 -- 192.168.123.101:0/836432097 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(1..1 
src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f36f0078370 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.526+0000 7f36faeda640 1 -- 192.168.123.101:0/836432097 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f36bc005180 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.534+0000 7f36e2ffd640 1 -- 192.168.123.101:0/836432097 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f36f0066070 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.654+0000 7f36faeda640 1 -- 192.168.123.101:0/836432097 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7f36bc005c80 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.654+0000 7f36e2ffd640 1 -- 192.168.123.101:0/836432097 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v4) ==== 56+0+88 (secure 0 0 0) 0x7f36f006c070 con 0x7f36ec0a4920 2026-03-09T15:48:09.708 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.658+0000 7f36e0ff9640 1 -- 192.168.123.101:0/836432097 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f36d403dce0 msgr2=0x7f36d40401a0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:48:09.709 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.658+0000 7f36e0ff9640 1 --2- 192.168.123.101:0/836432097 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f36d403dce0 0x7f36d40401a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:09.709 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.658+0000 7f36e0ff9640 1 -- 192.168.123.101:0/836432097 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 msgr2=0x7f36ec142c00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:09.709 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.658+0000 7f36e0ff9640 1 --2- 192.168.123.101:0/836432097 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec142c00 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f36f000aaf0 tx=0x7f36f0037680 comp rx=0 tx=0).stop 2026-03-09T15:48:09.709 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.658+0000 7f36e0ff9640 1 -- 192.168.123.101:0/836432097 shutdown_connections 2026-03-09T15:48:09.709 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.658+0000 7f36e0ff9640 1 --2- 192.168.123.101:0/836432097 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7f36d403dce0 0x7f36d40401a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:09.709 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.658+0000 7f36e0ff9640 1 --2- 192.168.123.101:0/836432097 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f36ec0a4920 0x7f36ec142c00 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:09.709 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.658+0000 7f36e0ff9640 1 -- 192.168.123.101:0/836432097 >> 192.168.123.101:0/836432097 conn(0x7f36ec09fc30 msgr2=0x7f36ec0a0720 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:09.709 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.658+0000 7f36e0ff9640 1 -- 192.168.123.101:0/836432097 shutdown_connections 2026-03-09T15:48:09.709 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.658+0000 7f36e0ff9640 1 -- 192.168.123.101:0/836432097 wait complete. 2026-03-09T15:48:09.709 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart... 2026-03-09T15:48:09.709 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 4... 2026-03-09T15:48:09.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:09 vm01 bash[21002]: debug 2026-03-09T15:48:09.882+0000 7f3e922d1140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T15:48:10.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:10 vm01 bash[20728]: audit 2026-03-09T15:48:09.214067+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.101:0/2808265687' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T15:48:10.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:10 vm01 bash[20728]: audit 2026-03-09T15:48:09.214067+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.101:0/2808265687' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T15:48:10.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:10 vm01 bash[20728]: cluster 2026-03-09T15:48:09.217237+0000 mon.a (mon.0) 33 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T15:48:10.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:10 vm01 bash[20728]: cluster 2026-03-09T15:48:09.217237+0000 mon.a (mon.0) 33 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T15:48:10.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:10 vm01 bash[20728]: audit 2026-03-09T15:48:09.657902+0000 mon.a (mon.0) 34 : audit [DBG] from='client.? 192.168.123.101:0/836432097' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T15:48:10.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:10 vm01 bash[20728]: audit 2026-03-09T15:48:09.657902+0000 mon.a (mon.0) 34 : audit [DBG] from='client.? 192.168.123.101:0/836432097' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T15:48:10.683 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:10 vm01 bash[21002]: debug 2026-03-09T15:48:10.386+0000 7f3e922d1140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T15:48:10.683 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:10 vm01 bash[21002]: debug 2026-03-09T15:48:10.478+0000 7f3e922d1140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T15:48:10.683 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:10 vm01 bash[21002]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-09T15:48:10.683 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:10 vm01 bash[21002]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T15:48:10.683 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:10 vm01 bash[21002]: from numpy import show_config as show_numpy_config 2026-03-09T15:48:10.683 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:10 vm01 bash[21002]: debug 2026-03-09T15:48:10.614+0000 7f3e922d1140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T15:48:11.183 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:10 vm01 bash[21002]: debug 2026-03-09T15:48:10.762+0000 7f3e922d1140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T15:48:11.183 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:10 vm01 bash[21002]: debug 2026-03-09T15:48:10.802+0000 7f3e922d1140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T15:48:11.183 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:10 vm01 bash[21002]: debug 2026-03-09T15:48:10.842+0000 7f3e922d1140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T15:48:11.183 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:10 vm01 bash[21002]: debug 2026-03-09T15:48:10.886+0000 7f3e922d1140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T15:48:11.183 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:10 vm01 bash[21002]: debug 2026-03-09T15:48:10.942+0000 7f3e922d1140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T15:48:11.671 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:11 vm01 bash[21002]: debug 2026-03-09T15:48:11.386+0000 7f3e922d1140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T15:48:11.671 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:11 vm01 bash[21002]: debug 2026-03-09T15:48:11.426+0000 7f3e922d1140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T15:48:11.671 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:11 vm01 bash[21002]: debug 2026-03-09T15:48:11.462+0000 7f3e922d1140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T15:48:11.671 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:11 vm01 bash[21002]: debug 2026-03-09T15:48:11.622+0000 7f3e922d1140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T15:48:11.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:11 vm01 bash[21002]: debug 2026-03-09T15:48:11.662+0000 7f3e922d1140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T15:48:11.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:11 vm01 bash[21002]: debug 2026-03-09T15:48:11.706+0000 7f3e922d1140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T15:48:11.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:11 vm01 bash[21002]: debug 2026-03-09T15:48:11.830+0000 7f3e922d1140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:48:12.281 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:12 vm01 bash[21002]: debug 2026-03-09T15:48:11.998+0000 7f3e922d1140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T15:48:12.281 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:12 vm01 bash[21002]: debug 2026-03-09T15:48:12.190+0000 7f3e922d1140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T15:48:12.281 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:12 vm01 
bash[21002]: debug 2026-03-09T15:48:12.230+0000 7f3e922d1140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T15:48:12.683 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:12 vm01 bash[21002]: debug 2026-03-09T15:48:12.274+0000 7f3e922d1140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T15:48:12.683 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:12 vm01 bash[21002]: debug 2026-03-09T15:48:12.434+0000 7f3e922d1140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:12 vm01 bash[21002]: debug 2026-03-09T15:48:12.682+0000 7f3e922d1140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: cluster 2026-03-09T15:48:12.689730+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: cluster 2026-03-09T15:48:12.689730+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: cluster 2026-03-09T15:48:12.690181+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: cluster 2026-03-09T15:48:12.690181+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: cluster 2026-03-09T15:48:12.695330+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: cluster 2026-03-09T15:48:12.695330+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: cluster 2026-03-09T15:48:12.695459+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e5: y(active, starting, since 0.00540697s) 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: cluster 2026-03-09T15:48:12.695459+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e5: y(active, starting, since 0.00540697s) 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.698260+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.698260+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.699474+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.699474+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 
2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.700523+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.700523+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.700844+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.700844+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.701158+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.701158+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: cluster 2026-03-09T15:48:12.708158+0000 mon.a (mon.0) 44 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: cluster 2026-03-09T15:48:12.708158+0000 mon.a (mon.0) 44 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.718877+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.718877+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.722479+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.722479+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.737496+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.737496+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:13.183 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.738758+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:13.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:12 vm01 bash[20728]: audit 2026-03-09T15:48:12.738758+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6, 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd78963d640 1 Processor -- start 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd78963d640 1 -- start start 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd78963d640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd784074b30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd78963d640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fd784075100 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd783fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd784074b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd783fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd784074b30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43794/0 (socket says 192.168.123.101:43794) 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd783fff640 1 -- 192.168.123.101:0/1731972293 learned_addr learned my addr 192.168.123.101:0/1731972293 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd783fff640 1 -- 192.168.123.101:0/1731972293 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd78410e1c0 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd783fff640 1 --2- 192.168.123.101:0/1731972293 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd784074b30 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7fd77400a9c0 tx=0x7fd774033650 comp rx=0 tx=0).ready entity=mon.0 
client_cookie=d81b34f9e3fcac6d server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd7837fe640 1 -- 192.168.123.101:0/1731972293 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd774037580 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd7837fe640 1 -- 192.168.123.101:0/1731972293 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fd774037b40 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd7837fe640 1 -- 192.168.123.101:0/1731972293 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd77403ca80 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd78963d640 1 -- 192.168.123.101:0/1731972293 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 msgr2=0x7fd784074b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.954+0000 7fd78963d640 1 --2- 192.168.123.101:0/1731972293 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd784074b30 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7fd77400a9c0 tx=0x7fd774033650 comp rx=0 tx=0).stop 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd78963d640 1 -- 192.168.123.101:0/1731972293 shutdown_connections 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd78963d640 1 --2- 192.168.123.101:0/1731972293 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd784074b30 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd78963d640 1 -- 192.168.123.101:0/1731972293 >> 192.168.123.101:0/1731972293 conn(0x7fd78406fa60 msgr2=0x7fd784071ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd78963d640 1 -- 192.168.123.101:0/1731972293 shutdown_connections 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd78963d640 1 -- 192.168.123.101:0/1731972293 wait complete. 
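The "/usr/bin/ceph: stderr ... 1 -- ..." lines interleaved here are messenger debug traces from each short-lived CLI invocation: the client connects to the mon, subscribes to the monmap and config, issues its command, then marks the connection down and waits for shutdown to complete. A minimal sketch of how one might reproduce similarly verbose traces by hand, assuming a reachable cluster and that the usual CEPH_ARGS environment variable is honored on the local node (an assumption about the setup, not something taken from this run):

    import os
    import subprocess

    # Raise messenger debugging for this one client invocation only;
    # CEPH_ARGS is read by the ceph CLI in addition to ceph.conf.
    env = dict(os.environ, CEPH_ARGS="--debug-ms 1")
    subprocess.run(["ceph", "mgr", "dump"], check=True, env=env)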
2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd78963d640 1 Processor -- start 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd78963d640 1 -- start start 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd78963d640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd7841b3c70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd78963d640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fd78410eb60 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd783fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd7841b3c70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd783fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd7841b3c70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43800/0 (socket says 192.168.123.101:43800) 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd783fff640 1 -- 192.168.123.101:0/2091487693 learned_addr learned my addr 192.168.123.101:0/2091487693 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd783fff640 1 -- 192.168.123.101:0/2091487693 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd7841b41b0 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd783fff640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd7841b3c70 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fd77400aaf0 tx=0x7fd774037680 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd774043070 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fd774037940 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd774046540 con 0x7fd784074730 2026-03-09T15:48:13.772 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd78963d640 1 -- 192.168.123.101:0/2091487693 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd7841b4440 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd78963d640 1 -- 192.168.123.101:0/2091487693 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd7841b7130 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 4) ==== 50247+0+0 (secure 0 0 0) 0x7fd77403b070 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd781ffb640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 0x7fd764040190 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 --> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7fd7640408a0 con 0x7fd76403dcd0 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.958+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fd774077d20 con 0x7fd784074730 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.962+0000 7fd77bfff640 1 -- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 msgr2=0x7fd764040190 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.101:6800/2299265276 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:09.962+0000 7fd77bfff640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 0x7fd764040190 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:10.162+0000 7fd77bfff640 1 -- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 msgr2=0x7fd764040190 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.101:6800/2299265276 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:10.162+0000 7fd77bfff640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 0x7fd764040190 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:10.562+0000 7fd77bfff640 1 -- 192.168.123.101:0/2091487693 >> 
[v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 msgr2=0x7fd764040190 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.101:6800/2299265276 2026-03-09T15:48:13.772 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:10.562+0000 7fd77bfff640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 0x7fd764040190 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:11.362+0000 7fd77bfff640 1 -- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 msgr2=0x7fd764040190 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.101:6800/2299265276 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:11.362+0000 7fd77bfff640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 0x7fd764040190 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:12.690+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mgrmap(e 5) ==== 50014+0+0 (secure 0 0 0) 0x7fd77400bce0 con 0x7fd784074730 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:12.690+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 msgr2=0x7fd764040190 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:12.690+0000 7fd781ffb640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 0x7fd764040190 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.690+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mgrmap(e 6) ==== 50141+0+0 (secure 0 0 0) 0x7fd774046a10 con 0x7fd784074730 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.690+0000 7fd781ffb640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fd764041670 0x7fd764043a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.690+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 --> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7fd77400de50 con 0x7fd764041670 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.694+0000 7fd77bfff640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] 
conn(0x7fd764041670 0x7fd764043a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.694+0000 7fd77bfff640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fd764041670 0x7fd764043a60 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7fd770003a80 tx=0x7fd7700092b0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.698+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 <== mgr.14118 v2:192.168.123.101:6800/2530303036 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (secure 0 0 0) 0x7fd77400de50 con 0x7fd764041670 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 -- 192.168.123.101:0/2091487693 --> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7fd784074b30 con 0x7fd764041670 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd781ffb640 1 -- 192.168.123.101:0/2091487693 <== mgr.14118 v2:192.168.123.101:6800/2530303036 2 ==== command_reply(tid 1: 0 ) ==== 8+0+51 (secure 0 0 0) 0x7fd784074b30 con 0x7fd764041670 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 -- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fd764041670 msgr2=0x7fd764043a60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fd764041670 0x7fd764043a60 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7fd770003a80 tx=0x7fd7700092b0 comp rx=0 tx=0).stop 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 -- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 msgr2=0x7fd7841b3c70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd7841b3c70 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fd77400aaf0 tx=0x7fd774037680 comp rx=0 tx=0).stop 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 -- 192.168.123.101:0/2091487693 shutdown_connections 2026-03-09T15:48:13.773 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fd764041670 0x7fd764043a60 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:13.774 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:6800/2299265276,v1:192.168.123.101:6801/2299265276] conn(0x7fd76403dcd0 0x7fd764040190 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:13.774 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 --2- 192.168.123.101:0/2091487693 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd784074730 0x7fd7841b3c70 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:13.774 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 -- 192.168.123.101:0/2091487693 >> 192.168.123.101:0/2091487693 conn(0x7fd78406fa60 msgr2=0x7fd784070520 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:13.774 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 -- 192.168.123.101:0/2091487693 shutdown_connections 2026-03-09T15:48:13.774 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.702+0000 7fd78963d640 1 -- 192.168.123.101:0/2091487693 wait complete. 2026-03-09T15:48:13.774 INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 4 is available 2026-03-09T15:48:13.774 INFO:teuthology.orchestra.run.vm01.stdout:Setting orchestrator backend to cephadm... 2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: cephadm 2026-03-09T15:48:12.716436+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: cephadm 2026-03-09T15:48:12.716436+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 
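The "Setting orchestrator backend to cephadm..." step above corresponds to the mgr_command visible a few lines below, {"prefix": "orch set backend", "module_name": "cephadm"}. A minimal sketch of the equivalent manual step, assuming a node with the ceph CLI and an admin keyring available (those assumptions are not taken from this run):

    import subprocess

    def ceph(*args: str) -> str:
        # Shell out to the ceph CLI; relies on /etc/ceph/ceph.conf and an
        # admin keyring being present on the node running this.
        return subprocess.run(
            ["ceph", *args], check=True, capture_output=True, text=True
        ).stdout

    # Point the orchestrator at the cephadm mgr module, as the harness does here.
    ceph("orch", "set", "backend", "cephadm")
    # Confirm which mgr is active and that it reports as available.
    print(ceph("mgr", "dump"))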
2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: audit 2026-03-09T15:48:12.748922+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: audit 2026-03-09T15:48:12.748922+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: audit 2026-03-09T15:48:12.752175+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: audit 2026-03-09T15:48:12.752175+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: audit 2026-03-09T15:48:13.239987+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: audit 2026-03-09T15:48:13.239987+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: audit 2026-03-09T15:48:13.242976+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: audit 2026-03-09T15:48:13.242976+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: cluster 2026-03-09T15:48:13.699230+0000 mon.a (mon.0) 53 : cluster [DBG] mgrmap e6: y(active, since 1.00918s) 2026-03-09T15:48:14.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:13 vm01 bash[20728]: cluster 2026-03-09T15:48:13.699230+0000 mon.a (mon.0) 53 : cluster [DBG] mgrmap e6: y(active, since 1.00918s) 2026-03-09T15:48:14.123 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 Processor -- start 2026-03-09T15:48:14.123 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 -- start start 2026-03-09T15:48:14.123 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a80a4d30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.123 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fc7a80a5300 con 0x7fc7a80a4930 2026-03-09T15:48:14.123 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b63da640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a80a4d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.123 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b63da640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a80a4d30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43880/0 (socket says 192.168.123.101:43880) 2026-03-09T15:48:14.123 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b63da640 1 -- 192.168.123.101:0/3815435928 learned_addr learned my addr 192.168.123.101:0/3815435928 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:14.123 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b63da640 1 -- 192.168.123.101:0/3815435928 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc7a80a5b30 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b63da640 1 --2- 192.168.123.101:0/3815435928 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a80a4d30 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fc7ac00a9c0 tx=0x7fc7ac033650 comp rx=0 tx=0).ready entity=mon.0 client_cookie=9e9567a3cb3b4f9e server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b53d8640 1 -- 192.168.123.101:0/3815435928 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc7ac037580 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b53d8640 1 -- 192.168.123.101:0/3815435928 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fc7ac037b40 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b53d8640 1 -- 192.168.123.101:0/3815435928 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc7ac03ca80 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 -- 192.168.123.101:0/3815435928 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 msgr2=0x7fc7a80a4d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 --2- 192.168.123.101:0/3815435928 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a80a4d30 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fc7ac00a9c0 tx=0x7fc7ac033650 comp rx=0 tx=0).stop 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 -- 192.168.123.101:0/3815435928 shutdown_connections 2026-03-09T15:48:14.124 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 --2- 192.168.123.101:0/3815435928 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a80a4d30 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 -- 192.168.123.101:0/3815435928 >> 192.168.123.101:0/3815435928 conn(0x7fc7a809fc40 msgr2=0x7fc7a80a20a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 -- 192.168.123.101:0/3815435928 shutdown_connections 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 -- 192.168.123.101:0/3815435928 wait complete. 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.910+0000 7fc7b73dc640 1 Processor -- start 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.914+0000 7fc7b73dc640 1 -- start start 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.914+0000 7fc7b73dc640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a8142bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.914+0000 7fc7b73dc640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fc7a80a64d0 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.914+0000 7fc7b63da640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a8142bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.914+0000 7fc7b63da640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a8142bd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43882/0 (socket says 192.168.123.101:43882) 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.914+0000 7fc7b63da640 1 -- 192.168.123.101:0/3010145751 learned_addr learned my addr 192.168.123.101:0/3010145751 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.914+0000 7fc7b63da640 1 -- 192.168.123.101:0/3010145751 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc7a8143110 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.914+0000 7fc7b63da640 1 --2- 192.168.123.101:0/3010145751 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a8142bd0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fc7ac00aaf0 tx=0x7fc7ac037680 comp rx=0 tx=0).ready 
entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.918+0000 7fc79f7fe640 1 -- 192.168.123.101:0/3010145751 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc7ac043070 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.918+0000 7fc7b73dc640 1 -- 192.168.123.101:0/3010145751 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fc7a81433a0 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.918+0000 7fc7b73dc640 1 -- 192.168.123.101:0/3010145751 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fc7a8146090 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.918+0000 7fc79f7fe640 1 -- 192.168.123.101:0/3010145751 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fc7ac037940 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.918+0000 7fc79f7fe640 1 -- 192.168.123.101:0/3010145751 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc7ac046540 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.918+0000 7fc79f7fe640 1 -- 192.168.123.101:0/3010145751 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 6) ==== 50141+0+0 (secure 0 0 0) 0x7fc7ac03b070 con 0x7fc7a80a4930 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.918+0000 7fc79f7fe640 1 --2- 192.168.123.101:0/3010145751 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fc79003dba0 0x7fc790040060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.918+0000 7fc7b5bd9640 1 --2- 192.168.123.101:0/3010145751 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fc79003dba0 0x7fc790040060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.124 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.918+0000 7fc7b5bd9640 1 --2- 192.168.123.101:0/3010145751 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fc79003dba0 0x7fc790040060 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fc7b00521f0 tx=0x7fc7b006a990 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.918+0000 7fc79f7fe640 1 -- 192.168.123.101:0/3010145751 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7fc7ac077b20 con 0x7fc7a80a4930 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.918+0000 7fc7b73dc640 1 -- 192.168.123.101:0/3010145751 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) -- 0x7fc77c005180 con 0x7fc7a80a4930 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:13.922+0000 7fc79f7fe640 1 -- 192.168.123.101:0/3010145751 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fc7ac04c220 con 0x7fc7a80a4930 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.058+0000 7fc7b73dc640 1 -- 192.168.123.101:0/3010145751 --> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] -- mgr_command(tid 0: {"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}) -- 0x7fc77c002bf0 con 0x7fc79003dba0 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.070+0000 7fc79f7fe640 1 -- 192.168.123.101:0/3010145751 <== mgr.14118 v2:192.168.123.101:6800/2530303036 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7fc77c002bf0 con 0x7fc79003dba0 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.070+0000 7fc79d7fa640 1 -- 192.168.123.101:0/3010145751 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fc79003dba0 msgr2=0x7fc790040060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.070+0000 7fc79d7fa640 1 --2- 192.168.123.101:0/3010145751 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fc79003dba0 0x7fc790040060 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fc7b00521f0 tx=0x7fc7b006a990 comp rx=0 tx=0).stop 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.070+0000 7fc79d7fa640 1 -- 192.168.123.101:0/3010145751 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 msgr2=0x7fc7a8142bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.070+0000 7fc79d7fa640 1 --2- 192.168.123.101:0/3010145751 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a8142bd0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fc7ac00aaf0 tx=0x7fc7ac037680 comp rx=0 tx=0).stop 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.074+0000 7fc79d7fa640 1 -- 192.168.123.101:0/3010145751 shutdown_connections 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.074+0000 7fc79d7fa640 1 --2- 192.168.123.101:0/3010145751 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fc79003dba0 0x7fc790040060 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.074+0000 7fc79d7fa640 1 --2- 192.168.123.101:0/3010145751 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc7a80a4930 0x7fc7a8142bd0 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.074+0000 7fc79d7fa640 1 -- 192.168.123.101:0/3010145751 >> 
192.168.123.101:0/3010145751 conn(0x7fc7a809fc40 msgr2=0x7fc7a80a06d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.074+0000 7fc79d7fa640 1 -- 192.168.123.101:0/3010145751 shutdown_connections 2026-03-09T15:48:14.125 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.074+0000 7fc79d7fa640 1 -- 192.168.123.101:0/3010145751 wait complete. 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 Processor -- start 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 -- start start 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf40a4910 0x7fddf40a4d10 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fddf40a52e0 con 0x7fddf40a4910 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfaffd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf40a4910 0x7fddf40a4d10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfaffd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf40a4910 0x7fddf40a4d10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43884/0 (socket says 192.168.123.101:43884) 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfaffd640 1 -- 192.168.123.101:0/1550257823 learned_addr learned my addr 192.168.123.101:0/1550257823 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfaffd640 1 -- 192.168.123.101:0/1550257823 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fddf40a5b10 con 0x7fddf40a4910 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfaffd640 1 --2- 192.168.123.101:0/1550257823 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf40a4910 0x7fddf40a4d10 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fddec00a9c0 tx=0x7fddec033650 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c3050a57f3c31c52 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddf9ffb640 1 -- 192.168.123.101:0/1550257823 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fddec037580 con 0x7fddf40a4910 
2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddf9ffb640 1 -- 192.168.123.101:0/1550257823 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fddec037b40 con 0x7fddf40a4910 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 -- 192.168.123.101:0/1550257823 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf40a4910 msgr2=0x7fddf40a4d10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 --2- 192.168.123.101:0/1550257823 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf40a4910 0x7fddf40a4d10 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fddec00a9c0 tx=0x7fddec033650 comp rx=0 tx=0).stop 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 -- 192.168.123.101:0/1550257823 shutdown_connections 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 --2- 192.168.123.101:0/1550257823 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf40a4910 0x7fddf40a4d10 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 -- 192.168.123.101:0/1550257823 >> 192.168.123.101:0/1550257823 conn(0x7fddf409fc20 msgr2=0x7fddf40a2080 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 -- 192.168.123.101:0/1550257823 shutdown_connections 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 -- 192.168.123.101:0/1550257823 wait complete. 
2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 Processor -- start 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.258+0000 7fddfbfff640 1 -- start start 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfbfff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf4135f80 0x7fddf41363a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfbfff640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fddf40a64b0 con 0x7fddf4135f80 2026-03-09T15:48:14.406 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfaffd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf4135f80 0x7fddf41363a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfaffd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf4135f80 0x7fddf41363a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43900/0 (socket says 192.168.123.101:43900) 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfaffd640 1 -- 192.168.123.101:0/2790985412 learned_addr learned my addr 192.168.123.101:0/2790985412 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfaffd640 1 -- 192.168.123.101:0/2790985412 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fddf41368e0 con 0x7fddf4135f80 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfaffd640 1 --2- 192.168.123.101:0/2790985412 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf4135f80 0x7fddf41363a0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fddec03d9e0 tx=0x7fddec03da10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fdddbfff640 1 -- 192.168.123.101:0/2790985412 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fddec04a070 con 0x7fddf4135f80 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfbfff640 1 -- 192.168.123.101:0/2790985412 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fddf4136b70 con 0x7fddf4135f80 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfbfff640 1 -- 192.168.123.101:0/2790985412 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fddf41376c0 con 0x7fddf4135f80 2026-03-09T15:48:14.407 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fdddbfff640 1 -- 192.168.123.101:0/2790985412 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fddec010380 con 0x7fddf4135f80 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfbfff640 1 -- 192.168.123.101:0/2790985412 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fddcc005180 con 0x7fddf4135f80 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fdddbfff640 1 -- 192.168.123.101:0/2790985412 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fddec045af0 con 0x7fddf4135f80 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fdddbfff640 1 -- 192.168.123.101:0/2790985412 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 6) ==== 50141+0+0 (secure 0 0 0) 0x7fddec03b070 con 0x7fddf4135f80 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fdddbfff640 1 --2- 192.168.123.101:0/2790985412 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fddd403dbc0 0x7fddd4040080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fdddbfff640 1 -- 192.168.123.101:0/2790985412 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7fddec0795d0 con 0x7fddf4135f80 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfa7fc640 1 --2- 192.168.123.101:0/2790985412 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fddd403dbc0 0x7fddd4040080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.262+0000 7fddfa7fc640 1 --2- 192.168.123.101:0/2790985412 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fddd403dbc0 0x7fddd4040080 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fddfc0521f0 tx=0x7fddfc06a800 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.266+0000 7fdddbfff640 1 -- 192.168.123.101:0/2790985412 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fddec010070 con 0x7fddf4135f80 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.358+0000 7fddfbfff640 1 -- 192.168.123.101:0/2790985412 --> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] -- mgr_command(tid 0: {"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}) -- 0x7fddcc002bf0 con 0x7fddd403dbc0 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.362+0000 7fdddbfff640 1 -- 192.168.123.101:0/2790985412 <== mgr.14118 
v2:192.168.123.101:6800/2530303036 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+16 (secure 0 0 0) 0x7fddcc002bf0 con 0x7fddd403dbc0 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.362+0000 7fddd9ffb640 1 -- 192.168.123.101:0/2790985412 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fddd403dbc0 msgr2=0x7fddd4040080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.362+0000 7fddd9ffb640 1 --2- 192.168.123.101:0/2790985412 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fddd403dbc0 0x7fddd4040080 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fddfc0521f0 tx=0x7fddfc06a800 comp rx=0 tx=0).stop 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.362+0000 7fddd9ffb640 1 -- 192.168.123.101:0/2790985412 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf4135f80 msgr2=0x7fddf41363a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.362+0000 7fddd9ffb640 1 --2- 192.168.123.101:0/2790985412 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf4135f80 0x7fddf41363a0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fddec03d9e0 tx=0x7fddec03da10 comp rx=0 tx=0).stop 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.362+0000 7fddd9ffb640 1 -- 192.168.123.101:0/2790985412 shutdown_connections 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.362+0000 7fddd9ffb640 1 --2- 192.168.123.101:0/2790985412 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fddd403dbc0 0x7fddd4040080 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.362+0000 7fddd9ffb640 1 --2- 192.168.123.101:0/2790985412 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fddf4135f80 0x7fddf41363a0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.362+0000 7fddd9ffb640 1 -- 192.168.123.101:0/2790985412 >> 192.168.123.101:0/2790985412 conn(0x7fddf409fc20 msgr2=0x7fddf40a05f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.362+0000 7fddd9ffb640 1 -- 192.168.123.101:0/2790985412 shutdown_connections 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.366+0000 7fddd9ffb640 1 -- 192.168.123.101:0/2790985412 wait complete. 2026-03-09T15:48:14.407 INFO:teuthology.orchestra.run.vm01.stdout:Generating ssh key... 
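The mgr_command just above carries {"prefix": "cephadm set-user", "user": "root"}, and after the "Generating ssh key..." line the next invocation sends {"prefix": "cephadm generate-key"}: the harness is configuring the SSH identity the cephadm orchestrator will use to reach hosts. A minimal sketch of the same sequence from the CLI, reusing the ceph() helper sketched above; the get-pub-key call at the end is an assumption about the usual follow-up step and does not appear in this excerpt:

    # Tell cephadm to connect to managed hosts as root, then have it mint an SSH keypair.
    ceph("cephadm", "set-user", "root")
    ceph("cephadm", "generate-key")
    # The public half is typically fetched next so it can be added to each host's
    # authorized_keys (assumed step, not shown in this log excerpt).
    print(ceph("cephadm", "get-pub-key"))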
2026-03-09T15:48:14.694 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.510+0000 7fecc0853640 1 Processor -- start 2026-03-09T15:48:14.694 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecc0853640 1 -- start start 2026-03-09T15:48:14.694 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecc0853640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc1069b0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecc0853640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fecbc106f80 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecba575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc1069b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecba575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc1069b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43910/0 (socket says 192.168.123.101:43910) 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecba575640 1 -- 192.168.123.101:0/1409045613 learned_addr learned my addr 192.168.123.101:0/1409045613 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecba575640 1 -- 192.168.123.101:0/1409045613 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fecbc1077b0 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecba575640 1 --2- 192.168.123.101:0/1409045613 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc1069b0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7feca4009b80 tx=0x7feca402f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=131ad72141d8a96f server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecb9573640 1 -- 192.168.123.101:0/1409045613 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7feca403c070 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecb9573640 1 -- 192.168.123.101:0/1409045613 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7feca4037440 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecb9573640 1 -- 192.168.123.101:0/1409045613 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7feca4035340 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecc0853640 1 -- 192.168.123.101:0/1409045613 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 msgr2=0x7fecbc1069b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.514+0000 7fecc0853640 1 --2- 192.168.123.101:0/1409045613 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc1069b0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7feca4009b80 tx=0x7feca402f190 comp rx=0 tx=0).stop 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecc0853640 1 -- 192.168.123.101:0/1409045613 shutdown_connections 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecc0853640 1 --2- 192.168.123.101:0/1409045613 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc1069b0 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecc0853640 1 -- 192.168.123.101:0/1409045613 >> 192.168.123.101:0/1409045613 conn(0x7fecbc101d60 msgr2=0x7fecbc104180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecc0853640 1 -- 192.168.123.101:0/1409045613 shutdown_connections 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecc0853640 1 -- 192.168.123.101:0/1409045613 wait complete. 
2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecc0853640 1 Processor -- start 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecc0853640 1 -- start start 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecc0853640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc07c3e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecc0853640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fecbc1084b0 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecba575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc07c3e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecba575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc07c3e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43912/0 (socket says 192.168.123.101:43912) 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecba575640 1 -- 192.168.123.101:0/3069061418 learned_addr learned my addr 192.168.123.101:0/3069061418 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecba575640 1 -- 192.168.123.101:0/3069061418 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fecbc07c920 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecba575640 1 --2- 192.168.123.101:0/3069061418 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc07c3e0 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7feca40365d0 tx=0x7feca4036600 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7feca37fe640 1 -- 192.168.123.101:0/3069061418 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7feca4047070 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7feca37fe640 1 -- 192.168.123.101:0/3069061418 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7feca4036e50 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7feca37fe640 1 -- 192.168.123.101:0/3069061418 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7feca403c070 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecc0853640 1 -- 192.168.123.101:0/3069061418 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fecbc07ab10 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.518+0000 7fecc0853640 1 -- 192.168.123.101:0/3069061418 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fecbc07aff0 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.522+0000 7fecc0853640 1 -- 192.168.123.101:0/3069061418 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fecbc10a9e0 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.522+0000 7feca37fe640 1 -- 192.168.123.101:0/3069061418 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 6) ==== 50141+0+0 (secure 0 0 0) 0x7feca4054080 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.522+0000 7feca37fe640 1 --2- 192.168.123.101:0/3069061418 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fec9403d760 0x7fec9403fc20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.522+0000 7fecb9d74640 1 --2- 192.168.123.101:0/3069061418 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fec9403d760 0x7fec9403fc20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.522+0000 7feca37fe640 1 -- 192.168.123.101:0/3069061418 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7feca4076f10 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.526+0000 7feca37fe640 1 -- 192.168.123.101:0/3069061418 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7feca40773d0 con 0x7fecbc1065b0 2026-03-09T15:48:14.695 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.530+0000 7fecb9d74640 1 --2- 192.168.123.101:0/3069061418 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fec9403d760 0x7fec9403fc20 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7fecb00099c0 tx=0x7fecb0006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.618+0000 7fecc0853640 1 -- 192.168.123.101:0/3069061418 --> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] -- mgr_command(tid 0: {"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}) -- 0x7fecbc07b650 con 0x7fec9403d760 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.646+0000 7feca37fe640 1 -- 192.168.123.101:0/3069061418 <== mgr.14118 
v2:192.168.123.101:6800/2530303036 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7fecbc07b650 con 0x7fec9403d760 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.650+0000 7fecc0853640 1 -- 192.168.123.101:0/3069061418 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fec9403d760 msgr2=0x7fec9403fc20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.650+0000 7fecc0853640 1 --2- 192.168.123.101:0/3069061418 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fec9403d760 0x7fec9403fc20 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7fecb00099c0 tx=0x7fecb0006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.650+0000 7fecc0853640 1 -- 192.168.123.101:0/3069061418 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 msgr2=0x7fecbc07c3e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.650+0000 7fecc0853640 1 --2- 192.168.123.101:0/3069061418 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc07c3e0 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7feca40365d0 tx=0x7feca4036600 comp rx=0 tx=0).stop 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.650+0000 7fecc0853640 1 -- 192.168.123.101:0/3069061418 shutdown_connections 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.650+0000 7fecc0853640 1 --2- 192.168.123.101:0/3069061418 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fec9403d760 0x7fec9403fc20 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.650+0000 7fecc0853640 1 --2- 192.168.123.101:0/3069061418 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fecbc1065b0 0x7fecbc07c3e0 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.650+0000 7fecc0853640 1 -- 192.168.123.101:0/3069061418 >> 192.168.123.101:0/3069061418 conn(0x7fecbc101d60 msgr2=0x7fecbc1028d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.650+0000 7fecc0853640 1 -- 192.168.123.101:0/3069061418 shutdown_connections 2026-03-09T15:48:14.696 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.650+0000 7fecc0853640 1 -- 192.168.123.101:0/3069061418 wait complete. 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: Generating public/private ed25519 key pair. 
2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: Your identification has been saved in /tmp/tmpdvia4s8l/key 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: Your public key has been saved in /tmp/tmpdvia4s8l/key.pub 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: The key fingerprint is: 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: SHA256:xUrV9cBbaroFXUJQ7xPuXUkKh4HpFfq6GB6gq5KuGrQ ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: The key's randomart image is: 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: +--[ED25519 256]--+ 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: | ++o*= | 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: | =..+ o+o| 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: | o.+o o Oo| 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: | . +. + O.o| 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: | . . S . = +o| 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: |. . . . . . o +| 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: |.E . o . o ..| 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: |o. . . + . . | 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: |Bo.. o . 
| 2026-03-09T15:48:14.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:14 vm01 bash[21002]: +----[SHA256]-----+ 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK9x06PqRiQAZsjB9w6vP4G9bJhdyvM1QlHX61CC2Pex ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.822+0000 7fe7b61a0640 1 Processor -- start 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.822+0000 7fe7b61a0640 1 -- start start 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.822+0000 7fe7b61a0640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01071e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.822+0000 7fe7b61a0640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fe7b007a650 con 0x7fe7b0104dd0 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.822+0000 7fe7af7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01071e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.822+0000 7fe7af7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01071e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43918/0 (socket says 192.168.123.101:43918) 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.822+0000 7fe7af7fe640 1 -- 192.168.123.101:0/1671860879 learned_addr learned my addr 192.168.123.101:0/1671860879 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.822+0000 7fe7af7fe640 1 -- 192.168.123.101:0/1671860879 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe7b0107720 con 0x7fe7b0104dd0 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7af7fe640 1 --2- 192.168.123.101:0/1671860879 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01071e0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fe794009920 tx=0x7fe79402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c0a4f5c09973e13d server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7ae7fc640 1 -- 192.168.123.101:0/1671860879 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe79403c070 con 0x7fe7b0104dd0 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7ae7fc640 1 -- 192.168.123.101:0/1671860879 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fe794037440 con 
0x7fe7b0104dd0 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1671860879 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 msgr2=0x7fe7b01071e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7b61a0640 1 --2- 192.168.123.101:0/1671860879 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01071e0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fe794009920 tx=0x7fe79402ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1671860879 shutdown_connections 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7b61a0640 1 --2- 192.168.123.101:0/1671860879 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01071e0 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1671860879 >> 192.168.123.101:0/1671860879 conn(0x7fe7b0100c00 msgr2=0x7fe7b0103040 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1671860879 shutdown_connections 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1671860879 wait complete. 
2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7b61a0640 1 Processor -- start 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7b61a0640 1 -- start start 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7b61a0640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01a2c20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7b61a0640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fe7b01085a0 con 0x7fe7b0104dd0 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7af7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01a2c20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7af7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01a2c20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43932/0 (socket says 192.168.123.101:43932) 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7af7fe640 1 -- 192.168.123.101:0/1735138072 learned_addr learned my addr 192.168.123.101:0/1735138072 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:14.984 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.826+0000 7fe7af7fe640 1 -- 192.168.123.101:0/1735138072 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe7b01a3160 con 0x7fe7b0104dd0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.830+0000 7fe7af7fe640 1 --2- 192.168.123.101:0/1735138072 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01a2c20 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fe79402f450 tx=0x7fe794035d30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.830+0000 7fe7acff9640 1 -- 192.168.123.101:0/1735138072 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe79403c070 con 0x7fe7b0104dd0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.830+0000 7fe7acff9640 1 -- 192.168.123.101:0/1735138072 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fe794045070 con 0x7fe7b0104dd0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.830+0000 7fe7acff9640 1 -- 192.168.123.101:0/1735138072 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe794040a60 con 0x7fe7b0104dd0 2026-03-09T15:48:14.985 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.830+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1735138072 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe7b01a33f0 con 0x7fe7b0104dd0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.830+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1735138072 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe7b01a60e0 con 0x7fe7b0104dd0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.830+0000 7fe7acff9640 1 -- 192.168.123.101:0/1735138072 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 6) ==== 50141+0+0 (secure 0 0 0) 0x7fe794037650 con 0x7fe7b0104dd0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.830+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1735138072 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe7b01051d0 con 0x7fe7b0104dd0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.834+0000 7fe7acff9640 1 --2- 192.168.123.101:0/1735138072 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fe78803d7b0 0x7fe78803fc70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.834+0000 7fe7acff9640 1 -- 192.168.123.101:0/1735138072 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7fe794076780 con 0x7fe7b0104dd0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.834+0000 7fe7aeffd640 1 --2- 192.168.123.101:0/1735138072 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fe78803d7b0 0x7fe78803fc70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.834+0000 7fe7aeffd640 1 --2- 192.168.123.101:0/1735138072 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fe78803d7b0 0x7fe78803fc70 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7fe79c009a10 tx=0x7fe79c006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.834+0000 7fe7acff9640 1 -- 192.168.123.101:0/1735138072 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe7940368f0 con 0x7fe7b0104dd0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.930+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1735138072 --> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] -- mgr_command(tid 0: {"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}) -- 0x7fe7b019bff0 con 0x7fe78803d7b0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.934+0000 7fe7acff9640 1 -- 192.168.123.101:0/1735138072 <== mgr.14118 
v2:192.168.123.101:6800/2530303036 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+123 (secure 0 0 0) 0x7fe7b019bff0 con 0x7fe78803d7b0 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.934+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1735138072 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fe78803d7b0 msgr2=0x7fe78803fc70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.934+0000 7fe7b61a0640 1 --2- 192.168.123.101:0/1735138072 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fe78803d7b0 0x7fe78803fc70 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7fe79c009a10 tx=0x7fe79c006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.934+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1735138072 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 msgr2=0x7fe7b01a2c20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.934+0000 7fe7b61a0640 1 --2- 192.168.123.101:0/1735138072 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01a2c20 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fe79402f450 tx=0x7fe794035d30 comp rx=0 tx=0).stop 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.938+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1735138072 shutdown_connections 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.938+0000 7fe7b61a0640 1 --2- 192.168.123.101:0/1735138072 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fe78803d7b0 0x7fe78803fc70 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.938+0000 7fe7b61a0640 1 --2- 192.168.123.101:0/1735138072 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7b0104dd0 0x7fe7b01a2c20 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.938+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1735138072 >> 192.168.123.101:0/1735138072 conn(0x7fe7b0100c00 msgr2=0x7fe7b010ace0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.938+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1735138072 shutdown_connections 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:14.938+0000 7fe7b61a0640 1 -- 192.168.123.101:0/1735138072 wait complete. 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:Adding key to root@localhost authorized_keys... 2026-03-09T15:48:14.985 INFO:teuthology.orchestra.run.vm01.stdout:Adding host vm01... 
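Note: the stdout milestones just above ("Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub", "Adding key to root@localhost authorized_keys...", "Adding host vm01...") line up with the audit entries for "cephadm generate-key", "cephadm get-pub-key" and "orch host add" recorded by mgr.y. A minimal sketch of issuing the same sequence through the ceph CLI; this is not the teuthology cephadm task's actual code, and the key path, hostname and address are simply the values from this run:

#!/usr/bin/env python3
"""Illustrative sketch of the SSH bootstrap steps visible in this log,
driven through the ceph CLI (assumes admin keyring access and root)."""
import subprocess

def ceph(*args):
    # Run a ceph CLI command and return its stdout as text.
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# 1. Have the cephadm mgr module generate its SSH identity.
ceph("cephadm", "generate-key")

# 2. Fetch the public half and stage it where this test expects it.
pub = ceph("cephadm", "get-pub-key")
with open("/home/ubuntu/cephtest/ceph.pub", "w") as f:   # path from this run
    f.write(pub)

# 3. Authorize the key for root (the harness does this over ssh on each
#    target; shown here for the local host only).
with open("/root/.ssh/authorized_keys", "a") as f:
    f.write(pub)

# 4. Register the host with the orchestrator (hostname/addr from this run).
print(ceph("orch", "host", "add", "vm01", "192.168.123.101"))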
2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:13.701716+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:13.701716+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:13.706313+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:13.706313+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.064286+0000 mgr.y (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.064286+0000 mgr.y (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.068160+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.068160+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.078272+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.078272+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.588032+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.588032+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.648209+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:15.235 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.648209+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.649952+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:15.235 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:15 vm01 bash[20728]: audit 2026-03-09T15:48:14.649952+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: audit 2026-03-09T15:48:14.365811+0000 mgr.y (mgr.14118) 5 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: audit 2026-03-09T15:48:14.365811+0000 mgr.y (mgr.14118) 5 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cephadm 2026-03-09T15:48:14.375549+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Bus STARTING 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cephadm 2026-03-09T15:48:14.375549+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Bus STARTING 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cephadm 2026-03-09T15:48:14.476978+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cephadm 2026-03-09T15:48:14.476978+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cephadm 2026-03-09T15:48:14.587385+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cephadm 2026-03-09T15:48:14.587385+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cephadm 2026-03-09T15:48:14.587497+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Bus STARTED 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cephadm 2026-03-09T15:48:14.587497+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Bus STARTED 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cephadm 2026-03-09T15:48:14.587883+0000 mgr.y (mgr.14118) 10 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Client ('192.168.123.101', 47824) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:48:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 
bash[20728]: cephadm 2026-03-09T15:48:14.587883+0000 mgr.y (mgr.14118) 10 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Client ('192.168.123.101', 47824) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:48:16.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: audit 2026-03-09T15:48:14.625431+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:16.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: audit 2026-03-09T15:48:14.625431+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:16.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cephadm 2026-03-09T15:48:14.625592+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-09T15:48:16.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cephadm 2026-03-09T15:48:14.625592+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-09T15:48:16.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: audit 2026-03-09T15:48:14.937553+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:16.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: audit 2026-03-09T15:48:14.937553+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:16.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: audit 2026-03-09T15:48:15.223723+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:16.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: audit 2026-03-09T15:48:15.223723+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:16.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cluster 2026-03-09T15:48:15.656726+0000 mon.a (mon.0) 59 : cluster [DBG] mgrmap e7: y(active, since 2s) 2026-03-09T15:48:16.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:16 vm01 bash[20728]: cluster 2026-03-09T15:48:15.656726+0000 mon.a (mon.0) 59 : cluster [DBG] mgrmap e7: y(active, since 2s) 2026-03-09T15:48:17.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Added host 'vm01' with addr '192.168.123.101' 2026-03-09T15:48:17.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.110+0000 7fbf3c3e3640 1 Processor -- start 2026-03-09T15:48:17.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.110+0000 7fbf3c3e3640 1 -- start start 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.110+0000 7fbf3c3e3640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 
0x7fbf3407ab30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.110+0000 7fbf3c3e3640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fbf3407b070 con 0x7fbf3407c6d0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.110+0000 7fbf3a158640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 0x7fbf3407ab30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.110+0000 7fbf3a158640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 0x7fbf3407ab30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43936/0 (socket says 192.168.123.101:43936) 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.110+0000 7fbf3a158640 1 -- 192.168.123.101:0/4074485415 learned_addr learned my addr 192.168.123.101:0/4074485415 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.110+0000 7fbf3a158640 1 -- 192.168.123.101:0/4074485415 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbf3407b1f0 con 0x7fbf3407c6d0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3a158640 1 --2- 192.168.123.101:0/4074485415 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 0x7fbf3407ab30 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fbf24009920 tx=0x7fbf2402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d5ac0bb34dbba80d server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf39156640 1 -- 192.168.123.101:0/4074485415 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbf2403c070 con 0x7fbf3407c6d0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf39156640 1 -- 192.168.123.101:0/4074485415 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fbf24037440 con 0x7fbf3407c6d0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf39156640 1 -- 192.168.123.101:0/4074485415 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbf24035340 con 0x7fbf3407c6d0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/4074485415 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 msgr2=0x7fbf3407ab30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 --2- 192.168.123.101:0/4074485415 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] 
conn(0x7fbf3407c6d0 0x7fbf3407ab30 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fbf24009920 tx=0x7fbf2402ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/4074485415 shutdown_connections 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 --2- 192.168.123.101:0/4074485415 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 0x7fbf3407ab30 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/4074485415 >> 192.168.123.101:0/4074485415 conn(0x7fbf34101db0 msgr2=0x7fbf341041d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/4074485415 shutdown_connections 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/4074485415 wait complete. 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 Processor -- start 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 -- start start 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 0x7fbf341098b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3a158640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 0x7fbf341098b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3a158640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 0x7fbf341098b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:43940/0 (socket says 192.168.123.101:43940) 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3a158640 1 -- 192.168.123.101:0/957689319 learned_addr learned my addr 192.168.123.101:0/957689319 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fbf3410ca10 con 0x7fbf3407c6d0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3a158640 1 -- 192.168.123.101:0/957689319 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbf34109df0 con 0x7fbf3407c6d0 2026-03-09T15:48:17.260 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3a158640 1 --2- 192.168.123.101:0/957689319 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 0x7fbf341098b0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fbf24009a50 tx=0x7fbf24037670 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf237fe640 1 -- 192.168.123.101:0/957689319 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbf2403c030 con 0x7fbf3407c6d0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf237fe640 1 -- 192.168.123.101:0/957689319 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fbf2403e070 con 0x7fbf3407c6d0 2026-03-09T15:48:17.260 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf237fe640 1 -- 192.168.123.101:0/957689319 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbf24042d00 con 0x7fbf3407c6d0 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.114+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/957689319 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fbf3410a080 con 0x7fbf3407c6d0 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.118+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/957689319 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fbf341063f0 con 0x7fbf3407c6d0 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.118+0000 7fbf237fe640 1 -- 192.168.123.101:0/957689319 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 6) ==== 50141+0+0 (secure 0 0 0) 0x7fbf2404c430 con 0x7fbf3407c6d0 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.118+0000 7fbf237fe640 1 --2- 192.168.123.101:0/957689319 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fbf1403db20 0x7fbf1403ffe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.118+0000 7fbf237fe640 1 -- 192.168.123.101:0/957689319 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7fbf2403d070 con 0x7fbf3407c6d0 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.118+0000 7fbf39957640 1 --2- 192.168.123.101:0/957689319 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fbf1403db20 0x7fbf1403ffe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.118+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/957689319 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fbefc005180 con 0x7fbf3407c6d0 2026-03-09T15:48:17.261 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.122+0000 7fbf39957640 1 --2- 192.168.123.101:0/957689319 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fbf1403db20 0x7fbf1403ffe0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fbf280099c0 tx=0x7fbf28006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.122+0000 7fbf237fe640 1 -- 192.168.123.101:0/957689319 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fbf24047210 con 0x7fbf3407c6d0 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.218+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/957689319 --> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}) -- 0x7fbefc002bf0 con 0x7fbf1403db20 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:15.654+0000 7fbf237fe640 1 -- 192.168.123.101:0/957689319 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7fbf2404c7a0 con 0x7fbf3407c6d0 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.194+0000 7fbf237fe640 1 -- 192.168.123.101:0/957689319 <== mgr.14118 v2:192.168.123.101:6800/2530303036 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (secure 0 0 0) 0x7fbefc002bf0 con 0x7fbf1403db20 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.194+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/957689319 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fbf1403db20 msgr2=0x7fbf1403ffe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.194+0000 7fbf3c3e3640 1 --2- 192.168.123.101:0/957689319 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fbf1403db20 0x7fbf1403ffe0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fbf280099c0 tx=0x7fbf28006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.194+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/957689319 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 msgr2=0x7fbf341098b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.194+0000 7fbf3c3e3640 1 --2- 192.168.123.101:0/957689319 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 0x7fbf341098b0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fbf24009a50 tx=0x7fbf24037670 comp rx=0 tx=0).stop 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.198+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/957689319 shutdown_connections 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.198+0000 7fbf3c3e3640 1 --2- 192.168.123.101:0/957689319 >> 
[v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7fbf1403db20 0x7fbf1403ffe0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.198+0000 7fbf3c3e3640 1 --2- 192.168.123.101:0/957689319 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fbf3407c6d0 0x7fbf341098b0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.198+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/957689319 >> 192.168.123.101:0/957689319 conn(0x7fbf34101db0 msgr2=0x7fbf341028c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.198+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/957689319 shutdown_connections 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.198+0000 7fbf3c3e3640 1 -- 192.168.123.101:0/957689319 wait complete. 2026-03-09T15:48:17.261 INFO:teuthology.orchestra.run.vm01.stdout:Deploying unmanaged mon service... 2026-03-09T15:48:17.332 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:17 vm01 bash[20728]: cephadm 2026-03-09T15:48:15.831307+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm01 2026-03-09T15:48:17.332 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:17 vm01 bash[20728]: cephadm 2026-03-09T15:48:15.831307+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm01 2026-03-09T15:48:17.591 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mon update... 
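Note: "Scheduled mon update..." is the orchestrator acknowledging the mon service deployment requested just above ("Deploying unmanaged mon service..."); the actual daemon placement happens asynchronously in the cephadm mgr module. An illustrative follow-up check one might run afterward, polling until a mon is reported running on vm01; the JSON field names (daemon_type, hostname, status_desc) are assumptions about `ceph orch ps --format json` output and may differ by release:

#!/usr/bin/env python3
"""Illustrative wait loop (not from this run): poll the orchestrator
until a mon daemon is reported running on vm01."""
import json
import subprocess
import time

def orch_ps():
    out = subprocess.run(["ceph", "orch", "ps", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

deadline = time.time() + 300          # give the mgr a few minutes
while time.time() < deadline:
    mons = [d for d in orch_ps()
            if d.get("daemon_type") == "mon" and d.get("hostname") == "vm01"]
    if any(d.get("status_desc") == "running" for d in mons):
        print("mon is running on vm01")
        break
    time.sleep(10)
else:
    raise SystemExit("timed out waiting for mon on vm01")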
2026-03-09T15:48:17.591 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 Processor -- start 2026-03-09T15:48:17.591 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 -- start start 2026-03-09T15:48:17.591 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c080370 0x7f408c080770 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:17.591 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f408c080d40 con 0x7f408c080370 2026-03-09T15:48:17.591 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f40921ac640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c080370 0x7f408c080770 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f40921ac640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c080370 0x7f408c080770 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38236/0 (socket says 192.168.123.101:38236) 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f40921ac640 1 -- 192.168.123.101:0/2957288217 learned_addr learned my addr 192.168.123.101:0/2957288217 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f40921ac640 1 -- 192.168.123.101:0/2957288217 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f408c0815c0 con 0x7f408c080370 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f40921ac640 1 --2- 192.168.123.101:0/2957288217 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c080370 0x7f408c080770 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f407c009920 tx=0x7f407c02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=264b2d4709e7161a server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f40911aa640 1 -- 192.168.123.101:0/2957288217 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f407c03c070 con 0x7f408c080370 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f40911aa640 1 -- 192.168.123.101:0/2957288217 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f407c037440 con 0x7f408c080370 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 -- 192.168.123.101:0/2957288217 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c080370 msgr2=0x7f408c080770 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 --2- 192.168.123.101:0/2957288217 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c080370 0x7f408c080770 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f407c009920 tx=0x7f407c02ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 -- 192.168.123.101:0/2957288217 shutdown_connections 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 --2- 192.168.123.101:0/2957288217 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c080370 0x7f408c080770 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 -- 192.168.123.101:0/2957288217 >> 192.168.123.101:0/2957288217 conn(0x7f408c07bc10 msgr2=0x7f408c07e030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 -- 192.168.123.101:0/2957288217 shutdown_connections 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 -- 192.168.123.101:0/2957288217 wait complete. 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 Processor -- start 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 -- start start 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c1ab1a0 0x7f408c1ab5c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.414+0000 7f4094437640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f408c0822c0 con 0x7f408c1ab1a0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f40921ac640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c1ab1a0 0x7f408c1ab5c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f40921ac640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c1ab1a0 0x7f408c1ab5c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38238/0 (socket says 192.168.123.101:38238) 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f40921ac640 1 -- 192.168.123.101:0/3164976693 learned_addr learned my addr 192.168.123.101:0/3164976693 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: 
stderr 2026-03-09T15:48:17.418+0000 7f40921ac640 1 -- 192.168.123.101:0/3164976693 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f408c1abb00 con 0x7f408c1ab1a0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f40921ac640 1 --2- 192.168.123.101:0/3164976693 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c1ab1a0 0x7f408c1ab5c0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f407c035770 tx=0x7f407c0357a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f407b7fe640 1 -- 192.168.123.101:0/3164976693 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f407c045070 con 0x7f408c1ab1a0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f4094437640 1 -- 192.168.123.101:0/3164976693 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f408c1abd90 con 0x7f408c1ab1a0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f4094437640 1 -- 192.168.123.101:0/3164976693 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f408c10fb20 con 0x7f408c1ab1a0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f407b7fe640 1 -- 192.168.123.101:0/3164976693 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f407c035d80 con 0x7f408c1ab1a0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f407b7fe640 1 -- 192.168.123.101:0/3164976693 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f407c03c050 con 0x7f408c1ab1a0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f407b7fe640 1 -- 192.168.123.101:0/3164976693 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7f407c04a430 con 0x7f408c1ab1a0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f407b7fe640 1 --2- 192.168.123.101:0/3164976693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f407404f120 0x7f40740515e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f40919ab640 1 --2- 192.168.123.101:0/3164976693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f407404f120 0x7f40740515e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f40919ab640 1 --2- 192.168.123.101:0/3164976693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f407404f120 0x7f40740515e0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f408800ad80 tx=0x7f40880093f0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 
server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f407b7fe640 1 -- 192.168.123.101:0/3164976693 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f407c076ff0 con 0x7f408c1ab1a0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.418+0000 7f4094437640 1 -- 192.168.123.101:0/3164976693 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4060005180 con 0x7f408c1ab1a0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.422+0000 7f407b7fe640 1 -- 192.168.123.101:0/3164976693 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f407c03c1f0 con 0x7f408c1ab1a0 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.530+0000 7f4094437640 1 -- 192.168.123.101:0/3164976693 --> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7f4054000d90 con 0x7f407404f120 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.538+0000 7f407b7fe640 1 -- 192.168.123.101:0/3164976693 <== mgr.14118 v2:192.168.123.101:6800/2530303036 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f4054000d90 con 0x7f407404f120 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.542+0000 7f4094437640 1 -- 192.168.123.101:0/3164976693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f407404f120 msgr2=0x7f40740515e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.542+0000 7f4094437640 1 --2- 192.168.123.101:0/3164976693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f407404f120 0x7f40740515e0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f408800ad80 tx=0x7f40880093f0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.542+0000 7f4094437640 1 -- 192.168.123.101:0/3164976693 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c1ab1a0 msgr2=0x7f408c1ab5c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.542+0000 7f4094437640 1 --2- 192.168.123.101:0/3164976693 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c1ab1a0 0x7f408c1ab5c0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f407c035770 tx=0x7f407c0357a0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.542+0000 7f4094437640 1 -- 192.168.123.101:0/3164976693 shutdown_connections 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.542+0000 7f4094437640 1 --2- 192.168.123.101:0/3164976693 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f407404f120 0x7f40740515e0 
unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.542+0000 7f4094437640 1 --2- 192.168.123.101:0/3164976693 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f408c1ab1a0 0x7f408c1ab5c0 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.542+0000 7f4094437640 1 -- 192.168.123.101:0/3164976693 >> 192.168.123.101:0/3164976693 conn(0x7f408c07bc10 msgr2=0x7f408c07c420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.542+0000 7f4094437640 1 -- 192.168.123.101:0/3164976693 shutdown_connections 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.542+0000 7f4094437640 1 -- 192.168.123.101:0/3164976693 wait complete. 2026-03-09T15:48:17.592 INFO:teuthology.orchestra.run.vm01.stdout:Deploying unmanaged mgr service... 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.718+0000 7f97e610b640 1 Processor -- start 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 -- start start 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e0108c30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f97e0109200 con 0x7f97e0108830 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97df7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e0108c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97df7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e0108c30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38250/0 (socket says 192.168.123.101:38250) 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97df7fe640 1 -- 192.168.123.101:0/1593026761 learned_addr learned my addr 192.168.123.101:0/1593026761 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97df7fe640 1 -- 192.168.123.101:0/1593026761 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f97e0109a30 con 0x7f97e0108830 2026-03-09T15:48:17.894 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97df7fe640 1 --2- 192.168.123.101:0/1593026761 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e0108c30 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f97cc009920 tx=0x7f97cc02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=cdcc1aaaeea89a04 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97de7fc640 1 -- 192.168.123.101:0/1593026761 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f97cc03c070 con 0x7f97e0108830 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97de7fc640 1 -- 192.168.123.101:0/1593026761 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f97cc037440 con 0x7f97e0108830 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97de7fc640 1 -- 192.168.123.101:0/1593026761 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f97cc035340 con 0x7f97e0108830 2026-03-09T15:48:17.894 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 -- 192.168.123.101:0/1593026761 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 msgr2=0x7f97e0108c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 --2- 192.168.123.101:0/1593026761 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e0108c30 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f97cc009920 tx=0x7f97cc02ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 -- 192.168.123.101:0/1593026761 shutdown_connections 2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 --2- 192.168.123.101:0/1593026761 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e0108c30 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 -- 192.168.123.101:0/1593026761 >> 192.168.123.101:0/1593026761 conn(0x7f97e007bd50 msgr2=0x7f97e007c1a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 -- 192.168.123.101:0/1593026761 shutdown_connections 2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 -- 192.168.123.101:0/1593026761 wait complete. 
2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 Processor -- start 2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.722+0000 7f97e610b640 1 -- start start 2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97e610b640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e019e610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97e610b640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f97e010a730 con 0x7f97e0108830 2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97df7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e019e610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:17.895 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97df7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e019e610 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38264/0 (socket says 192.168.123.101:38264) 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97df7fe640 1 -- 192.168.123.101:0/545361177 learned_addr learned my addr 192.168.123.101:0/545361177 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97df7fe640 1 -- 192.168.123.101:0/545361177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f97e019eb50 con 0x7f97e0108830 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97df7fe640 1 --2- 192.168.123.101:0/545361177 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e019e610 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f97cc009a50 tx=0x7f97cc037860 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97dcff9640 1 -- 192.168.123.101:0/545361177 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f97cc03c030 con 0x7f97e0108830 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97e610b640 1 -- 192.168.123.101:0/545361177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f97e019ede0 con 0x7f97e0108830 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97e610b640 1 -- 192.168.123.101:0/545361177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f97e01a1ad0 con 0x7f97e0108830 2026-03-09T15:48:17.896 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97dcff9640 1 -- 192.168.123.101:0/545361177 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f97cc03e070 con 0x7f97e0108830 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97dcff9640 1 -- 192.168.123.101:0/545361177 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f97cc042c80 con 0x7f97e0108830 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97dcff9640 1 -- 192.168.123.101:0/545361177 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7f97cc04c430 con 0x7f97e0108830 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97dcff9640 1 --2- 192.168.123.101:0/545361177 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f97a803dc90 0x7f97a8040150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97dcff9640 1 -- 192.168.123.101:0/545361177 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f97cc03d070 con 0x7f97e0108830 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.726+0000 7f97e610b640 1 -- 192.168.123.101:0/545361177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f97ac005180 con 0x7f97e0108830 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.730+0000 7f97dcff9640 1 -- 192.168.123.101:0/545361177 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f97cc042e20 con 0x7f97e0108830 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.730+0000 7f97deffd640 1 --2- 192.168.123.101:0/545361177 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f97a803dc90 0x7f97a8040150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.730+0000 7f97deffd640 1 --2- 192.168.123.101:0/545361177 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f97a803dc90 0x7f97a8040150 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f97d0009a10 tx=0x7f97d0006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.834+0000 7f97e610b640 1 -- 192.168.123.101:0/545361177 --> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7f97ac002bf0 con 0x7f97a803dc90 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.842+0000 7f97dcff9640 1 -- 192.168.123.101:0/545361177 <== 
mgr.14118 v2:192.168.123.101:6800/2530303036 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f97ac002bf0 con 0x7f97a803dc90 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.842+0000 7f97e610b640 1 -- 192.168.123.101:0/545361177 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f97a803dc90 msgr2=0x7f97a8040150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.842+0000 7f97e610b640 1 --2- 192.168.123.101:0/545361177 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f97a803dc90 0x7f97a8040150 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f97d0009a10 tx=0x7f97d0006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.842+0000 7f97e610b640 1 -- 192.168.123.101:0/545361177 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 msgr2=0x7f97e019e610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.842+0000 7f97e610b640 1 --2- 192.168.123.101:0/545361177 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e019e610 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f97cc009a50 tx=0x7f97cc037860 comp rx=0 tx=0).stop 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.846+0000 7f97e610b640 1 -- 192.168.123.101:0/545361177 shutdown_connections 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.846+0000 7f97e610b640 1 --2- 192.168.123.101:0/545361177 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f97a803dc90 0x7f97a8040150 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.846+0000 7f97e610b640 1 --2- 192.168.123.101:0/545361177 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f97e0108830 0x7f97e019e610 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.846+0000 7f97e610b640 1 -- 192.168.123.101:0/545361177 >> 192.168.123.101:0/545361177 conn(0x7f97e007bd50 msgr2=0x7f97e0105f40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.846+0000 7f97e610b640 1 -- 192.168.123.101:0/545361177 shutdown_connections 2026-03-09T15:48:17.896 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:17.846+0000 7f97e610b640 1 -- 192.168.123.101:0/545361177 wait complete. 
2026-03-09T15:48:18.191 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.026+0000 7f3f31082640 1 Processor -- start 2026-03-09T15:48:18.191 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.026+0000 7f3f31082640 1 -- start start 2026-03-09T15:48:18.191 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.026+0000 7f3f31082640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c108c60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:18.191 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.026+0000 7f3f31082640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f3f2c109230 con 0x7f3f2c108860 2026-03-09T15:48:18.191 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.026+0000 7f3f2ad76640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c108c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:18.191 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.026+0000 7f3f2ad76640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c108c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38268/0 (socket says 192.168.123.101:38268) 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.026+0000 7f3f2ad76640 1 -- 192.168.123.101:0/1329867441 learned_addr learned my addr 192.168.123.101:0/1329867441 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f2ad76640 1 -- 192.168.123.101:0/1329867441 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3f2c109a60 con 0x7f3f2c108860 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f2ad76640 1 --2- 192.168.123.101:0/1329867441 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c108c60 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f3f18009920 tx=0x7f3f1802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b69d87bde6023f6e server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f29d74640 1 -- 192.168.123.101:0/1329867441 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3f1803c070 con 0x7f3f2c108860 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f29d74640 1 -- 192.168.123.101:0/1329867441 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f3f18037440 con 0x7f3f2c108860 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f31082640 1 -- 192.168.123.101:0/1329867441 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 msgr2=0x7f3f2c108c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f31082640 1 --2- 192.168.123.101:0/1329867441 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c108c60 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f3f18009920 tx=0x7f3f1802ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f31082640 1 -- 192.168.123.101:0/1329867441 shutdown_connections 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f31082640 1 --2- 192.168.123.101:0/1329867441 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c108c60 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f31082640 1 -- 192.168.123.101:0/1329867441 >> 192.168.123.101:0/1329867441 conn(0x7f3f2c07bda0 msgr2=0x7f3f2c07c1b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f31082640 1 -- 192.168.123.101:0/1329867441 shutdown_connections 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f31082640 1 -- 192.168.123.101:0/1329867441 wait complete. 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f31082640 1 Processor -- start 2026-03-09T15:48:18.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f31082640 1 -- start start 2026-03-09T15:48:18.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f31082640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c19e700 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:18.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f31082640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f3f2c10a760 con 0x7f3f2c108860 2026-03-09T15:48:18.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f2ad76640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c19e700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:18.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f2ad76640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c19e700 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38278/0 (socket says 192.168.123.101:38278) 2026-03-09T15:48:18.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.030+0000 7f3f2ad76640 1 -- 192.168.123.101:0/3698565340 learned_addr learned my addr 192.168.123.101:0/3698565340 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:18.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: 
stderr 2026-03-09T15:48:18.030+0000 7f3f2ad76640 1 -- 192.168.123.101:0/3698565340 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3f2c19ec40 con 0x7f3f2c108860 2026-03-09T15:48:18.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.034+0000 7f3f2ad76640 1 --2- 192.168.123.101:0/3698565340 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c19e700 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f3f18037b80 tx=0x7f3f18037bb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:18.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.034+0000 7f3f07fff640 1 -- 192.168.123.101:0/3698565340 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3f1803c070 con 0x7f3f2c108860 2026-03-09T15:48:18.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.034+0000 7f3f07fff640 1 -- 192.168.123.101:0/3698565340 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f3f18045070 con 0x7f3f2c108860 2026-03-09T15:48:18.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.034+0000 7f3f31082640 1 -- 192.168.123.101:0/3698565340 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3f2c19eed0 con 0x7f3f2c108860 2026-03-09T15:48:18.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.034+0000 7f3f07fff640 1 -- 192.168.123.101:0/3698565340 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3f18040a10 con 0x7f3f2c108860 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.034+0000 7f3f07fff640 1 -- 192.168.123.101:0/3698565340 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7f3f18040cc0 con 0x7f3f2c108860 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.034+0000 7f3f31082640 1 -- 192.168.123.101:0/3698565340 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3f2c1a1bc0 con 0x7f3f2c108860 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.034+0000 7f3f31082640 1 -- 192.168.123.101:0/3698565340 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3f2c10cc90 con 0x7f3f2c108860 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.034+0000 7f3f07fff640 1 --2- 192.168.123.101:0/3698565340 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3f0003d880 0x7f3f0003fd40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.034+0000 7f3f07fff640 1 -- 192.168.123.101:0/3698565340 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f3f1807d240 con 0x7f3f2c108860 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.038+0000 7f3f2a575640 1 --2- 192.168.123.101:0/3698565340 >> 
[v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3f0003d880 0x7f3f0003fd40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.038+0000 7f3f2a575640 1 --2- 192.168.123.101:0/3698565340 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3f0003d880 0x7f3f0003fd40 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f3f200099c0 tx=0x7f3f20006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.038+0000 7f3f07fff640 1 -- 192.168.123.101:0/3698565340 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3f18047bd0 con 0x7f3f2c108860 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.134+0000 7f3f31082640 1 -- 192.168.123.101:0/3698565340 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) -- 0x7f3f2c108c60 con 0x7f3f2c108860 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.138+0000 7f3f07fff640 1 -- 192.168.123.101:0/3698565340 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/container_init}]=0 v6)=0 v6) ==== 142+0+0 (secure 0 0 0) 0x7f3f18035c20 con 0x7f3f2c108860 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.146+0000 7f3f31082640 1 -- 192.168.123.101:0/3698565340 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3f0003d880 msgr2=0x7f3f0003fd40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.146+0000 7f3f31082640 1 --2- 192.168.123.101:0/3698565340 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3f0003d880 0x7f3f0003fd40 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f3f200099c0 tx=0x7f3f20006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.146+0000 7f3f31082640 1 -- 192.168.123.101:0/3698565340 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 msgr2=0x7f3f2c19e700 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.146+0000 7f3f31082640 1 --2- 192.168.123.101:0/3698565340 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c19e700 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f3f18037b80 tx=0x7f3f18037bb0 comp rx=0 tx=0).stop 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.146+0000 7f3f31082640 1 -- 192.168.123.101:0/3698565340 shutdown_connections 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.146+0000 7f3f31082640 1 --2- 192.168.123.101:0/3698565340 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] 
conn(0x7f3f0003d880 0x7f3f0003fd40 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.146+0000 7f3f31082640 1 --2- 192.168.123.101:0/3698565340 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3f2c108860 0x7f3f2c19e700 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.146+0000 7f3f31082640 1 -- 192.168.123.101:0/3698565340 >> 192.168.123.101:0/3698565340 conn(0x7f3f2c07bda0 msgr2=0x7f3f2c106060 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.146+0000 7f3f31082640 1 -- 192.168.123.101:0/3698565340 shutdown_connections 2026-03-09T15:48:18.194 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.146+0000 7f3f31082640 1 -- 192.168.123.101:0/3698565340 wait complete. 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: audit 2026-03-09T15:48:17.196453+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: audit 2026-03-09T15:48:17.196453+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: cephadm 2026-03-09T15:48:17.196972+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm01 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: cephadm 2026-03-09T15:48:17.196972+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm01 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: audit 2026-03-09T15:48:17.197229+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: audit 2026-03-09T15:48:17.197229+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: audit 2026-03-09T15:48:17.542266+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: audit 2026-03-09T15:48:17.542266+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: audit 2026-03-09T15:48:17.845304+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: audit 2026-03-09T15:48:17.845304+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: audit 2026-03-09T15:48:18.143742+0000 mon.a 
(mon.0) 64 : audit [INF] from='client.? 192.168.123.101:0/3698565340' entity='client.admin' 2026-03-09T15:48:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:18 vm01 bash[20728]: audit 2026-03-09T15:48:18.143742+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.101:0/3698565340' entity='client.admin' 2026-03-09T15:48:18.484 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8afb4a4640 1 Processor -- start 2026-03-09T15:48:18.484 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8afb4a4640 1 -- start start 2026-03-09T15:48:18.484 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8afb4a4640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af4108c30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:18.484 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8afb4a4640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f8af4109200 con 0x7f8af4108830 2026-03-09T15:48:18.484 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8af9219640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af4108c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:18.484 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8af9219640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af4108c30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38294/0 (socket says 192.168.123.101:38294) 2026-03-09T15:48:18.484 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8af9219640 1 -- 192.168.123.101:0/2374991918 learned_addr learned my addr 192.168.123.101:0/2374991918 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:18.484 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8af9219640 1 -- 192.168.123.101:0/2374991918 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8af4109a30 con 0x7f8af4108830 2026-03-09T15:48:18.484 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8af9219640 1 --2- 192.168.123.101:0/2374991918 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af4108c30 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f8ae8009920 tx=0x7f8ae802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=5bca270efa6d5c16 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8ae3fff640 1 -- 192.168.123.101:0/2374991918 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8ae803c070 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8ae3fff640 1 -- 192.168.123.101:0/2374991918 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f8ae8037440 con 
0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8afb4a4640 1 -- 192.168.123.101:0/2374991918 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 msgr2=0x7f8af4108c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8afb4a4640 1 --2- 192.168.123.101:0/2374991918 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af4108c30 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f8ae8009920 tx=0x7f8ae802ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8afb4a4640 1 -- 192.168.123.101:0/2374991918 shutdown_connections 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8afb4a4640 1 --2- 192.168.123.101:0/2374991918 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af4108c30 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8afb4a4640 1 -- 192.168.123.101:0/2374991918 >> 192.168.123.101:0/2374991918 conn(0x7f8af407bd50 msgr2=0x7f8af407c1a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8afb4a4640 1 -- 192.168.123.101:0/2374991918 shutdown_connections 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.318+0000 7f8afb4a4640 1 -- 192.168.123.101:0/2374991918 wait complete. 
2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8afb4a4640 1 Processor -- start 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8afb4a4640 1 -- start start 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8afb4a4640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af419e510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8afb4a4640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f8af410a730 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8af9219640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af419e510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8af9219640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af419e510 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38304/0 (socket says 192.168.123.101:38304) 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8af9219640 1 -- 192.168.123.101:0/4031595610 learned_addr learned my addr 192.168.123.101:0/4031595610 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8af9219640 1 -- 192.168.123.101:0/4031595610 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8af419ea50 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8af9219640 1 --2- 192.168.123.101:0/4031595610 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af419e510 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f8ae8006fd0 tx=0x7f8ae8035d50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8ae27fc640 1 -- 192.168.123.101:0/4031595610 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8ae8045070 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8ae27fc640 1 -- 192.168.123.101:0/4031595610 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f8ae8040430 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8ae27fc640 1 -- 192.168.123.101:0/4031595610 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8ae803c050 con 0x7f8af4108830 2026-03-09T15:48:18.485 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8afb4a4640 1 -- 192.168.123.101:0/4031595610 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f8af419ece0 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8afb4a4640 1 -- 192.168.123.101:0/4031595610 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f8af41a19d0 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.322+0000 7f8afb4a4640 1 -- 192.168.123.101:0/4031595610 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8abc005180 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.326+0000 7f8ae27fc640 1 -- 192.168.123.101:0/4031595610 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7f8ae802fbc0 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.326+0000 7f8ae27fc640 1 --2- 192.168.123.101:0/4031595610 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f8acc03dc90 0x7f8acc040150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.326+0000 7f8ae27fc640 1 -- 192.168.123.101:0/4031595610 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f8ae8076a50 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.326+0000 7f8af8a18640 1 --2- 192.168.123.101:0/4031595610 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f8acc03dc90 0x7f8acc040150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.326+0000 7f8ae27fc640 1 -- 192.168.123.101:0/4031595610 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f8ae8076e20 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.326+0000 7f8af8a18640 1 --2- 192.168.123.101:0/4031595610 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f8acc03dc90 0x7f8acc040150 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f8ae40099c0 tx=0x7f8ae4006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.426+0000 7f8afb4a4640 1 -- 192.168.123.101:0/4031595610 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0) -- 0x7f8abc005470 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.434+0000 7f8ae27fc640 1 -- 192.168.123.101:0/4031595610 <== mon.0 v2:192.168.123.101:3300/0 7 ==== 
mon_command_ack([{prefix=config set, name=mgr/dashboard/ssl_server_port}]=0 v7) ==== 130+0+0 (secure 0 0 0) 0x7f8ae8033c90 con 0x7f8af4108830 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.438+0000 7f8afb4a4640 1 -- 192.168.123.101:0/4031595610 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f8acc03dc90 msgr2=0x7f8acc040150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.438+0000 7f8afb4a4640 1 --2- 192.168.123.101:0/4031595610 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f8acc03dc90 0x7f8acc040150 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f8ae40099c0 tx=0x7f8ae4006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.438+0000 7f8afb4a4640 1 -- 192.168.123.101:0/4031595610 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 msgr2=0x7f8af419e510 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.438+0000 7f8afb4a4640 1 --2- 192.168.123.101:0/4031595610 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af419e510 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f8ae8006fd0 tx=0x7f8ae8035d50 comp rx=0 tx=0).stop 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.438+0000 7f8afb4a4640 1 -- 192.168.123.101:0/4031595610 shutdown_connections 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.438+0000 7f8afb4a4640 1 --2- 192.168.123.101:0/4031595610 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f8acc03dc90 0x7f8acc040150 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.438+0000 7f8afb4a4640 1 --2- 192.168.123.101:0/4031595610 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8af4108830 0x7f8af419e510 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.438+0000 7f8afb4a4640 1 -- 192.168.123.101:0/4031595610 >> 192.168.123.101:0/4031595610 conn(0x7f8af407bd50 msgr2=0x7f8af4105ec0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.438+0000 7f8afb4a4640 1 -- 192.168.123.101:0/4031595610 shutdown_connections 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.438+0000 7f8afb4a4640 1 -- 192.168.123.101:0/4031595610 wait complete. 2026-03-09T15:48:18.485 INFO:teuthology.orchestra.run.vm01.stdout:Enabling the dashboard module... 
2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:17.536807+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:17.536807+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: cephadm 2026-03-09T15:48:17.537890+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: cephadm 2026-03-09T15:48:17.537890+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:17.841521+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:17.841521+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: cephadm 2026-03-09T15:48:17.842475+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: cephadm 2026-03-09T15:48:17.842475+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:18.433632+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.101:0/4031595610' entity='client.admin' 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:18.433632+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.101:0/4031595610' entity='client.admin' 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:18.766175+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.101:0/179694937' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:18.766175+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 
192.168.123.101:0/179694937' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:18.824749+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:18.824749+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:19.141222+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:19.453 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:19 vm01 bash[20728]: audit 2026-03-09T15:48:19.141222+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:19.553 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0263577640 1 Processor -- start 2026-03-09T15:48:19.553 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0263577640 1 -- start start 2026-03-09T15:48:19.553 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0263577640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264105690 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:19.553 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0263577640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f0264105c60 con 0x7f0264105290 2026-03-09T15:48:19.553 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0262575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264105690 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:19.553 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0262575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264105690 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38310/0 (socket says 192.168.123.101:38310) 2026-03-09T15:48:19.553 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0262575640 1 -- 192.168.123.101:0/21203749 learned_addr learned my addr 192.168.123.101:0/21203749 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:19.553 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0262575640 1 -- 192.168.123.101:0/21203749 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0264106490 con 0x7f0264105290 2026-03-09T15:48:19.553 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0262575640 1 --2- 192.168.123.101:0/21203749 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264105690 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto 
rx=0x7f024c01adc0 tx=0x7f024c0402a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c9274b52c0ea5c2b server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:19.553 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0261573640 1 -- 192.168.123.101:0/21203749 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f024c040aa0 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0261573640 1 -- 192.168.123.101:0/21203749 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f024c040c40 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.614+0000 7f0261573640 1 -- 192.168.123.101:0/21203749 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f024c0473a0 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 -- 192.168.123.101:0/21203749 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 msgr2=0x7f0264105690 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 --2- 192.168.123.101:0/21203749 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264105690 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f024c01adc0 tx=0x7f024c0402a0 comp rx=0 tx=0).stop 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 -- 192.168.123.101:0/21203749 shutdown_connections 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 --2- 192.168.123.101:0/21203749 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264105690 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 -- 192.168.123.101:0/21203749 >> 192.168.123.101:0/21203749 conn(0x7f0264100a40 msgr2=0x7f0264102e60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 -- 192.168.123.101:0/21203749 shutdown_connections 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 -- 192.168.123.101:0/21203749 wait complete. 
2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 Processor -- start 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 -- start start 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264191430 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f02641071f0 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0262575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264191430 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0262575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264191430 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38324/0 (socket says 192.168.123.101:38324) 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0262575640 1 -- 192.168.123.101:0/179694937 learned_addr learned my addr 192.168.123.101:0/179694937 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0262575640 1 -- 192.168.123.101:0/179694937 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0264191970 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0262575640 1 --2- 192.168.123.101:0/179694937 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264191430 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f024c0475d0 tx=0x7f024c018b10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f02537fe640 1 -- 192.168.123.101:0/179694937 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f024c01a2e0 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f02537fe640 1 -- 192.168.123.101:0/179694937 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f024c057070 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f02537fe640 1 -- 192.168.123.101:0/179694937 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f024c052d10 con 0x7f0264105290 2026-03-09T15:48:19.554 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 -- 192.168.123.101:0/179694937 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0264191c00 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 -- 192.168.123.101:0/179694937 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f02641948f0 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f02537fe640 1 -- 192.168.123.101:0/179694937 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7f024c051070 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f0263577640 1 -- 192.168.123.101:0/179694937 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0264105690 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.618+0000 7f02537fe640 1 --2- 192.168.123.101:0/179694937 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f023c03dc90 0x7f023c040150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.622+0000 7f0261d74640 1 --2- 192.168.123.101:0/179694937 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f023c03dc90 0x7f023c040150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.622+0000 7f02537fe640 1 -- 192.168.123.101:0/179694937 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f024c089040 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.622+0000 7f02537fe640 1 -- 192.168.123.101:0/179694937 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f024c052480 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.622+0000 7f0261d74640 1 --2- 192.168.123.101:0/179694937 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f023c03dc90 0x7f023c040150 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f0258009a10 tx=0x7f0258006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.762+0000 7f0263577640 1 -- 192.168.123.101:0/179694937 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0) -- 0x7f0264071300 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:18.822+0000 7f02537fe640 1 -- 192.168.123.101:0/179694937 <== mon.0 v2:192.168.123.101:3300/0 7 ==== config(23 keys) 
==== 978+0+0 (secure 0 0 0) 0x7f024c052660 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.430+0000 7f02537fe640 1 -- 192.168.123.101:0/179694937 <== mon.0 v2:192.168.123.101:3300/0 8 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "dashboard"}]=0 v8) ==== 88+0+0 (secure 0 0 0) 0x7f024c0483f0 con 0x7f0264105290 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.434+0000 7f02517fa640 1 -- 192.168.123.101:0/179694937 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f023c03dc90 msgr2=0x7f023c040150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.434+0000 7f02517fa640 1 --2- 192.168.123.101:0/179694937 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f023c03dc90 0x7f023c040150 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f0258009a10 tx=0x7f0258006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.434+0000 7f02517fa640 1 -- 192.168.123.101:0/179694937 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 msgr2=0x7f0264191430 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.434+0000 7f02517fa640 1 --2- 192.168.123.101:0/179694937 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264191430 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f024c0475d0 tx=0x7f024c018b10 comp rx=0 tx=0).stop 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.434+0000 7f02517fa640 1 -- 192.168.123.101:0/179694937 shutdown_connections 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.434+0000 7f02517fa640 1 --2- 192.168.123.101:0/179694937 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f023c03dc90 0x7f023c040150 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.434+0000 7f02517fa640 1 --2- 192.168.123.101:0/179694937 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0264105290 0x7f0264191430 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:19.554 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.434+0000 7f02517fa640 1 -- 192.168.123.101:0/179694937 >> 192.168.123.101:0/179694937 conn(0x7f0264100a40 msgr2=0x7f026406ea00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:19.555 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.434+0000 7f02517fa640 1 -- 192.168.123.101:0/179694937 shutdown_connections 2026-03-09T15:48:19.555 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.434+0000 7f02517fa640 1 -- 192.168.123.101:0/179694937 wait complete. 
2026-03-09T15:48:19.793 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:19 vm01 bash[21002]: ignoring --setuser ceph since I am not root 2026-03-09T15:48:19.793 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:19 vm01 bash[21002]: ignoring --setgroup ceph since I am not root 2026-03-09T15:48:19.793 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:19 vm01 bash[21002]: debug 2026-03-09T15:48:19.590+0000 7f2b6907b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T15:48:19.793 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:19 vm01 bash[21002]: debug 2026-03-09T15:48:19.650+0000 7f2b6907b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T15:48:19.906 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T15:48:19.906 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 8, 2026-03-09T15:48:19.906 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T15:48:19.906 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 Processor -- start 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 -- start start 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc0a4d20 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f07cc0a52f0 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d98cb640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc0a4d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d98cb640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc0a4d20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38336/0 (socket says 192.168.123.101:38336) 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d98cb640 1 -- 192.168.123.101:0/1965021295 learned_addr learned my addr 192.168.123.101:0/1965021295 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d98cb640 1 -- 192.168.123.101:0/1965021295 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f07cc0a5b20 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d98cb640 1 --2- 192.168.123.101:0/1965021295 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc0a4d20 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f07d00089a0 tx=0x7f07d0031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=e5dc3a21a36bb019 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d88c9640 1 -- 192.168.123.101:0/1965021295 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f07d003c480 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d88c9640 1 -- 192.168.123.101:0/1965021295 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f07d003ca60 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 -- 192.168.123.101:0/1965021295 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 msgr2=0x7f07cc0a4d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 --2- 192.168.123.101:0/1965021295 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc0a4d20 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f07d00089a0 tx=0x7f07d0031440 comp rx=0 tx=0).stop 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 -- 192.168.123.101:0/1965021295 shutdown_connections 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 --2- 192.168.123.101:0/1965021295 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc0a4d20 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 -- 192.168.123.101:0/1965021295 >> 192.168.123.101:0/1965021295 conn(0x7f07cc09fc30 msgr2=0x7f07cc0a2090 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 -- 192.168.123.101:0/1965021295 shutdown_connections 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 -- 192.168.123.101:0/1965021295 wait complete. 
2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 Processor -- start 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 -- start start 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc13d6e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07da8cd640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f07cc0a65a0 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d98cb640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc13d6e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d98cb640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc13d6e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38342/0 (socket says 192.168.123.101:38342) 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d98cb640 1 -- 192.168.123.101:0/1250862618 learned_addr learned my addr 192.168.123.101:0/1250862618 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d98cb640 1 -- 192.168.123.101:0/1250862618 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f07cc13dc20 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.702+0000 7f07d98cb640 1 --2- 192.168.123.101:0/1250862618 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc13d6e0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f07d0002410 tx=0x7f07d0009a50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.706+0000 7f07c2ffd640 1 -- 192.168.123.101:0/1250862618 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f07d003c480 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.706+0000 7f07da8cd640 1 -- 192.168.123.101:0/1250862618 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f07cc13deb0 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.706+0000 7f07da8cd640 1 -- 192.168.123.101:0/1250862618 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f07cc13a220 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.706+0000 7f07c2ffd640 1 -- 192.168.123.101:0/1250862618 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f07d003ca60 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.706+0000 7f07c2ffd640 1 -- 192.168.123.101:0/1250862618 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f07d000b3c0 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.706+0000 7f07c2ffd640 1 -- 192.168.123.101:0/1250862618 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 8) ==== 50260+0+0 (secure 0 0 0) 0x7f07d000bce0 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.706+0000 7f07c2ffd640 1 --2- 192.168.123.101:0/1250862618 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f07a403dc90 0x7f07a4040150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.706+0000 7f07d90ca640 1 -- 192.168.123.101:0/1250862618 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f07a403dc90 msgr2=0x7f07a4040150 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.101:6800/2530303036 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.706+0000 7f07d90ca640 1 --2- 192.168.123.101:0/1250862618 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f07a403dc90 0x7f07a4040150 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.706+0000 7f07c2ffd640 1 -- 192.168.123.101:0/1250862618 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f07d0077210 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.706+0000 7f07da8cd640 1 -- 192.168.123.101:0/1250862618 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f079c005180 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.710+0000 7f07c2ffd640 1 -- 192.168.123.101:0/1250862618 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f07d004ec70 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.830+0000 7f07da8cd640 1 -- 192.168.123.101:0/1250862618 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7f079c005c80 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.830+0000 7f07c2ffd640 1 -- 192.168.123.101:0/1250862618 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v8) ==== 56+0+88 (secure 0 0 0) 0x7f07d00411a0 con 0x7f07cc0a4920 2026-03-09T15:48:19.907 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.834+0000 7f07c0ff9640 1 -- 192.168.123.101:0/1250862618 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f07a403dc90 msgr2=0x7f07a4040150 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.834+0000 7f07c0ff9640 1 --2- 192.168.123.101:0/1250862618 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f07a403dc90 0x7f07a4040150 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.834+0000 7f07c0ff9640 1 -- 192.168.123.101:0/1250862618 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 msgr2=0x7f07cc13d6e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.834+0000 7f07c0ff9640 1 --2- 192.168.123.101:0/1250862618 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc13d6e0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f07d0002410 tx=0x7f07d0009a50 comp rx=0 tx=0).stop 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.838+0000 7f07c0ff9640 1 -- 192.168.123.101:0/1250862618 shutdown_connections 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.838+0000 7f07c0ff9640 1 --2- 192.168.123.101:0/1250862618 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f07a403dc90 0x7f07a4040150 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.838+0000 7f07c0ff9640 1 --2- 192.168.123.101:0/1250862618 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f07cc0a4920 0x7f07cc13d6e0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.838+0000 7f07c0ff9640 1 -- 192.168.123.101:0/1250862618 >> 192.168.123.101:0/1250862618 conn(0x7f07cc09fc30 msgr2=0x7f07cc0a06f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.838+0000 7f07c0ff9640 1 -- 192.168.123.101:0/1250862618 shutdown_connections 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:19.838+0000 7f07c0ff9640 1 -- 192.168.123.101:0/1250862618 wait complete. 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart... 2026-03-09T15:48:19.907 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 8... 
2026-03-09T15:48:20.154 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:19 vm01 bash[21002]: debug 2026-03-09T15:48:19.786+0000 7f2b6907b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T15:48:20.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:20 vm01 bash[21002]: debug 2026-03-09T15:48:20.146+0000 7f2b6907b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T15:48:20.741 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:20 vm01 bash[21002]: debug 2026-03-09T15:48:20.638+0000 7f2b6907b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T15:48:20.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:20 vm01 bash[20728]: audit 2026-03-09T15:48:19.435111+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.101:0/179694937' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T15:48:20.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:20 vm01 bash[20728]: audit 2026-03-09T15:48:19.435111+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.101:0/179694937' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T15:48:20.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:20 vm01 bash[20728]: cluster 2026-03-09T15:48:19.442122+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: y(active, since 6s) 2026-03-09T15:48:20.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:20 vm01 bash[20728]: cluster 2026-03-09T15:48:19.442122+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: y(active, since 6s) 2026-03-09T15:48:20.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:20 vm01 bash[20728]: audit 2026-03-09T15:48:19.836256+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 192.168.123.101:0/1250862618' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T15:48:20.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:20 vm01 bash[20728]: audit 2026-03-09T15:48:19.836256+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 192.168.123.101:0/1250862618' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T15:48:21.025 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:20 vm01 bash[21002]: debug 2026-03-09T15:48:20.734+0000 7f2b6907b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T15:48:21.026 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:20 vm01 bash[21002]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T15:48:21.026 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:20 vm01 bash[21002]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T15:48:21.026 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:20 vm01 bash[21002]: from numpy import show_config as show_numpy_config 2026-03-09T15:48:21.026 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:20 vm01 bash[21002]: debug 2026-03-09T15:48:20.870+0000 7f2b6907b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T15:48:21.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:21 vm01 bash[21002]: debug 2026-03-09T15:48:21.018+0000 7f2b6907b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T15:48:21.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:21 vm01 bash[21002]: debug 2026-03-09T15:48:21.058+0000 7f2b6907b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T15:48:21.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:21 vm01 bash[21002]: debug 2026-03-09T15:48:21.102+0000 7f2b6907b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T15:48:21.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:21 vm01 bash[21002]: debug 2026-03-09T15:48:21.146+0000 7f2b6907b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T15:48:21.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:21 vm01 bash[21002]: debug 2026-03-09T15:48:21.202+0000 7f2b6907b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T15:48:22.007 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:21 vm01 bash[21002]: debug 2026-03-09T15:48:21.714+0000 7f2b6907b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T15:48:22.007 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:21 vm01 bash[21002]: debug 2026-03-09T15:48:21.758+0000 7f2b6907b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T15:48:22.007 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:21 vm01 bash[21002]: debug 2026-03-09T15:48:21.798+0000 7f2b6907b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T15:48:22.007 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:21 vm01 bash[21002]: debug 2026-03-09T15:48:21.954+0000 7f2b6907b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T15:48:22.007 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:22 vm01 bash[21002]: debug 2026-03-09T15:48:21.998+0000 7f2b6907b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T15:48:22.346 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:22 vm01 bash[21002]: debug 2026-03-09T15:48:22.042+0000 7f2b6907b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T15:48:22.346 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:22 vm01 bash[21002]: debug 2026-03-09T15:48:22.166+0000 7f2b6907b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:48:22.630 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:22 vm01 bash[21002]: debug 2026-03-09T15:48:22.338+0000 7f2b6907b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T15:48:22.630 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:22 vm01 bash[21002]: debug 2026-03-09T15:48:22.534+0000 7f2b6907b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T15:48:22.630 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:22 vm01 bash[21002]: debug 2026-03-09T15:48:22.574+0000 7f2b6907b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T15:48:22.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:22 vm01 bash[21002]: debug 
2026-03-09T15:48:22.622+0000 7f2b6907b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T15:48:22.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:22 vm01 bash[21002]: debug 2026-03-09T15:48:22.782+0000 7f2b6907b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:23 vm01 bash[21002]: debug 2026-03-09T15:48:23.026+0000 7f2b6907b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: cluster 2026-03-09T15:48:23.033962+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: cluster 2026-03-09T15:48:23.033962+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: cluster 2026-03-09T15:48:23.034379+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: cluster 2026-03-09T15:48:23.034379+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: cluster 2026-03-09T15:48:23.039775+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: cluster 2026-03-09T15:48:23.039775+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: cluster 2026-03-09T15:48:23.039905+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: y(active, starting, since 0.00562542s) 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: cluster 2026-03-09T15:48:23.039905+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: y(active, starting, since 0.00562542s) 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.041929+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.041929+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.042547+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.042547+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.043913+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": 
"mds metadata"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.043913+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.044492+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.044492+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.044988+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.044988+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: cluster 2026-03-09T15:48:23.052265+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: cluster 2026-03-09T15:48:23.052265+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.076637+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.076637+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.077406+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:23 vm01 bash[20728]: audit 2026-03-09T15:48:23.077406+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10, 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.058+0000 7f3c9498f640 1 Processor -- 
start 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.058+0000 7f3c9498f640 1 -- start start 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.058+0000 7f3c9498f640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c900746b0 0x7f3c90074ab0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.058+0000 7f3c9498f640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f3c90075080 con 0x7f3c900746b0 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.058+0000 7f3c8ed76640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c900746b0 0x7f3c90074ab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.058+0000 7f3c8ed76640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c900746b0 0x7f3c90074ab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38344/0 (socket says 192.168.123.101:38344) 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.058+0000 7f3c8ed76640 1 -- 192.168.123.101:0/3636612313 learned_addr learned my addr 192.168.123.101:0/3636612313 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c8ed76640 1 -- 192.168.123.101:0/3636612313 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3c9010e1c0 con 0x7f3c900746b0 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c8ed76640 1 --2- 192.168.123.101:0/3636612313 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c900746b0 0x7f3c90074ab0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f3c8000a9c0 tx=0x7f3c80033650 comp rx=0 tx=0).ready entity=mon.0 client_cookie=359654c26ad4c1fd server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c8dd74640 1 -- 192.168.123.101:0/3636612313 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3c80037580 con 0x7f3c900746b0 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c8dd74640 1 -- 192.168.123.101:0/3636612313 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3c80037b60 con 0x7f3c900746b0 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c9498f640 1 -- 192.168.123.101:0/3636612313 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c900746b0 msgr2=0x7f3c90074ab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c9498f640 1 --2- 
192.168.123.101:0/3636612313 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c900746b0 0x7f3c90074ab0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f3c8000a9c0 tx=0x7f3c80033650 comp rx=0 tx=0).stop 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c9498f640 1 -- 192.168.123.101:0/3636612313 shutdown_connections 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c9498f640 1 --2- 192.168.123.101:0/3636612313 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c900746b0 0x7f3c90074ab0 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c9498f640 1 -- 192.168.123.101:0/3636612313 >> 192.168.123.101:0/3636612313 conn(0x7f3c9006fa30 msgr2=0x7f3c90071e70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c9498f640 1 -- 192.168.123.101:0/3636612313 shutdown_connections 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c9498f640 1 -- 192.168.123.101:0/3636612313 wait complete. 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c9498f640 1 Processor -- start 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c9498f640 1 -- start start 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c9498f640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c901b3b80 0x7f3c901b3fa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c9498f640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f3c9010eb70 con 0x7f3c901b3b80 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c8ed76640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c901b3b80 0x7f3c901b3fa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c8ed76640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c901b3b80 0x7f3c901b3fa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38354/0 (socket says 192.168.123.101:38354) 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c8ed76640 1 -- 192.168.123.101:0/945684667 learned_addr learned my addr 192.168.123.101:0/945684667 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:24.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c8ed76640 1 -- 192.168.123.101:0/945684667 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) -- 0x7f3c901b44e0 con 0x7f3c901b3b80 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.062+0000 7f3c8ed76640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c901b3b80 0x7f3c901b3fa0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f3c8003d000 tx=0x7f3c8003d740 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.066+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3c8004a070 con 0x7f3c901b3b80 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.066+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3c8003de40 con 0x7f3c901b3b80 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.066+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3c80045c60 con 0x7f3c901b3b80 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.066+0000 7f3c9498f640 1 -- 192.168.123.101:0/945684667 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3c901b4770 con 0x7f3c901b3b80 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.066+0000 7f3c9498f640 1 -- 192.168.123.101:0/945684667 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3c901b7250 con 0x7f3c901b3b80 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.066+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 8) ==== 50260+0+0 (secure 0 0 0) 0x7f3c8003b070 con 0x7f3c901b3b80 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.066+0000 7f3c6ffff640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3c7c03d950 0x7f3c7c03fe10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.066+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 --> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f3c7c040520 con 0x7f3c7c03d950 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.066+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f3c800789e0 con 0x7f3c901b3b80 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.066+0000 7f3c8e575640 1 -- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3c7c03d950 msgr2=0x7f3c7c03fe10 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to 
v2:192.168.123.101:6800/2530303036 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.066+0000 7f3c8e575640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3c7c03d950 0x7f3c7c03fe10 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.266+0000 7f3c8e575640 1 -- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3c7c03d950 msgr2=0x7f3c7c03fe10 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.101:6800/2530303036 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.266+0000 7f3c8e575640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3c7c03d950 0x7f3c7c03fe10 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.666+0000 7f3c8e575640 1 -- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3c7c03d950 msgr2=0x7f3c7c03fe10 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.101:6800/2530303036 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:20.666+0000 7f3c8e575640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3c7c03d950 0x7f3c7c03fe10 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:21.470+0000 7f3c8e575640 1 -- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3c7c03d950 msgr2=0x7f3c7c03fe10 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.101:6800/2530303036 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:21.470+0000 7f3c8e575640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3c7c03d950 0x7f3c7c03fe10 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:23.034+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mgrmap(e 9) ==== 50027+0+0 (secure 0 0 0) 0x7f3c8000bce0 con 0x7f3c901b3b80 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:23.034+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3c7c03d950 msgr2=0x7f3c7c03fe10 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:23.034+0000 7f3c6ffff640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] 
conn(0x7f3c7c03d950 0x7f3c7c03fe10 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.046+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mgrmap(e 10) ==== 50154+0+0 (secure 0 0 0) 0x7f3c80078be0 con 0x7f3c901b3b80 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.046+0000 7f3c6ffff640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3c7c041480 0x7f3c7c043870 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.046+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f3c8003aec0 con 0x7f3c7c041480 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.046+0000 7f3c8e575640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3c7c041480 0x7f3c7c043870 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.046+0000 7f3c8e575640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3c7c041480 0x7f3c7c043870 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f3c88003e00 tx=0x7f3c88007330 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.046+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (secure 0 0 0) 0x7f3c8003aec0 con 0x7f3c7c041480 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c9498f640 1 -- 192.168.123.101:0/945684667 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7f3c5c002670 con 0x7f3c7c041480 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6ffff640 1 -- 192.168.123.101:0/945684667 <== mgr.14150 v2:192.168.123.101:6800/1421049061 2 ==== command_reply(tid 1: 0 ) ==== 8+0+52 (secure 0 0 0) 0x7f3c5c002670 con 0x7f3c7c041480 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6dffb640 1 -- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3c7c041480 msgr2=0x7f3c7c043870 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6dffb640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3c7c041480 0x7f3c7c043870 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto 
rx=0x7f3c88003e00 tx=0x7f3c88007330 comp rx=0 tx=0).stop 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6dffb640 1 -- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c901b3b80 msgr2=0x7f3c901b3fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6dffb640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c901b3b80 0x7f3c901b3fa0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f3c8003d000 tx=0x7f3c8003d740 comp rx=0 tx=0).stop 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6dffb640 1 -- 192.168.123.101:0/945684667 shutdown_connections 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6dffb640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3c7c041480 0x7f3c7c043870 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6dffb640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:6800/2530303036,v1:192.168.123.101:6801/2530303036] conn(0x7f3c7c03d950 0x7f3c7c03fe10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6dffb640 1 --2- 192.168.123.101:0/945684667 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3c901b3b80 0x7f3c901b3fa0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.156 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6dffb640 1 -- 192.168.123.101:0/945684667 >> 192.168.123.101:0/945684667 conn(0x7f3c9006fa30 msgr2=0x7f3c90070390 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:24.157 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6dffb640 1 -- 192.168.123.101:0/945684667 shutdown_connections 2026-03-09T15:48:24.157 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.050+0000 7f3c6dffb640 1 -- 192.168.123.101:0/945684667 wait complete. 2026-03-09T15:48:24.157 INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 8 is available 2026-03-09T15:48:24.157 INFO:teuthology.orchestra.run.vm01.stdout:Generating a dashboard self-signed certificate... 
2026-03-09T15:48:24.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:24 vm01 bash[20728]: audit 2026-03-09T15:48:23.087450+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:24.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:24 vm01 bash[20728]: audit 2026-03-09T15:48:23.087450+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:24.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:24 vm01 bash[20728]: cluster 2026-03-09T15:48:24.052982+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: y(active, since 1.01867s) 2026-03-09T15:48:24.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:24 vm01 bash[20728]: cluster 2026-03-09T15:48:24.052982+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: y(active, since 1.01867s) 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.274+0000 7f99d4615640 1 Processor -- start 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.274+0000 7f99d4615640 1 -- start start 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.274+0000 7f99d4615640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc108c60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f99cc109230 con 0x7f99cc108860 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d238a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc108c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d238a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc108c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38418/0 (socket says 192.168.123.101:38418) 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d238a640 1 -- 192.168.123.101:0/2755633041 learned_addr learned my addr 192.168.123.101:0/2755633041 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d238a640 1 -- 192.168.123.101:0/2755633041 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f99cc109a60 con 0x7f99cc108860 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d238a640 1 --2- 
192.168.123.101:0/2755633041 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc108c60 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f99bc009920 tx=0x7f99bc02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b02102b04dedda60 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d1388640 1 -- 192.168.123.101:0/2755633041 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f99bc03c070 con 0x7f99cc108860 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d1388640 1 -- 192.168.123.101:0/2755633041 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f99bc037440 con 0x7f99cc108860 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d1388640 1 -- 192.168.123.101:0/2755633041 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f99bc035340 con 0x7f99cc108860 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 -- 192.168.123.101:0/2755633041 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 msgr2=0x7f99cc108c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 --2- 192.168.123.101:0/2755633041 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc108c60 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f99bc009920 tx=0x7f99bc02ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 -- 192.168.123.101:0/2755633041 shutdown_connections 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 --2- 192.168.123.101:0/2755633041 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc108c60 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 -- 192.168.123.101:0/2755633041 >> 192.168.123.101:0/2755633041 conn(0x7f99cc07bda0 msgr2=0x7f99cc07c1b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 -- 192.168.123.101:0/2755633041 shutdown_connections 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 -- 192.168.123.101:0/2755633041 wait complete. 
2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 Processor -- start 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 -- start start 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc19e610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d4615640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f99cc10cc90 con 0x7f99cc108860 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d238a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc19e610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d238a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc19e610 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38426/0 (socket says 192.168.123.101:38426) 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.278+0000 7f99d238a640 1 -- 192.168.123.101:0/467313273 learned_addr learned my addr 192.168.123.101:0/467313273 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.282+0000 7f99d238a640 1 -- 192.168.123.101:0/467313273 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f99cc19eb50 con 0x7f99cc108860 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.282+0000 7f99d238a640 1 --2- 192.168.123.101:0/467313273 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc19e610 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f99bc009a50 tx=0x7f99bc037b00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.282+0000 7f99b37fe640 1 -- 192.168.123.101:0/467313273 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f99bc047070 con 0x7f99cc108860 2026-03-09T15:48:24.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.282+0000 7f99b37fe640 1 -- 192.168.123.101:0/467313273 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f99bc035dc0 con 0x7f99cc108860 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.282+0000 7f99b37fe640 1 -- 192.168.123.101:0/467313273 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f99bc03c070 con 0x7f99cc108860 2026-03-09T15:48:24.469 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.282+0000 7f99d4615640 1 -- 192.168.123.101:0/467313273 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f99cc19ede0 con 0x7f99cc108860 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.282+0000 7f99d4615640 1 -- 192.168.123.101:0/467313273 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f99cc1a1ad0 con 0x7f99cc108860 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.282+0000 7f99d4615640 1 -- 192.168.123.101:0/467313273 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9994005180 con 0x7f99cc108860 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.286+0000 7f99b37fe640 1 -- 192.168.123.101:0/467313273 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 10) ==== 50154+0+0 (secure 0 0 0) 0x7f99bc054080 con 0x7f99cc108860 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.286+0000 7f99b37fe640 1 --2- 192.168.123.101:0/467313273 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f99a803dbc0 0x7f99a8040080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.286+0000 7f99b37fe640 1 -- 192.168.123.101:0/467313273 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f99bc077620 con 0x7f99cc108860 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.286+0000 7f99b37fe640 1 -- 192.168.123.101:0/467313273 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f99bc077ae0 con 0x7f99cc108860 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.286+0000 7f99d1b89640 1 --2- 192.168.123.101:0/467313273 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f99a803dbc0 0x7f99a8040080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.286+0000 7f99d1b89640 1 --2- 192.168.123.101:0/467313273 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f99a803dbc0 0x7f99a8040080 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f99c0009a10 tx=0x7f99c0006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.382+0000 7f99d4615640 1 -- 192.168.123.101:0/467313273 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}) -- 0x7f9994002bf0 con 0x7f99a803dbc0 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.418+0000 7f99b37fe640 1 -- 192.168.123.101:0/467313273 <== mgr.14150 
v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f9994002bf0 con 0x7f99a803dbc0 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.418+0000 7f99d4615640 1 -- 192.168.123.101:0/467313273 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f99a803dbc0 msgr2=0x7f99a8040080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.418+0000 7f99d4615640 1 --2- 192.168.123.101:0/467313273 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f99a803dbc0 0x7f99a8040080 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f99c0009a10 tx=0x7f99c0006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.418+0000 7f99d4615640 1 -- 192.168.123.101:0/467313273 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 msgr2=0x7f99cc19e610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.418+0000 7f99d4615640 1 --2- 192.168.123.101:0/467313273 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc19e610 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f99bc009a50 tx=0x7f99bc037b00 comp rx=0 tx=0).stop 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.418+0000 7f99d4615640 1 -- 192.168.123.101:0/467313273 shutdown_connections 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.418+0000 7f99d4615640 1 --2- 192.168.123.101:0/467313273 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f99a803dbc0 0x7f99a8040080 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.418+0000 7f99d4615640 1 --2- 192.168.123.101:0/467313273 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f99cc108860 0x7f99cc19e610 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.418+0000 7f99d4615640 1 -- 192.168.123.101:0/467313273 >> 192.168.123.101:0/467313273 conn(0x7f99cc07bda0 msgr2=0x7f99cc105ef0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.422+0000 7f99d4615640 1 -- 192.168.123.101:0/467313273 shutdown_connections 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.422+0000 7f99d4615640 1 -- 192.168.123.101:0/467313273 wait complete. 2026-03-09T15:48:24.469 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial admin user... 
2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$fXcVvHo824Lq3JnTVMLY8ujNysrt.K2iUNSRXScCfJhU0UXqfnwCG", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773071304, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.594+0000 7f53454e1640 1 Processor -- start 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.594+0000 7f53454e1640 1 -- start start 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.594+0000 7f53454e1640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f5340108c30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.594+0000 7f53454e1640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f5340109200 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.594+0000 7f533effd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f5340108c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.594+0000 7f533effd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f5340108c30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38432/0 (socket says 192.168.123.101:38432) 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.594+0000 7f533effd640 1 -- 192.168.123.101:0/1592194472 learned_addr learned my addr 192.168.123.101:0/1592194472 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.594+0000 7f533effd640 1 -- 192.168.123.101:0/1592194472 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5340109a30 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f533effd640 1 --2- 192.168.123.101:0/1592194472 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f5340108c30 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f5334009920 tx=0x7f533402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=953b9da4a4e923ab server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f533dffb640 1 -- 192.168.123.101:0/1592194472 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f533403c070 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f533dffb640 1 -- 192.168.123.101:0/1592194472 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 
0x7f5334037440 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f533dffb640 1 -- 192.168.123.101:0/1592194472 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5334035340 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 -- 192.168.123.101:0/1592194472 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 msgr2=0x7f5340108c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 --2- 192.168.123.101:0/1592194472 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f5340108c30 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f5334009920 tx=0x7f533402ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 -- 192.168.123.101:0/1592194472 shutdown_connections 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 --2- 192.168.123.101:0/1592194472 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f5340108c30 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 -- 192.168.123.101:0/1592194472 >> 192.168.123.101:0/1592194472 conn(0x7f534007bd50 msgr2=0x7f534007c1a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 -- 192.168.123.101:0/1592194472 shutdown_connections 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 -- 192.168.123.101:0/1592194472 wait complete. 
2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 Processor -- start 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 -- start start 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f534019e5b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f534010cc60 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f533effd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f534019e5b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f533effd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f534019e5b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38440/0 (socket says 192.168.123.101:38440) 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f533effd640 1 -- 192.168.123.101:0/1553647638 learned_addr learned my addr 192.168.123.101:0/1553647638 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f533effd640 1 -- 192.168.123.101:0/1553647638 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f534019eaf0 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f533effd640 1 --2- 192.168.123.101:0/1553647638 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f534019e5b0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f53340098f0 tx=0x7f5334037f60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f531ffff640 1 -- 192.168.123.101:0/1553647638 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5334047070 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 -- 192.168.123.101:0/1553647638 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f534019ed80 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f53454e1640 1 -- 192.168.123.101:0/1553647638 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f53401a1a70 con 0x7f5340108830 2026-03-09T15:48:24.916 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f531ffff640 1 -- 192.168.123.101:0/1553647638 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f5334035dc0 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.598+0000 7f531ffff640 1 -- 192.168.123.101:0/1553647638 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f533403c070 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.602+0000 7f531ffff640 1 -- 192.168.123.101:0/1553647638 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 10) ==== 50154+0+0 (secure 0 0 0) 0x7f533404c430 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.602+0000 7f531ffff640 1 --2- 192.168.123.101:0/1553647638 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f530c03dbc0 0x7f530c040080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.602+0000 7f531ffff640 1 -- 192.168.123.101:0/1553647638 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f5334077ab0 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.602+0000 7f533e7fc640 1 --2- 192.168.123.101:0/1553647638 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f530c03dbc0 0x7f530c040080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.602+0000 7f53454e1640 1 -- 192.168.123.101:0/1553647638 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5300005180 con 0x7f5340108830 2026-03-09T15:48:24.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.602+0000 7f533e7fc640 1 --2- 192.168.123.101:0/1553647638 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f530c03dbc0 0x7f530c040080 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f5328009a10 tx=0x7f5328006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.606+0000 7f531ffff640 1 -- 192.168.123.101:0/1553647638 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f53340342a0 con 0x7f5340108830 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.706+0000 7f53454e1640 1 -- 192.168.123.101:0/1553647638 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}) -- 0x7f5300003c00 con 0x7f530c03dbc0 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: 
stderr 2026-03-09T15:48:24.870+0000 7f531ffff640 1 -- 192.168.123.101:0/1553647638 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+252 (secure 0 0 0) 0x7f5300003c00 con 0x7f530c03dbc0 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.870+0000 7f53454e1640 1 -- 192.168.123.101:0/1553647638 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f530c03dbc0 msgr2=0x7f530c040080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.870+0000 7f53454e1640 1 --2- 192.168.123.101:0/1553647638 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f530c03dbc0 0x7f530c040080 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f5328009a10 tx=0x7f5328006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.870+0000 7f53454e1640 1 -- 192.168.123.101:0/1553647638 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 msgr2=0x7f534019e5b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.870+0000 7f53454e1640 1 --2- 192.168.123.101:0/1553647638 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f534019e5b0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f53340098f0 tx=0x7f5334037f60 comp rx=0 tx=0).stop 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.870+0000 7f53454e1640 1 -- 192.168.123.101:0/1553647638 shutdown_connections 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.870+0000 7f53454e1640 1 --2- 192.168.123.101:0/1553647638 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f530c03dbc0 0x7f530c040080 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.870+0000 7f53454e1640 1 --2- 192.168.123.101:0/1553647638 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5340108830 0x7f534019e5b0 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.870+0000 7f53454e1640 1 -- 192.168.123.101:0/1553647638 >> 192.168.123.101:0/1553647638 conn(0x7f534007bd50 msgr2=0x7f5340105f00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.870+0000 7f53454e1640 1 -- 192.168.123.101:0/1553647638 shutdown_connections 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:24.870+0000 7f53454e1640 1 -- 192.168.123.101:0/1553647638 wait complete. 2026-03-09T15:48:24.917 INFO:teuthology.orchestra.run.vm01.stdout:Fetching dashboard port number... 
2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 8443 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.030+0000 7ffa45a7d640 1 Processor -- start 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.030+0000 7ffa45a7d640 1 -- start start 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.030+0000 7ffa45a7d640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa40108c60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.030+0000 7ffa45a7d640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7ffa40109230 con 0x7ffa40108860 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.030+0000 7ffa3effd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa40108c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.030+0000 7ffa3effd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa40108c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38454/0 (socket says 192.168.123.101:38454) 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.030+0000 7ffa3effd640 1 -- 192.168.123.101:0/1890033706 learned_addr learned my addr 192.168.123.101:0/1890033706 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.030+0000 7ffa3effd640 1 -- 192.168.123.101:0/1890033706 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ffa40109a60 con 0x7ffa40108860 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa3effd640 1 --2- 192.168.123.101:0/1890033706 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa40108c60 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7ffa34009920 tx=0x7ffa3402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=1b6e02f93952e669 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa3dffb640 1 -- 192.168.123.101:0/1890033706 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ffa3403c070 con 0x7ffa40108860 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa3dffb640 1 -- 192.168.123.101:0/1890033706 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ffa34037440 con 0x7ffa40108860 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1890033706 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] 
conn(0x7ffa40108860 msgr2=0x7ffa40108c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 --2- 192.168.123.101:0/1890033706 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa40108c60 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7ffa34009920 tx=0x7ffa3402ef20 comp rx=0 tx=0).stop 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1890033706 shutdown_connections 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 --2- 192.168.123.101:0/1890033706 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa40108c60 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1890033706 >> 192.168.123.101:0/1890033706 conn(0x7ffa4007bda0 msgr2=0x7ffa4007c1b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1890033706 shutdown_connections 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1890033706 wait complete. 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 Processor -- start 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 -- start start 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa4019e750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7ffa4010cc90 con 0x7ffa40108860 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa3effd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa4019e750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa3effd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa4019e750 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38462/0 (socket says 192.168.123.101:38462) 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa3effd640 1 -- 192.168.123.101:0/1869152731 learned_addr learned my addr 192.168.123.101:0/1869152731 (peer_addr_for_me v2:192.168.123.101:0/0) 
2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa3effd640 1 -- 192.168.123.101:0/1869152731 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ffa4019ec90 con 0x7ffa40108860 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa3effd640 1 --2- 192.168.123.101:0/1869152731 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa4019e750 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7ffa34035eb0 tx=0x7ffa34035ee0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa44a7b640 1 -- 192.168.123.101:0/1869152731 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ffa3403c070 con 0x7ffa40108860 2026-03-09T15:48:25.192 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1869152731 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ffa4019ef20 con 0x7ffa40108860 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.034+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1869152731 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ffa401a1c10 con 0x7ffa40108860 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.038+0000 7ffa44a7b640 1 -- 192.168.123.101:0/1869152731 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ffa34044070 con 0x7ffa40108860 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.038+0000 7ffa44a7b640 1 -- 192.168.123.101:0/1869152731 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ffa3403f9e0 con 0x7ffa40108860 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.038+0000 7ffa44a7b640 1 -- 192.168.123.101:0/1869152731 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 10) ==== 50154+0+0 (secure 0 0 0) 0x7ffa3403fb80 con 0x7ffa40108860 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.038+0000 7ffa44a7b640 1 --2- 192.168.123.101:0/1869152731 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ffa0803dbc0 0x7ffa08040080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.038+0000 7ffa44a7b640 1 -- 192.168.123.101:0/1869152731 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7ffa340771b0 con 0x7ffa40108860 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.038+0000 7ffa3e7fc640 1 --2- 192.168.123.101:0/1869152731 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ffa0803dbc0 0x7ffa08040080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:25.193 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.038+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1869152731 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ffa0c005180 con 0x7ffa40108860 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.042+0000 7ffa44a7b640 1 -- 192.168.123.101:0/1869152731 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ffa3404e1f0 con 0x7ffa40108860 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.042+0000 7ffa3e7fc640 1 --2- 192.168.123.101:0/1869152731 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ffa0803dbc0 0x7ffa08040080 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7ffa280099c0 tx=0x7ffa28006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.142+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1869152731 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"} v 0) -- 0x7ffa0c005470 con 0x7ffa40108860 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.142+0000 7ffa44a7b640 1 -- 192.168.123.101:0/1869152731 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]=0 v8) ==== 112+0+5 (secure 0 0 0) 0x7ffa340483c0 con 0x7ffa40108860 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.146+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1869152731 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ffa0803dbc0 msgr2=0x7ffa08040080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.146+0000 7ffa45a7d640 1 --2- 192.168.123.101:0/1869152731 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ffa0803dbc0 0x7ffa08040080 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7ffa280099c0 tx=0x7ffa28006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.146+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1869152731 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 msgr2=0x7ffa4019e750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.146+0000 7ffa45a7d640 1 --2- 192.168.123.101:0/1869152731 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa4019e750 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7ffa34035eb0 tx=0x7ffa34035ee0 comp rx=0 tx=0).stop 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.146+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1869152731 shutdown_connections 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.146+0000 7ffa45a7d640 1 --2- 
192.168.123.101:0/1869152731 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ffa0803dbc0 0x7ffa08040080 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.146+0000 7ffa45a7d640 1 --2- 192.168.123.101:0/1869152731 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ffa40108860 0x7ffa4019e750 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.146+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1869152731 >> 192.168.123.101:0/1869152731 conn(0x7ffa4007bda0 msgr2=0x7ffa40106070 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.146+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1869152731 shutdown_connections 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.146+0000 7ffa45a7d640 1 -- 192.168.123.101:0/1869152731 wait complete. 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:Ceph Dashboard is now available at: 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout: URL: https://vm01.local:8443/ 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout: User: admin 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout: Password: fgulgjefo6 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:25.193 INFO:teuthology.orchestra.run.vm01.stdout:Saving cluster configuration to /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config directory 2026-03-09T15:48:25.529 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.318+0000 7f3e5e75e640 1 Processor -- start 2026-03-09T15:48:25.529 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.318+0000 7f3e5e75e640 1 -- start start 2026-03-09T15:48:25.529 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.318+0000 7f3e5e75e640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 0x7f3e581069b0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:25.529 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.318+0000 7f3e5e75e640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f3e58106f80 con 0x7f3e581065b0 2026-03-09T15:48:25.529 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.318+0000 7f3e57fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 0x7f3e581069b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:25.529 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.318+0000 7f3e57fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 
0x7f3e581069b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38478/0 (socket says 192.168.123.101:38478) 2026-03-09T15:48:25.529 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.318+0000 7f3e57fff640 1 -- 192.168.123.101:0/1774696915 learned_addr learned my addr 192.168.123.101:0/1774696915 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:25.529 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.318+0000 7f3e57fff640 1 -- 192.168.123.101:0/1774696915 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3e581077b0 con 0x7f3e581065b0 2026-03-09T15:48:25.529 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e57fff640 1 --2- 192.168.123.101:0/1774696915 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 0x7f3e581069b0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f3e3c009b80 tx=0x7f3e3c02f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=eb3976a7e65807fb server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e56ffd640 1 -- 192.168.123.101:0/1774696915 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3e3c03c070 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e56ffd640 1 -- 192.168.123.101:0/1774696915 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3e3c037440 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e5e75e640 1 -- 192.168.123.101:0/1774696915 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 msgr2=0x7f3e581069b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e5e75e640 1 --2- 192.168.123.101:0/1774696915 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 0x7f3e581069b0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f3e3c009b80 tx=0x7f3e3c02f190 comp rx=0 tx=0).stop 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e5e75e640 1 -- 192.168.123.101:0/1774696915 shutdown_connections 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e5e75e640 1 --2- 192.168.123.101:0/1774696915 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 0x7f3e581069b0 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e5e75e640 1 -- 192.168.123.101:0/1774696915 >> 192.168.123.101:0/1774696915 conn(0x7f3e58101d60 msgr2=0x7f3e58104180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e5e75e640 1 -- 192.168.123.101:0/1774696915 shutdown_connections 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 
2026-03-09T15:48:25.322+0000 7f3e5e75e640 1 -- 192.168.123.101:0/1774696915 wait complete. 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e5e75e640 1 Processor -- start 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e5e75e640 1 -- start start 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e5e75e640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 0x7f3e581953a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e5e75e640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f3e5810a9e0 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e57fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 0x7f3e581953a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e57fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 0x7f3e581953a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:38480/0 (socket says 192.168.123.101:38480) 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e57fff640 1 -- 192.168.123.101:0/1457351094 learned_addr learned my addr 192.168.123.101:0/1457351094 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e57fff640 1 -- 192.168.123.101:0/1457351094 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3e581958e0 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e57fff640 1 --2- 192.168.123.101:0/1457351094 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 0x7f3e581953a0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f3e3c003940 tx=0x7f3e3c003970 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.322+0000 7f3e557fa640 1 -- 192.168.123.101:0/1457351094 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3e3c047070 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.326+0000 7f3e557fa640 1 -- 192.168.123.101:0/1457351094 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3e3c003c00 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.326+0000 7f3e557fa640 1 -- 192.168.123.101:0/1457351094 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 
(secure 0 0 0) 0x7f3e3c03c040 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.326+0000 7f3e5e75e640 1 -- 192.168.123.101:0/1457351094 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3e58195b70 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.326+0000 7f3e5e75e640 1 -- 192.168.123.101:0/1457351094 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3e58192130 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.326+0000 7f3e4affd640 1 -- 192.168.123.101:0/1457351094 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3e20005180 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.326+0000 7f3e557fa640 1 -- 192.168.123.101:0/1457351094 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 10) ==== 50154+0+0 (secure 0 0 0) 0x7f3e3c035640 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.326+0000 7f3e557fa640 1 --2- 192.168.123.101:0/1457351094 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3e2c03d800 0x7f3e2c03fcc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.326+0000 7f3e557fa640 1 -- 192.168.123.101:0/1457351094 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f3e3c0766f0 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.326+0000 7f3e577fe640 1 --2- 192.168.123.101:0/1457351094 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3e2c03d800 0x7f3e2c03fcc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.326+0000 7f3e577fe640 1 --2- 192.168.123.101:0/1457351094 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3e2c03d800 0x7f3e2c03fcc0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f3e44009a10 tx=0x7f3e44006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.330+0000 7f3e557fa640 1 -- 192.168.123.101:0/1457351094 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3e3c0342a0 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.478+0000 7f3e4affd640 1 -- 192.168.123.101:0/1457351094 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) -- 0x7f3e20005470 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.482+0000 7f3e557fa640 1 
-- 192.168.123.101:0/1457351094 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{prefix=config-key set, key=mgr/dashboard/cluster/status}]=0 set mgr/dashboard/cluster/status v24) ==== 153+0+0 (secure 0 0 0) 0x7f3e3c003db0 con 0x7f3e581065b0 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.486+0000 7f3e4affd640 1 -- 192.168.123.101:0/1457351094 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3e2c03d800 msgr2=0x7f3e2c03fcc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.486+0000 7f3e4affd640 1 --2- 192.168.123.101:0/1457351094 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3e2c03d800 0x7f3e2c03fcc0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f3e44009a10 tx=0x7f3e44006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.486+0000 7f3e4affd640 1 -- 192.168.123.101:0/1457351094 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 msgr2=0x7f3e581953a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.486+0000 7f3e4affd640 1 --2- 192.168.123.101:0/1457351094 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 0x7f3e581953a0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f3e3c003940 tx=0x7f3e3c003970 comp rx=0 tx=0).stop 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.486+0000 7f3e4affd640 1 -- 192.168.123.101:0/1457351094 shutdown_connections 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.486+0000 7f3e4affd640 1 --2- 192.168.123.101:0/1457351094 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3e2c03d800 0x7f3e2c03fcc0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.486+0000 7f3e4affd640 1 --2- 192.168.123.101:0/1457351094 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e581065b0 0x7f3e581953a0 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.486+0000 7f3e4affd640 1 -- 192.168.123.101:0/1457351094 >> 192.168.123.101:0/1457351094 conn(0x7f3e58101d60 msgr2=0x7f3e581029e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.486+0000 7f3e4affd640 1 -- 192.168.123.101:0/1457351094 shutdown_connections 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr 2026-03-09T15:48:25.486+0000 7f3e4affd640 1 -- 192.168.123.101:0/1457351094 wait complete. 
2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: sudo /usr/sbin/cephadm shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:Or, if you are only running a single cluster on this host: 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: sudo /usr/sbin/cephadm shell 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: ceph telemetry on 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout:For more information see: 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-09T15:48:25.530 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:25.531 INFO:teuthology.orchestra.run.vm01.stdout:Bootstrap complete. 2026-03-09T15:48:25.554 INFO:tasks.cephadm:Fetching config... 2026-03-09T15:48:25.554 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:48:25.554 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T15:48:25.598 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T15:48:25.598 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:48:25.599 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T15:48:25.642 INFO:tasks.cephadm:Fetching mon keyring... 
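The bootstrap output above already documents how to reach this cluster's CLI. As a minimal post-bootstrap sanity check (a sketch only, not part of this run; it reuses the fsid and the default config/keyring paths printed by cephadm above):

    # open a containerized shell against this cluster and ask for its status
    sudo /usr/sbin/cephadm shell \
        --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 \
        -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring \
        -- ceph -s

`ceph -s` should show the single mon and the active mgr that appear in the log lines above.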
2026-03-09T15:48:25.642 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:48:25.642 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.a/keyring of=/dev/stdout 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: cephadm 2026-03-09T15:48:24.145032+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTING 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: cephadm 2026-03-09T15:48:24.145032+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTING 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: cephadm 2026-03-09T15:48:24.270360+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: cephadm 2026-03-09T15:48:24.270360+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: cephadm 2026-03-09T15:48:24.271018+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Client ('192.168.123.101', 56886) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: cephadm 2026-03-09T15:48:24.271018+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Client ('192.168.123.101', 56886) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: cephadm 2026-03-09T15:48:24.371955+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: cephadm 2026-03-09T15:48:24.371955+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: cephadm 2026-03-09T15:48:24.372272+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTED 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: cephadm 2026-03-09T15:48:24.372272+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTED 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: audit 2026-03-09T15:48:24.389155+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: audit 2026-03-09T15:48:24.389155+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: audit 2026-03-09T15:48:24.417819+0000 mon.a (mon.0) 86 : audit 
[INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: audit 2026-03-09T15:48:24.417819+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: audit 2026-03-09T15:48:24.421016+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: audit 2026-03-09T15:48:24.421016+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: audit 2026-03-09T15:48:24.872886+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: audit 2026-03-09T15:48:24.872886+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: audit 2026-03-09T15:48:25.148261+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.101:0/1869152731' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T15:48:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:25 vm01 bash[20728]: audit 2026-03-09T15:48:25.148261+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.101:0/1869152731' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T15:48:25.690 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-09T15:48:25.690 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:48:25.690 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-09T15:48:25.735 INFO:tasks.cephadm:Installing pub ssh key for root users... 
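The task next installs the cluster's public SSH key for root on every target node, as the following lines show. Done by hand, the documented equivalent would be to export the key cephadm uses for orchestration and copy it to each host (a sketch; <new-host> is a placeholder, not a node from this run):

    # export the orchestration key and push it to a new host's root account
    ceph cephadm get-pub-key > ~/ceph.pub
    ssh-copy-id -f -i ~/ceph.pub root@<new-host>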
2026-03-09T15:48:25.735 DEBUG:teuthology.orchestra.run.vm01:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK9x06PqRiQAZsjB9w6vP4G9bJhdyvM1QlHX61CC2Pex ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T15:48:25.790 INFO:teuthology.orchestra.run.vm01.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK9x06PqRiQAZsjB9w6vP4G9bJhdyvM1QlHX61CC2Pex ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:25.795 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK9x06PqRiQAZsjB9w6vP4G9bJhdyvM1QlHX61CC2Pex ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T15:48:25.808 INFO:teuthology.orchestra.run.vm09.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK9x06PqRiQAZsjB9w6vP4G9bJhdyvM1QlHX61CC2Pex ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:25.813 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-09T15:48:26.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:26 vm01 bash[20728]: audit 2026-03-09T15:48:24.712827+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:26.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:26 vm01 bash[20728]: audit 2026-03-09T15:48:24.712827+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:26.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:26 vm01 bash[20728]: audit 2026-03-09T15:48:25.486282+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 192.168.123.101:0/1457351094' entity='client.admin' 2026-03-09T15:48:26.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:26 vm01 bash[20728]: audit 2026-03-09T15:48:25.486282+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 
192.168.123.101:0/1457351094' entity='client.admin' 2026-03-09T15:48:26.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:26 vm01 bash[20728]: cluster 2026-03-09T15:48:25.877108+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: y(active, since 2s) 2026-03-09T15:48:26.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:26 vm01 bash[20728]: cluster 2026-03-09T15:48:25.877108+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: y(active, since 2s) 2026-03-09T15:48:29.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:29 vm01 bash[20728]: audit 2026-03-09T15:48:28.132267+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:29.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:29 vm01 bash[20728]: audit 2026-03-09T15:48:28.132267+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:29.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:29 vm01 bash[20728]: audit 2026-03-09T15:48:28.773105+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:29.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:29 vm01 bash[20728]: audit 2026-03-09T15:48:28.773105+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:29.980 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.a/config 2026-03-09T15:48:30.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.118+0000 7f69c002c640 1 -- 192.168.123.101:0/1553632258 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f69b80770a0 msgr2=0x7f69b8075500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:30.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.118+0000 7f69c002c640 1 --2- 192.168.123.101:0/1553632258 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f69b80770a0 0x7f69b8075500 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f69a8009a00 tx=0x7f69a802f310 comp rx=0 tx=0).stop 2026-03-09T15:48:30.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 -- 192.168.123.101:0/1553632258 shutdown_connections 2026-03-09T15:48:30.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 --2- 192.168.123.101:0/1553632258 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f69b80770a0 0x7f69b8075500 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:30.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 -- 192.168.123.101:0/1553632258 >> 192.168.123.101:0/1553632258 conn(0x7f69b80fdaf0 msgr2=0x7f69b80fff30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:30.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 -- 192.168.123.101:0/1553632258 shutdown_connections 2026-03-09T15:48:30.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 -- 192.168.123.101:0/1553632258 wait complete. 
2026-03-09T15:48:30.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 Processor -- start 2026-03-09T15:48:30.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 -- start start 2026-03-09T15:48:30.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f69b80770a0 0x7f69b81a7f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:30.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f69b81054b0 con 0x7f69b80770a0 2026-03-09T15:48:30.129 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69bdda1640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f69b80770a0 0x7f69b81a7f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:30.129 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69bdda1640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f69b80770a0 0x7f69b81a7f00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:60354/0 (socket says 192.168.123.101:60354) 2026-03-09T15:48:30.129 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69bdda1640 1 -- 192.168.123.101:0/168017167 learned_addr learned my addr 192.168.123.101:0/168017167 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:30.129 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69bdda1640 1 -- 192.168.123.101:0/168017167 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f69b81a8440 con 0x7f69b80770a0 2026-03-09T15:48:30.129 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69bdda1640 1 --2- 192.168.123.101:0/168017167 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f69b80770a0 0x7f69b81a7f00 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f69a8004600 tx=0x7f69a8004630 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:30.129 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69a6ffd640 1 -- 192.168.123.101:0/168017167 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f69a803d070 con 0x7f69b80770a0 2026-03-09T15:48:30.130 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 -- 192.168.123.101:0/168017167 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f69b81a86d0 con 0x7f69b80770a0 2026-03-09T15:48:30.130 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69a6ffd640 1 -- 192.168.123.101:0/168017167 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f69a8045070 con 0x7f69b80770a0 2026-03-09T15:48:30.130 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69a6ffd640 1 -- 192.168.123.101:0/168017167 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f69a80403f0 con 0x7f69b80770a0 2026-03-09T15:48:30.130 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69a6ffd640 1 -- 192.168.123.101:0/168017167 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f69a8040630 con 0x7f69b80770a0 2026-03-09T15:48:30.130 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 -- 192.168.123.101:0/168017167 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f69b81ab3c0 con 0x7f69b80770a0 2026-03-09T15:48:30.131 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.122+0000 7f69c002c640 1 -- 192.168.123.101:0/168017167 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f69b8105c50 con 0x7f69b80770a0 2026-03-09T15:48:30.134 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.126+0000 7f69a6ffd640 1 --2- 192.168.123.101:0/168017167 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f699403d9c0 0x7f699403fe80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:30.135 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.126+0000 7f69bd5a0640 1 --2- 192.168.123.101:0/168017167 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f699403d9c0 0x7f699403fe80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:30.135 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.126+0000 7f69bd5a0640 1 --2- 192.168.123.101:0/168017167 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f699403d9c0 0x7f699403fe80 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f69ac009a10 tx=0x7f69ac006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:30.135 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.130+0000 7f69a6ffd640 1 -- 192.168.123.101:0/168017167 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f69a8079910 con 0x7f69b80770a0 2026-03-09T15:48:30.135 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.130+0000 7f69a6ffd640 1 -- 192.168.123.101:0/168017167 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f69a8079bd0 con 0x7f69b80770a0 2026-03-09T15:48:30.229 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.222+0000 7f69c002c640 1 -- 192.168.123.101:0/168017167 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/allow_ptrace}] v 0) -- 0x7f69b806fb90 con 0x7f69b80770a0 2026-03-09T15:48:30.236 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.226+0000 7f69a6ffd640 1 -- 192.168.123.101:0/168017167 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/allow_ptrace}]=0 v9) ==== 125+0+0 (secure 0 0 0) 0x7f69a80357c0 con 0x7f69b80770a0 2026-03-09T15:48:30.241 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.234+0000 7f69c002c640 1 -- 192.168.123.101:0/168017167 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f699403d9c0 msgr2=0x7f699403fe80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:30.241 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.234+0000 7f69c002c640 1 --2- 192.168.123.101:0/168017167 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f699403d9c0 0x7f699403fe80 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f69ac009a10 tx=0x7f69ac006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:30.241 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.234+0000 7f69c002c640 1 -- 192.168.123.101:0/168017167 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f69b80770a0 msgr2=0x7f69b81a7f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:30.241 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.234+0000 7f69c002c640 1 --2- 192.168.123.101:0/168017167 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f69b80770a0 0x7f69b81a7f00 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f69a8004600 tx=0x7f69a8004630 comp rx=0 tx=0).stop 2026-03-09T15:48:30.241 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.234+0000 7f69c002c640 1 -- 192.168.123.101:0/168017167 shutdown_connections 2026-03-09T15:48:30.241 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.234+0000 7f69c002c640 1 --2- 192.168.123.101:0/168017167 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f699403d9c0 0x7f699403fe80 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:30.241 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.234+0000 7f69c002c640 1 --2- 192.168.123.101:0/168017167 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f69b80770a0 0x7f69b81a7f00 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:30.241 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.234+0000 7f69c002c640 1 -- 192.168.123.101:0/168017167 >> 192.168.123.101:0/168017167 conn(0x7f69b80fdaf0 msgr2=0x7f69b80fe4a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:30.242 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.234+0000 7f69c002c640 1 -- 192.168.123.101:0/168017167 shutdown_connections 2026-03-09T15:48:30.242 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:30.234+0000 7f69c002c640 1 -- 192.168.123.101:0/168017167 wait complete. 2026-03-09T15:48:30.305 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-09T15:48:30.306 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-09T15:48:31.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:30 vm01 bash[20728]: cluster 2026-03-09T15:48:29.776614+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: y(active, since 6s) 2026-03-09T15:48:31.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:30 vm01 bash[20728]: cluster 2026-03-09T15:48:29.776614+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: y(active, since 6s) 2026-03-09T15:48:31.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:30 vm01 bash[20728]: audit 2026-03-09T15:48:30.229654+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 
192.168.123.101:0/168017167' entity='client.admin' 2026-03-09T15:48:31.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:30 vm01 bash[20728]: audit 2026-03-09T15:48:30.229654+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 192.168.123.101:0/168017167' entity='client.admin' 2026-03-09T15:48:34.994 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.a/config 2026-03-09T15:48:35.149 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44e0add640 1 -- 192.168.123.101:0/3900011709 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f44dc1045f0 msgr2=0x7f44dc1049f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:35.149 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44e0add640 1 --2- 192.168.123.101:0/3900011709 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f44dc1045f0 0x7f44dc1049f0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f44c4009990 tx=0x7f44c402f290 comp rx=0 tx=0).stop 2026-03-09T15:48:35.149 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44e0add640 1 -- 192.168.123.101:0/3900011709 shutdown_connections 2026-03-09T15:48:35.149 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44e0add640 1 --2- 192.168.123.101:0/3900011709 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f44dc1045f0 0x7f44dc1049f0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:35.149 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44e0add640 1 -- 192.168.123.101:0/3900011709 >> 192.168.123.101:0/3900011709 conn(0x7f44dc0ffda0 msgr2=0x7f44dc1021c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:35.149 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44e0add640 1 -- 192.168.123.101:0/3900011709 shutdown_connections 2026-03-09T15:48:35.149 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44e0add640 1 -- 192.168.123.101:0/3900011709 wait complete. 
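The `ceph orch client-keyring set client.admin '*' --mode 0755` call issued above registers the admin keyring for distribution to every host matching the '*' placement, with mode 0755. A way to confirm the entry was stored (a sketch only; this listing command is not executed in this run):

    # list the client keyrings cephadm is managing, with placement and mode
    sudo cephadm shell -- ceph orch client-keyring ls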
2026-03-09T15:48:35.149 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44e0add640 1 Processor -- start 2026-03-09T15:48:35.149 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44e0add640 1 -- start start 2026-03-09T15:48:35.150 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44e0add640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f44dc1045f0 0x7f44dc19b180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:35.150 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44e0add640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f44dc1099f0 con 0x7f44dc1045f0 2026-03-09T15:48:35.153 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44da575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f44dc1045f0 0x7f44dc19b180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:35.154 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44da575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f44dc1045f0 0x7f44dc19b180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:60386/0 (socket says 192.168.123.101:60386) 2026-03-09T15:48:35.154 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.142+0000 7f44da575640 1 -- 192.168.123.101:0/1833225223 learned_addr learned my addr 192.168.123.101:0/1833225223 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:35.154 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.146+0000 7f44da575640 1 -- 192.168.123.101:0/1833225223 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f44dc19b6c0 con 0x7f44dc1045f0 2026-03-09T15:48:35.154 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.146+0000 7f44da575640 1 --2- 192.168.123.101:0/1833225223 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f44dc1045f0 0x7f44dc19b180 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f44c4002e80 tx=0x7f44c4004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:35.154 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.146+0000 7f44c37fe640 1 -- 192.168.123.101:0/1833225223 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f44c40049f0 con 0x7f44dc1045f0 2026-03-09T15:48:35.155 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.146+0000 7f44e0add640 1 -- 192.168.123.101:0/1833225223 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f44dc19b950 con 0x7f44dc1045f0 2026-03-09T15:48:35.155 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.146+0000 7f44e0add640 1 -- 192.168.123.101:0/1833225223 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f44dc19bd50 con 0x7f44dc1045f0 2026-03-09T15:48:35.156 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.146+0000 7f44c37fe640 1 -- 192.168.123.101:0/1833225223 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f44c4005020 con 0x7f44dc1045f0 2026-03-09T15:48:35.156 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.146+0000 7f44c37fe640 1 -- 192.168.123.101:0/1833225223 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f44c4038600 con 0x7f44dc1045f0 2026-03-09T15:48:35.156 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.150+0000 7f44c37fe640 1 -- 192.168.123.101:0/1833225223 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f44c40387a0 con 0x7f44dc1045f0 2026-03-09T15:48:35.156 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.150+0000 7f44c37fe640 1 --2- 192.168.123.101:0/1833225223 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f44ac03dd80 0x7f44ac040240 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:35.156 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.150+0000 7f44c37fe640 1 -- 192.168.123.101:0/1833225223 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f44c403e070 con 0x7f44dc1045f0 2026-03-09T15:48:35.157 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.150+0000 7f44d9d74640 1 --2- 192.168.123.101:0/1833225223 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f44ac03dd80 0x7f44ac040240 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:35.157 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.150+0000 7f44e0add640 1 -- 192.168.123.101:0/1833225223 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f44dc10a190 con 0x7f44dc1045f0 2026-03-09T15:48:35.160 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.154+0000 7f44d9d74640 1 --2- 192.168.123.101:0/1833225223 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f44ac03dd80 0x7f44ac040240 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f44d00099c0 tx=0x7f44d0006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:35.161 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.154+0000 7f44c37fe640 1 -- 192.168.123.101:0/1833225223 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f44c40373d0 con 0x7f44dc1045f0 2026-03-09T15:48:35.257 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.250+0000 7f44e0add640 1 -- 192.168.123.101:0/1833225223 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}) -- 0x7f44dc1947f0 con 0x7f44ac03dd80 2026-03-09T15:48:35.263 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.254+0000 7f44c37fe640 1 -- 192.168.123.101:0/1833225223 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7f44dc1947f0 con 0x7f44ac03dd80 2026-03-09T15:48:35.272 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.266+0000 7f44e0add640 1 -- 192.168.123.101:0/1833225223 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f44ac03dd80 msgr2=0x7f44ac040240 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T15:48:35.272 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.266+0000 7f44e0add640 1 --2- 192.168.123.101:0/1833225223 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f44ac03dd80 0x7f44ac040240 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f44d00099c0 tx=0x7f44d0006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:35.272 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.266+0000 7f44e0add640 1 -- 192.168.123.101:0/1833225223 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f44dc1045f0 msgr2=0x7f44dc19b180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:35.272 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.266+0000 7f44e0add640 1 --2- 192.168.123.101:0/1833225223 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f44dc1045f0 0x7f44dc19b180 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f44c4002e80 tx=0x7f44c4004290 comp rx=0 tx=0).stop 2026-03-09T15:48:35.272 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.266+0000 7f44e0add640 1 -- 192.168.123.101:0/1833225223 shutdown_connections 2026-03-09T15:48:35.272 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.266+0000 7f44e0add640 1 --2- 192.168.123.101:0/1833225223 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f44ac03dd80 0x7f44ac040240 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:35.272 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.266+0000 7f44e0add640 1 --2- 192.168.123.101:0/1833225223 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f44dc1045f0 0x7f44dc19b180 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:35.272 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.266+0000 7f44e0add640 1 -- 192.168.123.101:0/1833225223 >> 192.168.123.101:0/1833225223 conn(0x7f44dc0ffda0 msgr2=0x7f44dc100880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:35.272 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.266+0000 7f44e0add640 1 -- 192.168.123.101:0/1833225223 shutdown_connections 2026-03-09T15:48:35.272 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:35.266+0000 7f44e0add640 1 -- 192.168.123.101:0/1833225223 wait complete. 2026-03-09T15:48:35.344 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm09 2026-03-09T15:48:35.344 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:48:35.344 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.conf 2026-03-09T15:48:35.348 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:48:35.348 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:35.392 INFO:tasks.cephadm:Adding host vm09 to orchestrator... 
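With the minimal conf and admin keyring now written to vm09, the next command registers that node with the orchestrator. A sketch of the same step run interactively, followed by a verification step that is not part of this run:

    # add the second node and confirm the orchestrator now knows both hosts
    sudo cephadm shell -- ceph orch host add vm09
    sudo cephadm shell -- ceph orch host ls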
2026-03-09T15:48:35.393 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch host add vm09 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.720735+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.720735+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.724500+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.724500+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.725414+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.725414+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.729433+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.729433+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.735362+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.735362+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.738347+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:34.738347+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.255546+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch 
client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.255546+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.259012+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.259012+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.259827+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.259827+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.261457+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.261457+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:36.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.262249+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.262249+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: cephadm 2026-03-09T15:48:35.263131+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: cephadm 2026-03-09T15:48:35.263131+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: cephadm 2026-03-09T15:48:35.312035+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: cephadm 2026-03-09T15:48:35.312035+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating 
vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: cephadm 2026-03-09T15:48:35.361598+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: cephadm 2026-03-09T15:48:35.361598+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: cephadm 2026-03-09T15:48:35.396145+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: cephadm 2026-03-09T15:48:35.396145+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.450121+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.450121+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.453315+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.453315+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.456253+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:36.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:35 vm01 bash[20728]: audit 2026-03-09T15:48:35.456253+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:40.028 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.a/config 2026-03-09T15:48:40.199 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 -- 192.168.123.101:0/2374419675 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f347c102560 msgr2=0x7f347c102940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:40.200 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 --2- 192.168.123.101:0/2374419675 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f347c102560 0x7f347c102940 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f34640099b0 tx=0x7f346402f2b0 comp rx=0 tx=0).stop 2026-03-09T15:48:40.200 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 -- 192.168.123.101:0/2374419675 shutdown_connections 2026-03-09T15:48:40.200 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 --2- 192.168.123.101:0/2374419675 >> 
[v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f347c102560 0x7f347c102940 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:40.200 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 -- 192.168.123.101:0/2374419675 >> 192.168.123.101:0/2374419675 conn(0x7f347c0fe000 msgr2=0x7f347c100420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:40.200 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 -- 192.168.123.101:0/2374419675 shutdown_connections 2026-03-09T15:48:40.200 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 -- 192.168.123.101:0/2374419675 wait complete. 2026-03-09T15:48:40.200 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 Processor -- start 2026-03-09T15:48:40.201 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 -- start start 2026-03-09T15:48:40.201 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f347c102560 0x7f347c075370 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:40.201 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f347c10bc90 con 0x7f347c102560 2026-03-09T15:48:40.201 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3480a0c640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f347c102560 0x7f347c075370 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:40.201 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3480a0c640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f347c102560 0x7f347c075370 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:50470/0 (socket says 192.168.123.101:50470) 2026-03-09T15:48:40.201 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3480a0c640 1 -- 192.168.123.101:0/521715138 learned_addr learned my addr 192.168.123.101:0/521715138 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:40.202 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3480a0c640 1 -- 192.168.123.101:0/521715138 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f347c077260 con 0x7f347c102560 2026-03-09T15:48:40.202 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3480a0c640 1 --2- 192.168.123.101:0/521715138 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f347c102560 0x7f347c075370 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f3464004290 tx=0x7f34640042c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:40.203 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3471ffb640 1 -- 192.168.123.101:0/521715138 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3464038470 con 0x7f347c102560 2026-03-09T15:48:40.203 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 
7f3471ffb640 1 -- 192.168.123.101:0/521715138 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3464046070 con 0x7f347c102560 2026-03-09T15:48:40.203 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3471ffb640 1 -- 192.168.123.101:0/521715138 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3464041600 con 0x7f347c102560 2026-03-09T15:48:40.203 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 -- 192.168.123.101:0/521715138 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f347c0758b0 con 0x7f347c102560 2026-03-09T15:48:40.203 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.194+0000 7f3482c97640 1 -- 192.168.123.101:0/521715138 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f347c075d10 con 0x7f347c102560 2026-03-09T15:48:40.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.198+0000 7f3471ffb640 1 -- 192.168.123.101:0/521715138 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f34640417a0 con 0x7f347c102560 2026-03-09T15:48:40.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.198+0000 7f3471ffb640 1 --2- 192.168.123.101:0/521715138 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f345803dd30 0x7f34580401f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:40.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.198+0000 7f3471ffb640 1 -- 192.168.123.101:0/521715138 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f3464077a80 con 0x7f347c102560 2026-03-09T15:48:40.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.198+0000 7f3473fff640 1 --2- 192.168.123.101:0/521715138 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f345803dd30 0x7f34580401f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:40.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.198+0000 7f3473fff640 1 --2- 192.168.123.101:0/521715138 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f345803dd30 0x7f34580401f0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f346c009a10 tx=0x7f346c006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:40.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.198+0000 7f3482c97640 1 -- 192.168.123.101:0/521715138 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3444005180 con 0x7f347c102560 2026-03-09T15:48:40.210 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.202+0000 7f3471ffb640 1 -- 192.168.123.101:0/521715138 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f34640071c0 con 0x7f347c102560 2026-03-09T15:48:40.305 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:40.298+0000 7f3482c97640 1 -- 192.168.123.101:0/521715138 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm09", "target": 
["mon-mgr", ""]}) -- 0x7f3444002bf0 con 0x7f345803dd30 2026-03-09T15:48:40.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:40 vm01 bash[20728]: audit 2026-03-09T15:48:40.303714+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:40.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:40 vm01 bash[20728]: audit 2026-03-09T15:48:40.303714+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:41.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:41 vm01 bash[20728]: cephadm 2026-03-09T15:48:40.860373+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm09 2026-03-09T15:48:41.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:41 vm01 bash[20728]: cephadm 2026-03-09T15:48:40.860373+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm09 2026-03-09T15:48:42.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:42.174+0000 7f3471ffb640 1 -- 192.168.123.101:0/521715138 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (secure 0 0 0) 0x7f3444002bf0 con 0x7f345803dd30 2026-03-09T15:48:42.183 INFO:teuthology.orchestra.run.vm01.stdout:Added host 'vm09' with addr '192.168.123.109' 2026-03-09T15:48:42.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:42.178+0000 7f3482c97640 1 -- 192.168.123.101:0/521715138 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f345803dd30 msgr2=0x7f34580401f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:42.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:42.178+0000 7f3482c97640 1 --2- 192.168.123.101:0/521715138 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f345803dd30 0x7f34580401f0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f346c009a10 tx=0x7f346c006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:42.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:42.178+0000 7f3482c97640 1 -- 192.168.123.101:0/521715138 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f347c102560 msgr2=0x7f347c075370 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:42.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:42.178+0000 7f3482c97640 1 --2- 192.168.123.101:0/521715138 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f347c102560 0x7f347c075370 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f3464004290 tx=0x7f34640042c0 comp rx=0 tx=0).stop 2026-03-09T15:48:42.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:42.178+0000 7f3482c97640 1 -- 192.168.123.101:0/521715138 shutdown_connections 2026-03-09T15:48:42.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:42.178+0000 7f3482c97640 1 --2- 192.168.123.101:0/521715138 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f345803dd30 0x7f34580401f0 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:42.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:42.178+0000 7f3482c97640 1 --2- 192.168.123.101:0/521715138 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f347c102560 0x7f347c075370 unknown :-1 s=CLOSED pgs=81 cs=0 
l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:42.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:42.178+0000 7f3482c97640 1 -- 192.168.123.101:0/521715138 >> 192.168.123.101:0/521715138 conn(0x7f347c0fe000 msgr2=0x7f347c10a280 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:42.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:42.178+0000 7f3482c97640 1 -- 192.168.123.101:0/521715138 shutdown_connections 2026-03-09T15:48:42.185 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:42.178+0000 7f3482c97640 1 -- 192.168.123.101:0/521715138 wait complete. 2026-03-09T15:48:42.235 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch host ls --format=json 2026-03-09T15:48:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:43 vm01 bash[20728]: audit 2026-03-09T15:48:42.177384+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:43 vm01 bash[20728]: audit 2026-03-09T15:48:42.177384+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:43 vm01 bash[20728]: cephadm 2026-03-09T15:48:42.178004+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm09 2026-03-09T15:48:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:43 vm01 bash[20728]: cephadm 2026-03-09T15:48:42.178004+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm09 2026-03-09T15:48:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:43 vm01 bash[20728]: audit 2026-03-09T15:48:42.178762+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:43 vm01 bash[20728]: audit 2026-03-09T15:48:42.178762+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:43 vm01 bash[20728]: audit 2026-03-09T15:48:42.482178+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:43 vm01 bash[20728]: audit 2026-03-09T15:48:42.482178+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:44 vm01 bash[20728]: cluster 2026-03-09T15:48:43.045223+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:44 vm01 bash[20728]: cluster 2026-03-09T15:48:43.045223+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:44 vm01 bash[20728]: audit 2026-03-09T15:48:43.829980+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:44 vm01 
bash[20728]: audit 2026-03-09T15:48:43.829980+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:45.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:45 vm01 bash[20728]: audit 2026-03-09T15:48:44.437332+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:45.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:45 vm01 bash[20728]: audit 2026-03-09T15:48:44.437332+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:46.848 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.a/config 2026-03-09T15:48:46.865 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:46 vm01 bash[20728]: cluster 2026-03-09T15:48:45.045524+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:46.865 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:46 vm01 bash[20728]: cluster 2026-03-09T15:48:45.045524+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:47.001 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe09740e640 1 -- 192.168.123.101:0/3042399219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe090101270 msgr2=0x7fe090101650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:47.001 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe09740e640 1 --2- 192.168.123.101:0/3042399219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe090101270 0x7fe090101650 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7fe078009a00 tx=0x7fe07802f3a0 comp rx=0 tx=0).stop 2026-03-09T15:48:47.001 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe09740e640 1 -- 192.168.123.101:0/3042399219 shutdown_connections 2026-03-09T15:48:47.002 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe09740e640 1 --2- 192.168.123.101:0/3042399219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe090101270 0x7fe090101650 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:47.002 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe09740e640 1 -- 192.168.123.101:0/3042399219 >> 192.168.123.101:0/3042399219 conn(0x7fe0900fcec0 msgr2=0x7fe0900ff2e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:47.002 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe09740e640 1 -- 192.168.123.101:0/3042399219 shutdown_connections 2026-03-09T15:48:47.002 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe09740e640 1 -- 192.168.123.101:0/3042399219 wait complete. 
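Every orchestrator call in this run goes through the same "cephadm ... shell ... -- ceph ..." wrapper seen above, and the invocation in flight here (ceph orch host ls --format=json, whose JSON reply is printed a few lines below) confirms that vm01 and vm09 are both in the inventory. A small illustrative sketch of that verification; the cephadm_ceph helper name and the use of jq are assumptions for readability and are not part of this run:

    # Hypothetical convenience wrapper around the cephadm shell invocation used throughout this log.
    cephadm_ceph() {
        sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
            shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
            --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph "$@"
    }

    # List the managed hosts as JSON and print just the hostnames (expected here: vm01 and vm09).
    cephadm_ceph orch host ls --format=json | jq -r '.[].hostname'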
2026-03-09T15:48:47.002 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe09740e640 1 Processor -- start 2026-03-09T15:48:47.002 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe09740e640 1 -- start start 2026-03-09T15:48:47.002 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe09740e640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe090101270 0x7fe0901a5670 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:47.003 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe09740e640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fe090112480 con 0x7fe090101270 2026-03-09T15:48:47.003 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe095183640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe090101270 0x7fe0901a5670 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:47.003 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe095183640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe090101270 0x7fe0901a5670 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:50488/0 (socket says 192.168.123.101:50488) 2026-03-09T15:48:47.003 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.994+0000 7fe095183640 1 -- 192.168.123.101:0/3879974491 learned_addr learned my addr 192.168.123.101:0/3879974491 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:47.003 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.998+0000 7fe095183640 1 -- 192.168.123.101:0/3879974491 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe0901a5bb0 con 0x7fe090101270 2026-03-09T15:48:47.003 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.998+0000 7fe095183640 1 --2- 192.168.123.101:0/3879974491 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe090101270 0x7fe0901a5670 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fe0780059c0 tx=0x7fe078002c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:47.006 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.998+0000 7fe0867fc640 1 -- 192.168.123.101:0/3879974491 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe078038470 con 0x7fe090101270 2026-03-09T15:48:47.006 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.998+0000 7fe09740e640 1 -- 192.168.123.101:0/3879974491 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe0901a5e40 con 0x7fe090101270 2026-03-09T15:48:47.006 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.998+0000 7fe09740e640 1 -- 192.168.123.101:0/3879974491 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe09019f7f0 con 0x7fe090101270 2026-03-09T15:48:47.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.998+0000 7fe0867fc640 1 -- 192.168.123.101:0/3879974491 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fe078033020 con 0x7fe090101270 2026-03-09T15:48:47.010 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.998+0000 7fe0867fc640 1 -- 192.168.123.101:0/3879974491 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe078038de0 con 0x7fe090101270 2026-03-09T15:48:47.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.998+0000 7fe0867fc640 1 -- 192.168.123.101:0/3879974491 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7fe07804a410 con 0x7fe090101270 2026-03-09T15:48:47.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.998+0000 7fe0867fc640 1 --2- 192.168.123.101:0/3879974491 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fe06403dd30 0x7fe0640401f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:47.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.998+0000 7fe0867fc640 1 -- 192.168.123.101:0/3879974491 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7fe078077a10 con 0x7fe090101270 2026-03-09T15:48:47.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:46.998+0000 7fe09740e640 1 -- 192.168.123.101:0/3879974491 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe058005180 con 0x7fe090101270 2026-03-09T15:48:47.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.002+0000 7fe0867fc640 1 -- 192.168.123.101:0/3879974491 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe078048310 con 0x7fe090101270 2026-03-09T15:48:47.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.002+0000 7fe094982640 1 --2- 192.168.123.101:0/3879974491 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fe06403dd30 0x7fe0640401f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:47.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.002+0000 7fe094982640 1 --2- 192.168.123.101:0/3879974491 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fe06403dd30 0x7fe0640401f0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fe0800099c0 tx=0x7fe080006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:47.111 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.102+0000 7fe09740e640 1 -- 192.168.123.101:0/3879974491 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7fe058002bf0 con 0x7fe06403dd30 2026-03-09T15:48:47.112 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.106+0000 7fe0867fc640 1 -- 192.168.123.101:0/3879974491 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+155 (secure 0 0 0) 0x7fe058002bf0 con 0x7fe06403dd30 2026-03-09T15:48:47.113 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:48:47.113 INFO:teuthology.orchestra.run.vm01.stdout:[{"addr": "192.168.123.101", "hostname": "vm01", "labels": [], "status": ""}, {"addr": "192.168.123.109", "hostname": "vm09", "labels": [], "status": ""}] 2026-03-09T15:48:47.115 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.106+0000 7fe09740e640 1 -- 192.168.123.101:0/3879974491 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fe06403dd30 msgr2=0x7fe0640401f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:47.115 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.106+0000 7fe09740e640 1 --2- 192.168.123.101:0/3879974491 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fe06403dd30 0x7fe0640401f0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fe0800099c0 tx=0x7fe080006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:47.116 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.110+0000 7fe09740e640 1 -- 192.168.123.101:0/3879974491 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe090101270 msgr2=0x7fe0901a5670 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:47.116 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.110+0000 7fe09740e640 1 --2- 192.168.123.101:0/3879974491 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe090101270 0x7fe0901a5670 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fe0780059c0 tx=0x7fe078002c80 comp rx=0 tx=0).stop 2026-03-09T15:48:47.116 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.110+0000 7fe09740e640 1 -- 192.168.123.101:0/3879974491 shutdown_connections 2026-03-09T15:48:47.116 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.110+0000 7fe09740e640 1 --2- 192.168.123.101:0/3879974491 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fe06403dd30 0x7fe0640401f0 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:47.116 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.110+0000 7fe09740e640 1 --2- 192.168.123.101:0/3879974491 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe090101270 0x7fe0901a5670 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:47.116 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.110+0000 7fe09740e640 1 -- 192.168.123.101:0/3879974491 >> 192.168.123.101:0/3879974491 conn(0x7fe0900fcec0 msgr2=0x7fe0900fd8a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:47.116 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.110+0000 7fe09740e640 1 -- 192.168.123.101:0/3879974491 shutdown_connections 2026-03-09T15:48:47.116 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:47.110+0000 7fe09740e640 1 -- 192.168.123.101:0/3879974491 wait complete. 
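With both hosts in the inventory, the task next pins the CRUSH tunables profile to default before any OSDs exist; the exact command appears on the following line, and further down the monitor acknowledges it with "adjusted tunables profile to default". A short sketch of applying the profile and then inspecting the result; ceph osd crush show-tunables is not issued by this run and is included only as an illustrative follow-up check:

    # Same call as the task issues below.
    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- \
        ceph osd crush tunables default

    # Dump the active tunables afterwards (illustrative check, not part of this run).
    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- \
        ceph osd crush show-tunables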
2026-03-09T15:48:47.176 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-09T15:48:47.177 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd crush tunables default 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: cluster 2026-03-09T15:48:47.045794+0000 mgr.y (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: cluster 2026-03-09T15:48:47.045794+0000 mgr.y (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.109767+0000 mgr.y (mgr.14150) 21 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.109767+0000 mgr.y (mgr.14150) 21 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.308382+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.308382+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.310287+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.310287+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.312804+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.312804+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.314734+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.314734+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.315300+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": 
"osd_memory_target"}]: dispatch 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.315300+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.316216+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.316216+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.316927+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.316927+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: cephadm 2026-03-09T15:48:47.317725+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: cephadm 2026-03-09T15:48:47.317725+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: cephadm 2026-03-09T15:48:47.350884+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: cephadm 2026-03-09T15:48:47.350884+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: cephadm 2026-03-09T15:48:47.380467+0000 mgr.y (mgr.14150) 24 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:48.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: cephadm 2026-03-09T15:48:47.380467+0000 mgr.y (mgr.14150) 24 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:48.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: cephadm 2026-03-09T15:48:47.412650+0000 mgr.y (mgr.14150) 25 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:48.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: cephadm 2026-03-09T15:48:47.412650+0000 mgr.y (mgr.14150) 25 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:48.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 
2026-03-09T15:48:47.452388+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.452388+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.455838+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.455838+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.459150+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:48.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:48 vm01 bash[20728]: audit 2026-03-09T15:48:47.459150+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:50.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:50 vm01 bash[20728]: cluster 2026-03-09T15:48:49.046093+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:50.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:50 vm01 bash[20728]: cluster 2026-03-09T15:48:49.046093+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:50.855 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.a/config 2026-03-09T15:48:51.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa185d9640 1 -- 192.168.123.101:0/1266997775 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7faa1010a650 msgr2=0x7faa1010aa30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:51.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa185d9640 1 --2- 192.168.123.101:0/1266997775 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7faa1010a650 0x7faa1010aa30 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7faa04009a00 tx=0x7faa0402f310 comp rx=0 tx=0).stop 2026-03-09T15:48:51.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa185d9640 1 -- 192.168.123.101:0/1266997775 shutdown_connections 2026-03-09T15:48:51.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa185d9640 1 --2- 192.168.123.101:0/1266997775 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7faa1010a650 0x7faa1010aa30 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:51.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa185d9640 1 -- 192.168.123.101:0/1266997775 >> 192.168.123.101:0/1266997775 conn(0x7faa10100280 msgr2=0x7faa101026a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:51.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa185d9640 1 -- 192.168.123.101:0/1266997775 shutdown_connections 2026-03-09T15:48:51.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa185d9640 1 -- 192.168.123.101:0/1266997775 wait 
complete. 2026-03-09T15:48:51.010 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa185d9640 1 Processor -- start 2026-03-09T15:48:51.011 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa185d9640 1 -- start start 2026-03-09T15:48:51.011 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa185d9640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7faa1010a650 0x7faa1019b450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:51.011 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa185d9640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7faa1010df70 con 0x7faa1010a650 2026-03-09T15:48:51.011 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.002+0000 7faa1634e640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7faa1010a650 0x7faa1019b450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:51.011 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7faa1634e640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7faa1010a650 0x7faa1019b450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:36636/0 (socket says 192.168.123.101:36636) 2026-03-09T15:48:51.011 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7faa1634e640 1 -- 192.168.123.101:0/3012074737 learned_addr learned my addr 192.168.123.101:0/3012074737 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:48:51.011 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7faa1634e640 1 -- 192.168.123.101:0/3012074737 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7faa1019b990 con 0x7faa1010a650 2026-03-09T15:48:51.012 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7faa1634e640 1 --2- 192.168.123.101:0/3012074737 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7faa1010a650 0x7faa1019b450 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7faa04002410 tx=0x7faa04002c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:51.014 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7fa9ff7fe640 1 -- 192.168.123.101:0/3012074737 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7faa04038470 con 0x7faa1010a650 2026-03-09T15:48:51.014 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7faa185d9640 1 -- 192.168.123.101:0/3012074737 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7faa1019bc20 con 0x7faa1010a650 2026-03-09T15:48:51.014 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7faa185d9640 1 -- 192.168.123.101:0/3012074737 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7faa1019c080 con 0x7faa1010a650 2026-03-09T15:48:51.014 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7fa9ff7fe640 1 -- 192.168.123.101:0/3012074737 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7faa04033020 con 0x7faa1010a650 2026-03-09T15:48:51.014 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7fa9ff7fe640 1 -- 192.168.123.101:0/3012074737 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7faa0404a410 con 0x7faa1010a650 2026-03-09T15:48:51.014 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7fa9ff7fe640 1 -- 192.168.123.101:0/3012074737 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7faa0404a5b0 con 0x7faa1010a650 2026-03-09T15:48:51.018 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7fa9ff7fe640 1 --2- 192.168.123.101:0/3012074737 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fa9dc046740 0x7fa9dc048c00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:51.018 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7fa9ff7fe640 1 -- 192.168.123.101:0/3012074737 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7faa04077ab0 con 0x7faa1010a650 2026-03-09T15:48:51.018 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7faa185d9640 1 -- 192.168.123.101:0/3012074737 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa9e0005180 con 0x7faa1010a650 2026-03-09T15:48:51.018 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.006+0000 7faa15b4d640 1 --2- 192.168.123.101:0/3012074737 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fa9dc046740 0x7fa9dc048c00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:51.018 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.010+0000 7faa15b4d640 1 --2- 192.168.123.101:0/3012074737 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fa9dc046740 0x7fa9dc048c00 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7faa000099c0 tx=0x7faa00006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:51.019 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.010+0000 7fa9ff7fe640 1 -- 192.168.123.101:0/3012074737 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7faa04048310 con 0x7faa1010a650 2026-03-09T15:48:51.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.110+0000 7faa185d9640 1 -- 192.168.123.101:0/3012074737 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd crush tunables", "profile": "default"} v 0) -- 0x7fa9e0005470 con 0x7faa1010a650 2026-03-09T15:48:51.319 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.310+0000 7fa9ff7fe640 1 -- 192.168.123.101:0/3012074737 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd crush tunables", "profile": "default"}]=0 adjusted tunables profile to default v4) ==== 124+0+0 (secure 0 0 0) 0x7faa040373d0 con 0x7faa1010a650 2026-03-09T15:48:51.319 INFO:teuthology.orchestra.run.vm01.stderr:adjusted tunables profile to default 2026-03-09T15:48:51.322 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.314+0000 7faa185d9640 1 -- 192.168.123.101:0/3012074737 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] 
conn(0x7fa9dc046740 msgr2=0x7fa9dc048c00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:51.322 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.314+0000 7faa185d9640 1 --2- 192.168.123.101:0/3012074737 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fa9dc046740 0x7fa9dc048c00 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7faa000099c0 tx=0x7faa00006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:51.322 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.314+0000 7faa185d9640 1 -- 192.168.123.101:0/3012074737 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7faa1010a650 msgr2=0x7faa1019b450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:51.322 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.314+0000 7faa185d9640 1 --2- 192.168.123.101:0/3012074737 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7faa1010a650 0x7faa1019b450 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7faa04002410 tx=0x7faa04002c80 comp rx=0 tx=0).stop 2026-03-09T15:48:51.323 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.314+0000 7faa185d9640 1 -- 192.168.123.101:0/3012074737 shutdown_connections 2026-03-09T15:48:51.323 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.314+0000 7faa185d9640 1 --2- 192.168.123.101:0/3012074737 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fa9dc046740 0x7fa9dc048c00 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:51.323 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.314+0000 7faa185d9640 1 --2- 192.168.123.101:0/3012074737 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7faa1010a650 0x7faa1019b450 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:51.323 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.314+0000 7faa185d9640 1 -- 192.168.123.101:0/3012074737 >> 192.168.123.101:0/3012074737 conn(0x7faa10100280 msgr2=0x7faa10101c90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:51.323 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.314+0000 7faa185d9640 1 -- 192.168.123.101:0/3012074737 shutdown_connections 2026-03-09T15:48:51.323 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:48:51.314+0000 7faa185d9640 1 -- 192.168.123.101:0/3012074737 wait complete. 2026-03-09T15:48:51.378 INFO:tasks.cephadm:Adding mon.a on vm01 2026-03-09T15:48:51.378 INFO:tasks.cephadm:Adding mon.c on vm01 2026-03-09T15:48:51.379 INFO:tasks.cephadm:Adding mon.b on vm09 2026-03-09T15:48:51.379 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch apply mon '3;vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b' 2026-03-09T15:48:51.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:51 vm01 bash[20728]: audit 2026-03-09T15:48:51.116207+0000 mon.a (mon.0) 124 : audit [INF] from='client.? 
192.168.123.101:0/3012074737' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T15:48:51.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:51 vm01 bash[20728]: audit 2026-03-09T15:48:51.116207+0000 mon.a (mon.0) 124 : audit [INF] from='client.? 192.168.123.101:0/3012074737' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T15:48:52.505 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:52.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.647+0000 7facb20da640 1 -- 192.168.123.109:0/1939592987 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7facac108510 msgr2=0x7facac1088f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:52.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.647+0000 7facb20da640 1 --2- 192.168.123.109:0/1939592987 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7facac108510 0x7facac1088f0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7fac9c0099b0 tx=0x7fac9c02f2b0 comp rx=0 tx=0).stop 2026-03-09T15:48:52.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.647+0000 7facb20da640 1 -- 192.168.123.109:0/1939592987 shutdown_connections 2026-03-09T15:48:52.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.647+0000 7facb20da640 1 --2- 192.168.123.109:0/1939592987 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7facac108510 0x7facac1088f0 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:52.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.647+0000 7facb20da640 1 -- 192.168.123.109:0/1939592987 >> 192.168.123.109:0/1939592987 conn(0x7facac0fc1d0 msgr2=0x7facac0fe5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:52.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.647+0000 7facb20da640 1 -- 192.168.123.109:0/1939592987 shutdown_connections 2026-03-09T15:48:52.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.647+0000 7facb20da640 1 -- 192.168.123.109:0/1939592987 wait complete. 
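
Note: the "ceph orch apply mon" invocation above passes cephadm a placement spec of the form "<count>;<host>:<addr>=<name>;...". As an illustration only (this is not the teuthology task's actual code, and the helper name is hypothetical), a minimal Python sketch of how such a spec string can be assembled from the hosts and addresses used in this run:

    # Illustration only: hypothetical helper, not teuthology's actual code.
    # Builds a cephadm mon placement spec of the form
    # "<count>;<host>:<addr>=<name>;..." matching the string passed to
    # "ceph orch apply mon" above.
    def build_mon_placement(mons):
        parts = ["%s:%s=%s" % (host, addr, name) for host, addr, name in mons]
        return ";".join([str(len(mons))] + parts)

    spec = build_mon_placement([
        ("vm01", "192.168.123.101", "a"),
        ("vm01", "[v2:192.168.123.101:3301,v1:192.168.123.101:6790]", "c"),
        ("vm09", "192.168.123.109", "b"),
    ])
    # spec == "3;vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b"
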
2026-03-09T15:48:52.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.647+0000 7facb20da640 1 Processor -- start 2026-03-09T15:48:52.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facb20da640 1 -- start start 2026-03-09T15:48:52.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facb20da640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7facac108510 0x7facac1991b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:52.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facb20da640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7facac10bcd0 con 0x7facac108510 2026-03-09T15:48:52.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facab7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7facac108510 0x7facac1991b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:52.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facab7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7facac108510 0x7facac1991b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.109:56646/0 (socket says 192.168.123.109:56646) 2026-03-09T15:48:52.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facab7fe640 1 -- 192.168.123.109:0/3296993125 learned_addr learned my addr 192.168.123.109:0/3296993125 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:48:52.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facab7fe640 1 -- 192.168.123.109:0/3296993125 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7facac1996f0 con 0x7facac108510 2026-03-09T15:48:52.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facab7fe640 1 --2- 192.168.123.109:0/3296993125 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7facac108510 0x7facac1991b0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fac9c004290 tx=0x7fac9c0042c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:52.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7faca8ff9640 1 -- 192.168.123.109:0/3296993125 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fac9c038470 con 0x7facac108510 2026-03-09T15:48:52.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facb20da640 1 -- 192.168.123.109:0/3296993125 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7facac199980 con 0x7facac108510 2026-03-09T15:48:52.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facb20da640 1 -- 192.168.123.109:0/3296993125 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7facac19d9e0 con 0x7facac108510 2026-03-09T15:48:52.656 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7faca8ff9640 1 -- 192.168.123.109:0/3296993125 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fac9c046070 con 0x7facac108510 2026-03-09T15:48:52.656 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7faca8ff9640 1 -- 192.168.123.109:0/3296993125 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fac9c041400 con 0x7facac108510 2026-03-09T15:48:52.656 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7faca8ff9640 1 -- 192.168.123.109:0/3296993125 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7fac9c038610 con 0x7facac108510 2026-03-09T15:48:52.656 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facb20da640 1 -- 192.168.123.109:0/3296993125 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fac70005180 con 0x7facac108510 2026-03-09T15:48:52.657 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7faca8ff9640 1 --2- 192.168.123.109:0/3296993125 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fac8003d970 0x7fac8003fe30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:52.657 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7faca8ff9640 1 -- 192.168.123.109:0/3296993125 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7fac9c076d30 con 0x7facac108510 2026-03-09T15:48:52.657 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facaaffd640 1 --2- 192.168.123.109:0/3296993125 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fac8003d970 0x7fac8003fe30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:52.657 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.651+0000 7facaaffd640 1 --2- 192.168.123.109:0/3296993125 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fac8003d970 0x7fac8003fe30 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7fac980099c0 tx=0x7fac98006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:52.660 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.655+0000 7faca8ff9640 1 -- 192.168.123.109:0/3296993125 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fac9c007c50 con 0x7facac108510 2026-03-09T15:48:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:52 vm01 bash[20728]: cluster 2026-03-09T15:48:51.046385+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:52 vm01 bash[20728]: cluster 2026-03-09T15:48:51.046385+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:52 vm01 bash[20728]: audit 2026-03-09T15:48:51.315982+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.101:0/3012074737' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T15:48:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:52 vm01 bash[20728]: audit 2026-03-09T15:48:51.315982+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 
192.168.123.101:0/3012074737' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T15:48:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:52 vm01 bash[20728]: cluster 2026-03-09T15:48:51.317849+0000 mon.a (mon.0) 126 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:48:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:52 vm01 bash[20728]: cluster 2026-03-09T15:48:51.317849+0000 mon.a (mon.0) 126 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:48:52.762 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.755+0000 7facb20da640 1 -- 192.168.123.109:0/3296993125 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "placement": "3;vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b", "target": ["mon-mgr", ""]}) -- 0x7fac70002cc0 con 0x7fac8003d970 2026-03-09T15:48:52.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.767+0000 7faca8ff9640 1 -- 192.168.123.109:0/3296993125 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7fac70002cc0 con 0x7fac8003d970 2026-03-09T15:48:52.769 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mon update... 2026-03-09T15:48:52.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.767+0000 7facb20da640 1 -- 192.168.123.109:0/3296993125 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fac8003d970 msgr2=0x7fac8003fe30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:52.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.767+0000 7facb20da640 1 --2- 192.168.123.109:0/3296993125 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fac8003d970 0x7fac8003fe30 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7fac980099c0 tx=0x7fac98006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:52.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.767+0000 7facb20da640 1 -- 192.168.123.109:0/3296993125 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7facac108510 msgr2=0x7facac1991b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:52.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.767+0000 7facb20da640 1 --2- 192.168.123.109:0/3296993125 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7facac108510 0x7facac1991b0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fac9c004290 tx=0x7fac9c0042c0 comp rx=0 tx=0).stop 2026-03-09T15:48:52.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.767+0000 7facb20da640 1 -- 192.168.123.109:0/3296993125 shutdown_connections 2026-03-09T15:48:52.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.767+0000 7facb20da640 1 --2- 192.168.123.109:0/3296993125 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fac8003d970 0x7fac8003fe30 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:52.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.767+0000 7facb20da640 1 --2- 192.168.123.109:0/3296993125 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7facac108510 0x7facac1991b0 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:52.772 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.767+0000 7facb20da640 1 -- 192.168.123.109:0/3296993125 >> 192.168.123.109:0/3296993125 conn(0x7facac0fc1d0 msgr2=0x7facac0fde50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:52.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.767+0000 7facb20da640 1 -- 192.168.123.109:0/3296993125 shutdown_connections 2026-03-09T15:48:52.773 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:52.767+0000 7facb20da640 1 -- 192.168.123.109:0/3296993125 wait complete. 2026-03-09T15:48:52.845 DEBUG:teuthology.orchestra.run.vm01:mon.c> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.c.service 2026-03-09T15:48:52.847 DEBUG:teuthology.orchestra.run.vm09:mon.b> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.b.service 2026-03-09T15:48:52.848 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-09T15:48:52.848 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph mon dump -f json 2026-03-09T15:48:54.010 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:48:54.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.760360+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.760360+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: cephadm 2026-03-09T15:48:52.761587+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b;count:3 2026-03-09T15:48:54.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: cephadm 2026-03-09T15:48:52.761587+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b;count:3 2026-03-09T15:48:54.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.765368+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:54.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.765368+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:54.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.766539+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.766539+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.767997+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:54.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.767997+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:54.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.768658+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:54.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.768658+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:54.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.773022+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:54.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.773022+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:54.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.775070+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:48:54.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.775070+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:48:54.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.775786+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:54.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: audit 2026-03-09T15:48:52.775786+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:54.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: cephadm 2026-03-09T15:48:52.776597+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm09 2026-03-09T15:48:54.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: cephadm 2026-03-09T15:48:52.776597+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm09 2026-03-09T15:48:54.184 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: cluster 2026-03-09T15:48:53.046738+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:54.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:53 vm01 bash[20728]: cluster 2026-03-09T15:48:53.046738+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:54.376 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:48:54.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7b0ec640 1 -- 192.168.123.109:0/3931752466 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5d740739f0 msgr2=0x7f5d74073dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:54.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7b0ec640 1 --2- 192.168.123.109:0/3931752466 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5d740739f0 0x7f5d74073dd0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f5d6c00b0a0 tx=0x7f5d6c02f530 comp rx=0 tx=0).stop 2026-03-09T15:48:54.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7b0ec640 1 -- 192.168.123.109:0/3931752466 shutdown_connections 2026-03-09T15:48:54.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7b0ec640 1 --2- 192.168.123.109:0/3931752466 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5d740739f0 0x7f5d74073dd0 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:54.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7b0ec640 1 -- 192.168.123.109:0/3931752466 >> 192.168.123.109:0/3931752466 conn(0x7f5d7406d270 msgr2=0x7f5d7406d680 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:54.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7b0ec640 1 -- 192.168.123.109:0/3931752466 shutdown_connections 2026-03-09T15:48:54.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7b0ec640 1 -- 192.168.123.109:0/3931752466 wait complete. 
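
Note: the "Waiting for 3 mons in monmap..." step above polls the cluster by running "ceph mon dump -f json" through cephadm shell until all three monitors appear. A minimal sketch of such a poll loop, assuming the monmap JSON carries its monitors under a "mons" list (the loop itself is illustrative, not the actual tasks.cephadm code):

    # Illustrative poll loop (not the actual tasks.cephadm implementation).
    # Assumes "ceph mon dump -f json" returns a monmap with a "mons" list.
    import json
    import subprocess
    import time

    FSID = "397fadc0-1bcf-11f1-8481-edc1430c2c03"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    def mons_in_monmap():
        out = subprocess.check_output([
            "sudo", "cephadm", "--image", IMAGE, "shell",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "--fsid", FSID, "--",
            "ceph", "mon", "dump", "-f", "json",
        ])
        return len(json.loads(out).get("mons", []))

    while mons_in_monmap() < 3:   # waiting for mon.a, mon.b and mon.c
        time.sleep(5)
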
2026-03-09T15:48:54.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7b0ec640 1 Processor -- start 2026-03-09T15:48:54.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7b0ec640 1 -- start start 2026-03-09T15:48:54.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7b0ec640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5d740739f0 0x7f5d741177d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:54.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7b0ec640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f5d7411ad70 con 0x7f5d740739f0 2026-03-09T15:48:54.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7a0ea640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5d740739f0 0x7f5d741177d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:54.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7a0ea640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5d740739f0 0x7f5d741177d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.109:56668/0 (socket says 192.168.123.109:56668) 2026-03-09T15:48:54.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7a0ea640 1 -- 192.168.123.109:0/1660780692 learned_addr learned my addr 192.168.123.109:0/1660780692 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:48:54.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.463+0000 7f5d7a0ea640 1 -- 192.168.123.109:0/1660780692 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5d74117d10 con 0x7f5d740739f0 2026-03-09T15:48:54.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.467+0000 7f5d7a0ea640 1 --2- 192.168.123.109:0/1660780692 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5d740739f0 0x7f5d741177d0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f5d6c0045c0 tx=0x7f5d6c0045f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:54.472 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.467+0000 7f5d637fe640 1 -- 192.168.123.109:0/1660780692 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5d6c047070 con 0x7f5d740739f0 2026-03-09T15:48:54.472 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.467+0000 7f5d637fe640 1 -- 192.168.123.109:0/1660780692 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f5d6c007d90 con 0x7f5d740739f0 2026-03-09T15:48:54.472 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.467+0000 7f5d637fe640 1 -- 192.168.123.109:0/1660780692 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5d6c042ca0 con 0x7f5d740739f0 2026-03-09T15:48:54.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.467+0000 7f5d7b0ec640 1 -- 192.168.123.109:0/1660780692 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f5d74117fa0 con 0x7f5d740739f0 2026-03-09T15:48:54.473 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.467+0000 7f5d7b0ec640 1 -- 192.168.123.109:0/1660780692 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5d74118460 con 0x7f5d740739f0 2026-03-09T15:48:54.475 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.467+0000 7f5d7b0ec640 1 -- 192.168.123.109:0/1660780692 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5d3c005180 con 0x7f5d740739f0 2026-03-09T15:48:54.476 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.471+0000 7f5d637fe640 1 -- 192.168.123.109:0/1660780692 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f5d6c007950 con 0x7f5d740739f0 2026-03-09T15:48:54.476 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.471+0000 7f5d637fe640 1 --2- 192.168.123.109:0/1660780692 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5d5003da10 0x7f5d5003fed0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:54.476 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.471+0000 7f5d637fe640 1 -- 192.168.123.109:0/1660780692 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7f5d6c03e030 con 0x7f5d740739f0 2026-03-09T15:48:54.480 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.475+0000 7f5d798e9640 1 --2- 192.168.123.109:0/1660780692 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5d5003da10 0x7f5d5003fed0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:54.480 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.475+0000 7f5d637fe640 1 -- 192.168.123.109:0/1660780692 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f5d6c041e10 con 0x7f5d740739f0 2026-03-09T15:48:54.491 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.487+0000 7f5d798e9640 1 --2- 192.168.123.109:0/1660780692 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5d5003da10 0x7f5d5003fed0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f5d680099c0 tx=0x7f5d68006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 systemd[1]: Started Ceph mon.b for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 
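
Note: mon.b is now running under the systemd unit shown in the journalctl commands above; cephadm names these units ceph-<fsid>@<daemon>.service. A one-line sketch of that naming pattern (hypothetical helper, shown only to make the pattern explicit):

    # Hypothetical helper illustrating the unit-name pattern used by the
    # "journalctl -u ceph-<fsid>@<daemon>.service" commands in this log.
    FSID = "397fadc0-1bcf-11f1-8481-edc1430c2c03"

    def unit_name(fsid, daemon):
        return "ceph-%s@%s.service" % (fsid, daemon)

    assert unit_name(FSID, "mon.b") == "ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.b.service"
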
2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 6 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 0 pidfile_write: ignore empty --pid-file 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 0 load: jerasure load: lrc 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Git sha 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: DB SUMMARY 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: DB Session ID: VNBIIQR0TQX9CX70GKLO 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: CURRENT file: CURRENT 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-b/store.db dir, Total Num: 0, files: 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-b/store.db: 000004.log size: 511 ; 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 
bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.env: 0x5650d4407dc0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.info_log: 0x5650e5c32700 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.statistics: (nil) 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.use_fsync: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T15:48:54.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.db_log_dir: 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.wal_dir: 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.write_buffer_manager: 0x5650e5c37900 2026-03-09T15:48:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 
rocksdb: Options.enable_pipelined_write: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.unordered_write: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.row_cache: None 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.wal_filter: None 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.wal_compression: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: 
debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T15:48:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_open_files: -1 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: 
Options.bytes_per_sync: 0 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Compression algorithms supported: 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: kZSTD supported: 0 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 
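
Note: the RocksDB option dump for mon.b's store reports sizes in raw bytes (for example max_manifest_file_size: 1073741824 and delayed_write_rate: 16777216 above, and the 536870912-byte block cache capacity just below). A small sketch for converting those values to human-readable units when reading such dumps (the byte values are copied from this log; the helper itself is illustrative):

    # Illustrative helper for reading the byte-valued RocksDB options in this dump.
    def human_bytes(n):
        for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
            if n < 1024 or unit == "TiB":
                return "%g %s" % (n, unit)
            n /= 1024.0

    print(human_bytes(1073741824))  # max_manifest_file_size -> "1 GiB"
    print(human_bytes(16777216))    # delayed_write_rate     -> "16 MiB"
    print(human_bytes(536870912))   # block cache capacity   -> "512 MiB"
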
2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.merge_operator: 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_filter: None 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650e5c32640) 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cache_index_and_filter_blocks: 1 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T15:48:54.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: pin_top_level_index_and_filter: 1 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: index_type: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: data_block_index_type: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: index_shortening: 1 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: checksum: 4 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: no_block_cache: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: block_cache: 0x5650e5c59350 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: block_cache_name: BinnedLRUCache 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: block_cache_options: 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: capacity : 536870912 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: num_shard_bits : 4 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: strict_capacity_limit : 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: high_pri_pool_ratio: 0.000 2026-03-09T15:48:54.635 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: block_cache_compressed: (nil) 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: persistent_cache: (nil) 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: block_size: 4096 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: block_size_deviation: 10 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: block_restart_interval: 16 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: index_block_restart_interval: 1 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: metadata_block_size: 4096 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: partition_filters: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: use_delta_encoding: 1 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: filter_policy: bloomfilter 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: whole_key_filtering: 1 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: verify_compression: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: read_amp_bytes_per_bit: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: format_version: 5 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: enable_index_compression: 1 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: block_align: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: max_auto_readahead_size: 262144 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: prepopulate_block_cache: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: initial_auto_readahead_size: 8192 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: num_file_reads_for_auto_readahead: 2 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compression: NoCompression 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.num_levels: 7 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T15:48:54.635 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T15:48:54.636 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 
7fb59b1d2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 
rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T15:48:54.636 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T15:48:54.637 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.519+0000 7fb59b1d2d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.539+0000 7fb59b1d2d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.539+0000 7fb59b1d2d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.539+0000 7fb59b1d2d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8553bfe6-e76e-4de7-aa7e-2e302f12b058 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.539+0000 7fb59b1d2d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773071334543279, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.539+0000 7fb59b1d2d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-09T15:48:54.637 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.539+0000 7fb59b1d2d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773071334544816, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773071334, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8553bfe6-e76e-4de7-aa7e-2e302f12b058", "db_session_id": "VNBIIQR0TQX9CX70GKLO", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.539+0000 7fb59b1d2d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773071334544892, "job": 1, "event": "recovery_finished"} 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.539+0000 7fb59b1d2d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.543+0000 7fb59b1d2d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-b/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.543+0000 7fb59b1d2d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5650e5c5ae00 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.543+0000 7fb59b1d2d80 4 rocksdb: DB pointer 0x5650e5d70000 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.543+0000 7fb590f9c640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.543+0000 7fb590f9c640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: ** DB Stats ** 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Cumulative writes: 0 
writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: ** Compaction Stats [default] ** 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: ** Compaction Stats [default] ** 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Flush(GB): cumulative 0.000, interval 0.000 
2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Cumulative compaction: 0.00 GB write, 0.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Interval compaction: 0.00 GB write, 0.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Block cache BinnedLRUCache@0x5650e5c59350#6 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: ** File Read Latency Histogram By Level [default] ** 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.543+0000 7fb59b1d2d80 0 mon.b does not exist in monmap, will attempt to join an existing cluster 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.543+0000 7fb59b1d2d80 0 using public_addr v2:192.168.123.109:0/0 -> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.547+0000 7fb59b1d2d80 0 starting mon.b rank -1 at public addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] at bind addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon_data /var/lib/ceph/mon/ceph-b fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:54.637 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.547+0000 7fb59b1d2d80 1 mon.b@-1(???) 
e0 preinit fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.563+0000 7fb593fa2640 0 mon.b@-1(synchronizing).mds e1 new map 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.563+0000 7fb593fa2640 0 mon.b@-1(synchronizing).mds e1 print_map 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: e1 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: btime 2026-03-09T15:48:02:072481+0000 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: legacy client fscid: -1 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: No filesystems configured 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.567+0000 7fb593fa2640 1 mon.b@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.567+0000 7fb593fa2640 1 mon.b@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.567+0000 7fb593fa2640 1 mon.b@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.567+0000 7fb593fa2640 1 mon.b@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.567+0000 7fb593fa2640 1 mon.b@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.567+0000 7fb593fa2640 1 mon.b@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.567+0000 7fb593fa2640 0 mon.b@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.567+0000 7fb593fa2640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.567+0000 7fb593fa2640 0 mon.b@-1(synchronizing).osd e4 
crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.567+0000 7fb593fa2640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:02.073092+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:02.073092+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:02.064502+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:02.064502+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065526+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065526+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065569+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065569+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065574+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065574+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065577+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065577+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065585+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065585+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065588+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 
2026-03-09T15:48:03.065588+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065592+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065592+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065595+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065595+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065821+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065821+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065832+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.065832+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.066377+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:03.066377+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:03.126268+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.101:0/1084004242' entity='client.admin' 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:03.126268+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.101:0/1084004242' entity='client.admin' 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:03.777206+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.101:0/3214342441' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:03.777206+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.101:0/3214342441' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:06.033996+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.101:0/3143822304' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:06.033996+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.101:0/3143822304' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:07.145624+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:07.145624+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:07.152301+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00676112s) 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:07.152301+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00676112s) 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.155235+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.155235+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.155286+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.155286+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:54.638 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.155336+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.155336+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.157201+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.157201+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 
2026-03-09T15:48:07.157308+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.157308+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:07.164365+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:07.164365+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.186896+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.186896+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.190851+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.190851+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.192973+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.192973+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.197636+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.197636+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.207147+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' entity='mgr.y' 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:07.207147+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.101:0/1838944567' 
entity='mgr.y' 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:08.162817+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01729s) 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:08.162817+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01729s) 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:08.491718+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.101:0/1471219276' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:08.491718+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.101:0/1471219276' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:08.772597+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.101:0/2712992155' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:08.772597+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.101:0/2712992155' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:09.096742+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.101:0/2808265687' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:09.096742+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.101:0/2808265687' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:09.214067+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.101:0/2808265687' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:09.214067+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.101:0/2808265687' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:09.217237+0000 mon.a (mon.0) 33 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:09.217237+0000 mon.a (mon.0) 33 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:09.657902+0000 mon.a (mon.0) 34 : audit [DBG] from='client.? 
192.168.123.101:0/836432097' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:09.657902+0000 mon.a (mon.0) 34 : audit [DBG] from='client.? 192.168.123.101:0/836432097' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:12.689730+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:12.689730+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:12.690181+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:12.690181+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:12.695330+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:12.695330+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:12.695459+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e5: y(active, starting, since 0.00540697s) 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:12.695459+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e5: y(active, starting, since 0.00540697s) 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.698260+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.698260+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.699474+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.699474+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.700523+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 
2026-03-09T15:48:12.700523+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.700844+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.700844+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.701158+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.701158+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:12.708158+0000 mon.a (mon.0) 44 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:12.708158+0000 mon.a (mon.0) 44 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.718877+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.718877+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.722479+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.722479+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.737496+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.737496+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.738758+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 
2026-03-09T15:48:12.738758+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:12.716436+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:12.716436+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.748922+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.748922+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.752175+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:12.752175+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:13.239987+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:13.239987+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:13.242976+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:13.242976+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:13.699230+0000 mon.a (mon.0) 53 : cluster [DBG] mgrmap e6: y(active, since 1.00918s) 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:13.699230+0000 mon.a (mon.0) 53 : cluster [DBG] mgrmap e6: y(active, since 1.00918s) 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:13.701716+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: 
dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:13.701716+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:13.706313+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:13.706313+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.064286+0000 mgr.y (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.064286+0000 mgr.y (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.068160+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.068160+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.078272+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.078272+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.588032+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.588032+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.648209+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.648209+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 
bash[22983]: audit 2026-03-09T15:48:14.649952+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.649952+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.365811+0000 mgr.y (mgr.14118) 5 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.365811+0000 mgr.y (mgr.14118) 5 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.375549+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Bus STARTING 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.375549+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Bus STARTING 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.476978+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.476978+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.587385+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.587385+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.587497+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Bus STARTED 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.587497+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Bus STARTED 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.587883+0000 mgr.y (mgr.14118) 10 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Client ('192.168.123.101', 47824) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.587883+0000 mgr.y (mgr.14118) 10 : cephadm [INF] [09/Mar/2026:15:48:14] ENGINE Client ('192.168.123.101', 47824) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 
2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.625431+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.625431+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.625592+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:14.625592+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.937553+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:14.937553+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:15.223723+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:15.223723+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:15.656726+0000 mon.a (mon.0) 59 : cluster [DBG] mgrmap e7: y(active, since 2s) 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:15.656726+0000 mon.a (mon.0) 59 : cluster [DBG] mgrmap e7: y(active, since 2s) 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:15.831307+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm01 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:15.831307+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm01 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.196453+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.196453+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.641 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:17.196972+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm01 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:17.196972+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm01 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.197229+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.197229+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.542266+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.542266+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.845304+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.845304+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:18.143742+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.101:0/3698565340' entity='client.admin' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:18.143742+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 
192.168.123.101:0/3698565340' entity='client.admin' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.536807+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.536807+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:17.537890+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:17.537890+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.841521+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:17.841521+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:17.842475+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:17.842475+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:18.433632+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.101:0/4031595610' entity='client.admin' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:18.433632+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.101:0/4031595610' entity='client.admin' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:18.766175+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.101:0/179694937' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:18.766175+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 
192.168.123.101:0/179694937' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:18.824749+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:18.824749+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:19.141222+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:19.141222+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:19.435111+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.101:0/179694937' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:19.435111+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.101:0/179694937' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:19.442122+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: y(active, since 6s) 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:19.442122+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: y(active, since 6s) 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:19.836256+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 192.168.123.101:0/1250862618' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:19.836256+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 
192.168.123.101:0/1250862618' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:23.033962+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:23.033962+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:23.034379+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:23.034379+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:23.039775+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:23.039775+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:23.039905+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: y(active, starting, since 0.00562542s) 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:23.039905+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: y(active, starting, since 0.00562542s) 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.041929+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.041929+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.042547+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.042547+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.043913+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.043913+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.044492+0000 
mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:54.641 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.044492+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.044988+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.044988+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:23.052265+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:23.052265+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.076637+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.076637+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.077406+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.077406+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.087450+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:23.087450+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:24.052982+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: y(active, since 1.01867s) 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 
2026-03-09T15:48:24.052982+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: y(active, since 1.01867s) 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:24.145032+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTING 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:24.145032+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTING 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:24.270360+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:24.270360+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:24.271018+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Client ('192.168.123.101', 56886) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:24.271018+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Client ('192.168.123.101', 56886) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:24.371955+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:24.371955+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:24.372272+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTED 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:24.372272+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTED 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:24.389155+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:24.389155+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:24.417819+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:54.642 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:24.417819+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:24.421016+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:24.421016+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:24.872886+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:24.872886+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:25.148261+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.101:0/1869152731' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:25.148261+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.101:0/1869152731' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:24.712827+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:24.712827+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:54.642 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:25.486282+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 
192.168.123.101:0/1457351094' entity='client.admin' 2026-03-09T15:48:54.645 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.639+0000 7f5d7b0ec640 1 -- 192.168.123.109:0/1660780692 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7f5d3c005470 con 0x7f5d740739f0 2026-03-09T15:48:54.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.639+0000 7f5d637fe640 1 -- 192.168.123.109:0/1660780692 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) ==== 95+0+753 (secure 0 0 0) 0x7f5d6c042520 con 0x7f5d740739f0 2026-03-09T15:48:54.646 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T15:48:54.646 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:48:54.646 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"397fadc0-1bcf-11f1-8481-edc1430c2c03","modified":"2026-03-09T15:48:00.842739Z","created":"2026-03-09T15:48:00.842739Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T15:48:54.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.643+0000 7f5d617fa640 1 -- 192.168.123.109:0/1660780692 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5d5003da10 msgr2=0x7f5d5003fed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:54.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.643+0000 7f5d617fa640 1 --2- 192.168.123.109:0/1660780692 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5d5003da10 0x7f5d5003fed0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f5d680099c0 tx=0x7f5d68006eb0 comp rx=0 tx=0).stop 2026-03-09T15:48:54.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.643+0000 7f5d617fa640 1 -- 192.168.123.109:0/1660780692 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5d740739f0 msgr2=0x7f5d741177d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:54.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.643+0000 7f5d617fa640 1 --2- 192.168.123.109:0/1660780692 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5d740739f0 0x7f5d741177d0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f5d6c0045c0 tx=0x7f5d6c0045f0 comp rx=0 tx=0).stop 2026-03-09T15:48:54.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.643+0000 7f5d617fa640 1 -- 192.168.123.109:0/1660780692 shutdown_connections 2026-03-09T15:48:54.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.643+0000 7f5d617fa640 1 --2- 192.168.123.109:0/1660780692 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5d5003da10 0x7f5d5003fed0 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:54.649 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.643+0000 7f5d617fa640 1 --2- 192.168.123.109:0/1660780692 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5d740739f0 0x7f5d741177d0 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:54.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.643+0000 7f5d617fa640 1 -- 192.168.123.109:0/1660780692 >> 192.168.123.109:0/1660780692 conn(0x7f5d7406d270 msgr2=0x7f5d74072fc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:54.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.643+0000 7f5d617fa640 1 -- 192.168.123.109:0/1660780692 shutdown_connections 2026-03-09T15:48:54.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:54.643+0000 7f5d617fa640 1 -- 192.168.123.109:0/1660780692 wait complete. 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:25.486282+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 192.168.123.101:0/1457351094' entity='client.admin' 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:25.877108+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: y(active, since 2s) 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:25.877108+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: y(active, since 2s) 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:28.132267+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:28.132267+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:28.773105+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:28.773105+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:29.776614+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: y(active, since 6s) 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:29.776614+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: y(active, since 6s) 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:30.229654+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 192.168.123.101:0/168017167' entity='client.admin' 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:30.229654+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 
192.168.123.101:0/168017167' entity='client.admin' 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.720735+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.720735+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.724500+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.724500+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.725414+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:48:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.725414+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.729433+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.729433+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.735362+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.735362+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.738347+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:34.738347+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.255546+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 
2026-03-09T15:48:35.255546+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.259012+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.259012+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.259827+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.259827+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.261457+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.261457+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.262249+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.262249+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:35.263131+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:35.263131+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:35.312035+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:35.312035+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:35.361598+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating 
vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:35.361598+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:35.396145+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:35.396145+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.450121+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.450121+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.453315+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.453315+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.456253+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:35.456253+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:40.303714+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:40.303714+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:40.860373+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm09 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:40.860373+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm09 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:42.177384+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 
2026-03-09T15:48:42.177384+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:42.178004+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm09 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:42.178004+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm09 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:42.178762+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:42.178762+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:42.482178+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:42.482178+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:43.045223+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:43.045223+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:43.829980+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:43.829980+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:44.437332+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:44.437332+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:45.045524+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:45.045524+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:47.045794+0000 mgr.y (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B 
data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:47.045794+0000 mgr.y (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.109767+0000 mgr.y (mgr.14150) 21 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.109767+0000 mgr.y (mgr.14150) 21 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.308382+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.308382+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.310287+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.310287+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.312804+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.312804+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.314734+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.314734+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.315300+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.315300+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.316216+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.316216+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.316927+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.316927+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:47.317725+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:47.317725+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:47.350884+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:47.350884+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:47.380467+0000 mgr.y (mgr.14150) 24 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:55.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:47.380467+0000 mgr.y (mgr.14150) 24 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:47.412650+0000 mgr.y (mgr.14150) 25 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:47.412650+0000 mgr.y (mgr.14150) 25 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.452388+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.452388+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.455838+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.136 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.455838+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.459150+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:47.459150+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:49.046093+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:49.046093+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:51.116207+0000 mon.a (mon.0) 124 : audit [INF] from='client.? 192.168.123.101:0/3012074737' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:51.116207+0000 mon.a (mon.0) 124 : audit [INF] from='client.? 192.168.123.101:0/3012074737' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:51.046385+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:51.046385+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:51.315982+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.101:0/3012074737' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:51.315982+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 
192.168.123.101:0/3012074737' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:51.317849+0000 mon.a (mon.0) 126 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:51.317849+0000 mon.a (mon.0) 126 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.760360+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.760360+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:52.761587+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b;count:3 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:52.761587+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b;count:3 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.765368+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.765368+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.766539+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.766539+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.767997+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.767997+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.768658+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.768658+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.773022+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.773022+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.775070+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.775070+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.775786+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: audit 2026-03-09T15:48:52.775786+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:52.776597+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm09 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cephadm 2026-03-09T15:48:52.776597+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm09 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:53.046738+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: cluster 2026-03-09T15:48:53.046738+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:54 vm09 bash[22983]: debug 2026-03-09T15:48:54.651+0000 7fb593fa2640 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T15:48:55.305 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use 
KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:48:55.306 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:48:55.555 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:48:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:48:55.555 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:14.937553+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:48:55.716 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
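At this point the cephadm task polls the cluster until all three monitors from the applied mon spec show up in the monmap; the command it runs on vm09 (a `cephadm shell` wrapping `ceph mon dump -f json`) is logged on the next line. As a rough sketch only, not a command taken from this run, the same check can be reproduced by hand by counting the "mons" array in that JSON; the simplified `cephadm shell` form (no explicit --image/--fsid) and the python3 one-liner are illustrative assumptions:

    # Sketch: count monitors currently in the monmap (assumes python3 on the host)
    sudo cephadm shell -- ceph mon dump -f json \
      | python3 -c 'import json,sys; print(len(json.load(sys.stdin)["mons"]))'
    # The task keeps re-running its equivalent of this until the count reaches 3.
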
2026-03-09T15:48:55.717 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph mon dump -f json 2026-03-09T15:48:55.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:15.223723+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:15.223723+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:15.656726+0000 mon.a (mon.0) 59 : cluster [DBG] mgrmap e7: y(active, since 2s) 2026-03-09T15:48:55.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:15.656726+0000 mon.a (mon.0) 59 : cluster [DBG] mgrmap e7: y(active, since 2s) 2026-03-09T15:48:55.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:15.831307+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm01 2026-03-09T15:48:55.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:15.831307+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm01 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.196453+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.196453+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:17.196972+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm01 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:17.196972+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm01 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.197229+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.197229+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.542266+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.542266+0000 mon.a (mon.0) 62 
: audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.845304+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.845304+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:18.143742+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.101:0/3698565340' entity='client.admin' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:18.143742+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.101:0/3698565340' entity='client.admin' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.536807+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.536807+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:17.537890+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:17.537890+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.841521+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:17.841521+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:17.842475+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:17.842475+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:18.433632+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 
192.168.123.101:0/4031595610' entity='client.admin' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:18.433632+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.101:0/4031595610' entity='client.admin' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:18.766175+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.101:0/179694937' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:18.766175+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.101:0/179694937' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:18.824749+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:18.824749+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:19.141222+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:19.141222+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.101:0/2996174437' entity='mgr.y' 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:19.435111+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.101:0/179694937' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:19.435111+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.101:0/179694937' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:19.442122+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: y(active, since 6s) 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:19.442122+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: y(active, since 6s) 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:19.836256+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 192.168.123.101:0/1250862618' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:19.836256+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 
192.168.123.101:0/1250862618' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:23.033962+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:23.033962+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:48:55.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:23.034379+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:23.034379+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon y 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:23.039775+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:23.039775+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:23.039905+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: y(active, starting, since 0.00562542s) 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:23.039905+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: y(active, starting, since 0.00562542s) 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.041929+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.041929+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.042547+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.042547+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.043913+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.043913+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.044492+0000 
mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.044492+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.044988+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.044988+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:23.052265+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:23.052265+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon y is now available 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.076637+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.076637+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.077406+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.077406+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.087450+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:23.087450+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:24.052982+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: y(active, since 1.01867s) 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 
2026-03-09T15:48:24.052982+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: y(active, since 1.01867s) 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:24.145032+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTING 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:24.145032+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTING 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:24.270360+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:24.270360+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:24.271018+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Client ('192.168.123.101', 56886) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:24.271018+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Client ('192.168.123.101', 56886) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:24.371955+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:24.371955+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:24.372272+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTED 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:24.372272+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:15:48:24] ENGINE Bus STARTED 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:24.389155+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:24.389155+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:24.417819+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.937 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:24.417819+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:24.421016+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:24.421016+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:24.872886+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:24.872886+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:25.148261+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.101:0/1869152731' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:25.148261+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.101:0/1869152731' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:24.712827+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:24.712827+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:25.486282+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 192.168.123.101:0/1457351094' entity='client.admin' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:25.486282+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 
192.168.123.101:0/1457351094' entity='client.admin' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:25.877108+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: y(active, since 2s) 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:25.877108+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: y(active, since 2s) 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:28.132267+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:28.132267+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:28.773105+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:28.773105+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:29.776614+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: y(active, since 6s) 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:29.776614+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: y(active, since 6s) 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:30.229654+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 192.168.123.101:0/168017167' entity='client.admin' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:30.229654+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 
192.168.123.101:0/168017167' entity='client.admin' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.720735+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.720735+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.724500+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.724500+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.725414+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.725414+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.729433+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.729433+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.735362+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.735362+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.738347+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:34.738347+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.255546+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 
2026-03-09T15:48:35.255546+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.259012+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.259012+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.259827+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.259827+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.261457+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.261457+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.262249+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.262249+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:35.263131+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:48:55.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:35.263131+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:35.312035+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:35.312035+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:35.361598+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating 
vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:35.361598+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:35.396145+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:35.396145+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.450121+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.450121+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.453315+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.453315+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.456253+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:35.456253+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:40.303714+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:40.303714+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:40.860373+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm09 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:40.860373+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm09 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:42.177384+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 
2026-03-09T15:48:42.177384+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:42.178004+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm09 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:42.178004+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm09 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:42.178762+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:42.178762+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:42.482178+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:42.482178+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:43.045223+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:43.045223+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:43.829980+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:43.829980+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:44.437332+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:44.437332+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:45.045524+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:45.045524+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:47.045794+0000 mgr.y (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B 
data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:47.045794+0000 mgr.y (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.109767+0000 mgr.y (mgr.14150) 21 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.109767+0000 mgr.y (mgr.14150) 21 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.308382+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.308382+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.310287+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.310287+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.312804+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.312804+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.314734+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.314734+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.315300+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.315300+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.316216+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.316216+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.316927+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.316927+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:47.317725+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:47.317725+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:47.350884+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:47.350884+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:47.380467+0000 mgr.y (mgr.14150) 24 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:47.380467+0000 mgr.y (mgr.14150) 24 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:47.412650+0000 mgr.y (mgr.14150) 25 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:47.412650+0000 mgr.y (mgr.14150) 25 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.452388+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.452388+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.455838+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.940 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.455838+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.459150+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:47.459150+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:49.046093+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:49.046093+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:51.116207+0000 mon.a (mon.0) 124 : audit [INF] from='client.? 192.168.123.101:0/3012074737' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:51.116207+0000 mon.a (mon.0) 124 : audit [INF] from='client.? 192.168.123.101:0/3012074737' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:51.046385+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:51.046385+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:51.315982+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.101:0/3012074737' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:51.315982+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 
192.168.123.101:0/3012074737' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:51.317849+0000 mon.a (mon.0) 126 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:51.317849+0000 mon.a (mon.0) 126 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.760360+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.760360+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:52.761587+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b;count:3 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:52.761587+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm01:192.168.123.101=a;vm01:[v2:192.168.123.101:3301,v1:192.168.123.101:6790]=c;vm09:192.168.123.109=b;count:3 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.765368+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.765368+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.766539+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.766539+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.767997+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.767997+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.768658+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.768658+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.773022+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.773022+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.775070+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.775070+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.775786+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: audit 2026-03-09T15:48:52.775786+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:52.776597+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm09 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cephadm 2026-03-09T15:48:52.776597+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm09 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:53.046738+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: cluster 2026-03-09T15:48:53.046738+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: debug 2026-03-09T15:48:55.542+0000 7f14bc7d3640 1 mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: debug 2026-03-09T15:48:55.542+0000 7f14bc7d3640 1 mon.c@-1(synchronizing).osd e0 
register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: debug 2026-03-09T15:48:55.542+0000 7f14bc7d3640 1 mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: debug 2026-03-09T15:48:55.542+0000 7f14bc7d3640 1 mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: debug 2026-03-09T15:48:55.542+0000 7f14bc7d3640 1 mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: debug 2026-03-09T15:48:55.542+0000 7f14bc7d3640 1 mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: debug 2026-03-09T15:48:55.542+0000 7f14bc7d3640 0 mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: debug 2026-03-09T15:48:55.542+0000 7f14bc7d3640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: debug 2026-03-09T15:48:55.542+0000 7f14bc7d3640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: debug 2026-03-09T15:48:55.542+0000 7f14bc7d3640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T15:48:55.941 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:48:55 vm01 bash[28152]: debug 2026-03-09T15:48:55.546+0000 7f14bc7d3640 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T15:48:59.329 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:48:59.700 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 -- 192.168.123.109:0/3183569814 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fafc4103460 msgr2=0x7fafc4103840 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:59.700 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 --2- 192.168.123.109:0/3183569814 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fafc4103460 0x7fafc4103840 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7fafb40099b0 tx=0x7fafb402f340 comp rx=0 tx=0).stop 2026-03-09T15:48:59.700 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 -- 192.168.123.109:0/3183569814 shutdown_connections 2026-03-09T15:48:59.700 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 --2- 192.168.123.109:0/3183569814 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fafc4103460 0x7fafc4103840 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:59.700 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 -- 192.168.123.109:0/3183569814 >> 
192.168.123.109:0/3183569814 conn(0x7fafc40fd210 msgr2=0x7fafc40ff600 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:59.700 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 -- 192.168.123.109:0/3183569814 shutdown_connections 2026-03-09T15:48:59.700 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 -- 192.168.123.109:0/3183569814 wait complete. 2026-03-09T15:48:59.701 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 Processor -- start 2026-03-09T15:48:59.701 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 -- start start 2026-03-09T15:48:59.701 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fafc4103460 0x7fafc419b560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:59.702 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fafc419baa0 0x7fafc419fe30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:59.702 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fafc4102770 con 0x7fafc4103460 2026-03-09T15:48:59.702 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.695+0000 7fafc985c640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7fafc41025f0 con 0x7fafc419baa0 2026-03-09T15:48:59.702 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafc885a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fafc4103460 0x7fafc419b560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:59.702 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafc885a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fafc4103460 0x7fafc419b560 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.109:39390/0 (socket says 192.168.123.109:39390) 2026-03-09T15:48:59.702 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafc885a640 1 -- 192.168.123.109:0/3755121587 learned_addr learned my addr 192.168.123.109:0/3755121587 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:48:59.702 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafbbfff640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fafc419baa0 0x7fafc419fe30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:59.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafbbfff640 1 -- 192.168.123.109:0/3755121587 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fafc419baa0 msgr2=0x7fafc419fe30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 12 2026-03-09T15:48:59.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafbbfff640 1 -- 
192.168.123.109:0/3755121587 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fafc419baa0 msgr2=0x7fafc419fe30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-09T15:48:59.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafbbfff640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fafc419baa0 0x7fafc419fe30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T15:48:59.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafbbfff640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fafc419baa0 0x7fafc419fe30 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T15:48:59.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafc885a640 1 -- 192.168.123.109:0/3755121587 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fafc419baa0 msgr2=0x7fafc419fe30 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:48:59.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafc885a640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fafc419baa0 0x7fafc419fe30 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:59.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafc885a640 1 -- 192.168.123.109:0/3755121587 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fafc419c150 con 0x7fafc4103460 2026-03-09T15:48:59.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafc885a640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fafc4103460 0x7fafc419b560 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7fafb4004290 tx=0x7fafb40042c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:59.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafb9ffb640 1 -- 192.168.123.109:0/3755121587 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 309+0+0 (secure 0 0 0) 0x7fafb4038470 con 0x7fafc4103460 2026-03-09T15:48:59.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafc985c640 1 -- 192.168.123.109:0/3755121587 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fafc41a0370 con 0x7fafc4103460 2026-03-09T15:48:59.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.699+0000 7fafc985c640 1 -- 192.168.123.109:0/3755121587 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fafc41a0900 con 0x7fafc4103460 2026-03-09T15:48:59.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.703+0000 7fafb9ffb640 1 -- 192.168.123.109:0/3755121587 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fafb4038a50 con 0x7fafc4103460 2026-03-09T15:48:59.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.703+0000 7fafb9ffb640 1 -- 192.168.123.109:0/3755121587 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 309+0+0 (secure 0 0 0) 0x7fafb40415f0 con 0x7fafc4103460 
2026-03-09T15:48:59.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.703+0000 7fafb9ffb640 1 -- 192.168.123.109:0/3755121587 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7fafb404b440 con 0x7fafc4103460 2026-03-09T15:48:59.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.703+0000 7fafb9ffb640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fafa003d9a0 0x7fafa003fe60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:48:59.710 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.703+0000 7fafbbfff640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fafa003d9a0 0x7fafa003fe60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:48:59.710 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.703+0000 7fafb9ffb640 1 -- 192.168.123.109:0/3755121587 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7fafb403e070 con 0x7fafc4103460 2026-03-09T15:48:59.710 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.703+0000 7fafbbfff640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fafa003d9a0 0x7fafa003fe60 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7fafac006fb0 tx=0x7fafac008040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:48:59.710 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.703+0000 7fafc985c640 1 -- 192.168.123.109:0/3755121587 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7faf8c005180 con 0x7fafc4103460 2026-03-09T15:48:59.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.711+0000 7fafb9ffb640 1 -- 192.168.123.109:0/3755121587 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fafb4032180 con 0x7fafc4103460 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:54.666277+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:54.666277+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:54.666829+0000 mon.a (mon.0) 141 : cluster [INF] mon.a calling monitor election 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:54.666829+0000 mon.a (mon.0) 141 : cluster [INF] mon.a calling monitor election 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:54.668616+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 
2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:54.668616+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:55.047733+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:55.047733+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:55.562087+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:55.562087+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:55.659300+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:55.659300+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:56.561906+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:56.561906+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:56.659750+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:56.659750+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:56.664475+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:56.664475+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:57.047923+0000 mgr.y (mgr.14150) 34 : 
cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:57.047923+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:57.561540+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:57.561540+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:57.658931+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:57.658931+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:58.561821+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:58.561821+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:58.659374+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:58.659374+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.048079+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.048079+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.562105+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.562105+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": 
"c"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.659816+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.659816+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.672847+0000 mon.a (mon.0) 153 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.672847+0000 mon.a (mon.0) 153 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677356+0000 mon.a (mon.0) 154 : cluster [DBG] monmap epoch 2 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677356+0000 mon.a (mon.0) 154 : cluster [DBG] monmap epoch 2 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677414+0000 mon.a (mon.0) 155 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677414+0000 mon.a (mon.0) 155 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677457+0000 mon.a (mon.0) 156 : cluster [DBG] last_changed 2026-03-09T15:48:54.662055+0000 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677457+0000 mon.a (mon.0) 156 : cluster [DBG] last_changed 2026-03-09T15:48:54.662055+0000 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677498+0000 mon.a (mon.0) 157 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677498+0000 mon.a (mon.0) 157 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:48:59.745 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677543+0000 mon.a (mon.0) 158 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677543+0000 mon.a (mon.0) 158 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677589+0000 mon.a (mon.0) 159 : cluster [DBG] election_strategy: 1 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677589+0000 mon.a (mon.0) 159 : cluster [DBG] election_strategy: 1 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: 
cluster 2026-03-09T15:48:59.677629+0000 mon.a (mon.0) 160 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677629+0000 mon.a (mon.0) 160 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677670+0000 mon.a (mon.0) 161 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.677670+0000 mon.a (mon.0) 161 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.678138+0000 mon.a (mon.0) 162 : cluster [DBG] fsmap 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.678138+0000 mon.a (mon.0) 162 : cluster [DBG] fsmap 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.678224+0000 mon.a (mon.0) 163 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.678224+0000 mon.a (mon.0) 163 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.678468+0000 mon.a (mon.0) 164 : cluster [DBG] mgrmap e12: y(active, since 36s) 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.678468+0000 mon.a (mon.0) 164 : cluster [DBG] mgrmap e12: y(active, since 36s) 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.678756+0000 mon.a (mon.0) 165 : cluster [INF] overall HEALTH_OK 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: cluster 2026-03-09T15:48:59.678756+0000 mon.a (mon.0) 165 : cluster [INF] overall HEALTH_OK 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.685714+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.685714+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.695245+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.695245+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.701606+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:59.746 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.701606+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.706267+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.706267+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.718207+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:59.746 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:48:59 vm09 bash[22983]: audit 2026-03-09T15:48:59.718207+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:48:59.857 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.851+0000 7fafc985c640 1 -- 192.168.123.109:0/3755121587 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7faf8c005470 con 0x7fafc4103460 2026-03-09T15:48:59.858 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.855+0000 7fafb9ffb640 1 -- 192.168.123.109:0/3755121587 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 2 v2) ==== 95+0+1031 (secure 0 0 0) 0x7fafb4048310 con 0x7fafc4103460 2026-03-09T15:48:59.859 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:48:59.859 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":2,"fsid":"397fadc0-1bcf-11f1-8481-edc1430c2c03","modified":"2026-03-09T15:48:54.662055Z","created":"2026-03-09T15:48:00.842739Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T15:48:59.859 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 2 2026-03-09T15:48:59.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.859+0000 7faf937fe640 1 -- 192.168.123.109:0/3755121587 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fafa003d9a0 msgr2=0x7fafa003fe60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:59.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.859+0000 7faf937fe640 1 --2- 
192.168.123.109:0/3755121587 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fafa003d9a0 0x7fafa003fe60 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7fafac006fb0 tx=0x7fafac008040 comp rx=0 tx=0).stop 2026-03-09T15:48:59.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.859+0000 7faf937fe640 1 -- 192.168.123.109:0/3755121587 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fafc4103460 msgr2=0x7fafc419b560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:48:59.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.859+0000 7faf937fe640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fafc4103460 0x7fafc419b560 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7fafb4004290 tx=0x7fafb40042c0 comp rx=0 tx=0).stop 2026-03-09T15:48:59.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.859+0000 7faf937fe640 1 -- 192.168.123.109:0/3755121587 shutdown_connections 2026-03-09T15:48:59.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.859+0000 7faf937fe640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fafa003d9a0 0x7fafa003fe60 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:59.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.859+0000 7faf937fe640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fafc419baa0 0x7fafc419fe30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:59.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.859+0000 7faf937fe640 1 --2- 192.168.123.109:0/3755121587 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fafc4103460 0x7fafc419b560 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:48:59.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.859+0000 7faf937fe640 1 -- 192.168.123.109:0/3755121587 >> 192.168.123.109:0/3755121587 conn(0x7fafc40fd210 msgr2=0x7fafc4109b50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:48:59.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.859+0000 7faf937fe640 1 -- 192.168.123.109:0/3755121587 shutdown_connections 2026-03-09T15:48:59.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:48:59.859+0000 7faf937fe640 1 -- 192.168.123.109:0/3755121587 wait complete. 
2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:54.666277+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:54.666277+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:54.666829+0000 mon.a (mon.0) 141 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:54.666829+0000 mon.a (mon.0) 141 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:54.668616+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:54.668616+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:55.047733+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:55.047733+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:55.562087+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:55.562087+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:55.659300+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:55.659300+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:56.561906+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:56.561906+0000 mon.a (mon.0) 145 : audit 
[DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:56.659750+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:56.659750+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:56.664475+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:56.664475+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:57.047923+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:57.047923+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:00.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:57.561540+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:57.561540+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:57.658931+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:57.658931+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:58.561821+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:58.561821+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:58.659374+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.184 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:58.659374+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.048079+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.048079+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.562105+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.562105+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.659816+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.659816+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.672847+0000 mon.a (mon.0) 153 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.672847+0000 mon.a (mon.0) 153 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677356+0000 mon.a (mon.0) 154 : cluster [DBG] monmap epoch 2 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677356+0000 mon.a (mon.0) 154 : cluster [DBG] monmap epoch 2 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677414+0000 mon.a (mon.0) 155 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677414+0000 mon.a (mon.0) 155 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677457+0000 mon.a (mon.0) 156 : cluster [DBG] last_changed 2026-03-09T15:48:54.662055+0000 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677457+0000 mon.a (mon.0) 156 : cluster [DBG] last_changed 2026-03-09T15:48:54.662055+0000 2026-03-09T15:49:00.184 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677498+0000 mon.a (mon.0) 157 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677498+0000 mon.a (mon.0) 157 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677543+0000 mon.a (mon.0) 158 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677543+0000 mon.a (mon.0) 158 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677589+0000 mon.a (mon.0) 159 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677589+0000 mon.a (mon.0) 159 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677629+0000 mon.a (mon.0) 160 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677629+0000 mon.a (mon.0) 160 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677670+0000 mon.a (mon.0) 161 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.677670+0000 mon.a (mon.0) 161 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.678138+0000 mon.a (mon.0) 162 : cluster [DBG] fsmap 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.678138+0000 mon.a (mon.0) 162 : cluster [DBG] fsmap 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.678224+0000 mon.a (mon.0) 163 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.678224+0000 mon.a (mon.0) 163 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.678468+0000 mon.a (mon.0) 164 : cluster [DBG] mgrmap e12: y(active, since 36s) 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.678468+0000 mon.a (mon.0) 164 : cluster [DBG] mgrmap e12: y(active, since 36s) 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: cluster 2026-03-09T15:48:59.678756+0000 mon.a (mon.0) 165 : cluster [INF] overall HEALTH_OK 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 
bash[20728]: cluster 2026-03-09T15:48:59.678756+0000 mon.a (mon.0) 165 : cluster [INF] overall HEALTH_OK 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.685714+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.685714+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.695245+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.695245+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.701606+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.701606+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.706267+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.706267+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.718207+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:00.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:48:59 vm01 bash[20728]: audit 2026-03-09T15:48:59.718207+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:00.832 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:00 vm09 bash[22983]: audit 2026-03-09T15:48:59.855467+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 192.168.123.109:0/3755121587' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:00.832 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:00 vm09 bash[22983]: audit 2026-03-09T15:48:59.855467+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 
192.168.123.109:0/3755121587' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:00.832 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:00 vm09 bash[22983]: audit 2026-03-09T15:49:00.561913+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.832 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:00 vm09 bash[22983]: audit 2026-03-09T15:49:00.561913+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:00.832 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:00 vm09 bash[22983]: audit 2026-03-09T15:49:00.659533+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.832 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:00 vm09 bash[22983]: audit 2026-03-09T15:49:00.659533+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:00.948 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-09T15:49:00.948 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph mon dump -f json 2026-03-09T15:49:01.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:00 vm01 bash[20728]: audit 2026-03-09T15:48:59.855467+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 192.168.123.109:0/3755121587' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:01.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:00 vm01 bash[20728]: audit 2026-03-09T15:48:59.855467+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 
192.168.123.109:0/3755121587' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:01.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:00 vm01 bash[20728]: audit 2026-03-09T15:49:00.561913+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:01.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:00 vm01 bash[20728]: audit 2026-03-09T15:49:00.561913+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:01.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:00 vm01 bash[20728]: audit 2026-03-09T15:49:00.659533+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:01.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:00 vm01 bash[20728]: audit 2026-03-09T15:49:00.659533+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:01.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:49:01 vm01 bash[21002]: debug 2026-03-09T15:49:01.654+0000 7f2b353e7640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-09T15:49:05.559 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:01.570055+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:01.570055+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:01.570572+0000 mon.a (mon.0) 176 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:01.570572+0000 mon.a (mon.0) 176 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:01.572381+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:01.572381+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:01.572806+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:01.572806+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:06.933 
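The entries above show the cephadm task reaching "Waiting for 3 mons in monmap..." and polling the cluster with "sudo cephadm ... shell ... -- ceph mon dump -f json" on vm09. A minimal Python sketch of such a polling loop follows, using the image and fsid visible in the log; the helper name, retry count and sleep interval are illustrative assumptions, not the actual teuthology implementation.

    import json
    import subprocess
    import time

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "397fadc0-1bcf-11f1-8481-edc1430c2c03"

    def wait_for_mon_count(expected, attempts=60, delay=5):
        # Poll 'ceph mon dump -f json' through 'cephadm shell' until the monmap
        # lists at least 'expected' monitors (sketch only).
        cmd = ["sudo", "cephadm", "--image", IMAGE, "shell",
               "-c", "/etc/ceph/ceph.conf",
               "-k", "/etc/ceph/ceph.client.admin.keyring",
               "--fsid", FSID, "--",
               "ceph", "mon", "dump", "-f", "json"]
        for _ in range(attempts):
            proc = subprocess.run(cmd, capture_output=True, text=True)
            if proc.returncode == 0:
                monmap = json.loads(proc.stdout)
                if len(monmap.get("mons", [])) >= expected:
                    return monmap
            time.sleep(delay)
        raise TimeoutError("monmap never reached %d mons" % expected)

Note that, as the later entries show, the monmap can already list three mons while only two are in quorum; the check above only counts monmap membership, which is what "Waiting for 3 mons in monmap" asks for.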
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:01.573036+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:01.573036+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:02.562728+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:02.562728+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:03.048624+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:03.048624+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:03.562625+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:03.562625+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:04.562512+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:04.562512+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:05.048888+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:05.048888+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:05.563197+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 
2026-03-09T15:49:05.563197+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.562842+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.562842+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.572789+0000 mon.a (mon.0) 184 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.572789+0000 mon.a (mon.0) 184 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.576944+0000 mon.a (mon.0) 185 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.576944+0000 mon.a (mon.0) 185 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.576989+0000 mon.a (mon.0) 186 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.576989+0000 mon.a (mon.0) 186 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577030+0000 mon.a (mon.0) 187 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577030+0000 mon.a (mon.0) 187 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577070+0000 mon.a (mon.0) 188 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577070+0000 mon.a (mon.0) 188 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577113+0000 mon.a (mon.0) 189 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577113+0000 mon.a (mon.0) 189 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577184+0000 mon.a (mon.0) 190 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 
2026-03-09T15:49:06.577184+0000 mon.a (mon.0) 190 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577224+0000 mon.a (mon.0) 191 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577224+0000 mon.a (mon.0) 191 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577265+0000 mon.a (mon.0) 192 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577265+0000 mon.a (mon.0) 192 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577309+0000 mon.a (mon.0) 193 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577309+0000 mon.a (mon.0) 193 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577743+0000 mon.a (mon.0) 194 : cluster [DBG] fsmap 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577743+0000 mon.a (mon.0) 194 : cluster [DBG] fsmap 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577823+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.577823+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.578068+0000 mon.a (mon.0) 196 : cluster [DBG] mgrmap e12: y(active, since 43s) 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.578068+0000 mon.a (mon.0) 196 : cluster [DBG] mgrmap e12: y(active, since 43s) 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.578364+0000 mon.a (mon.0) 197 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.578364+0000 mon.a (mon.0) 197 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.583883+0000 mon.a (mon.0) 198 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.583883+0000 mon.a (mon.0) 198 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, 
quorum a,b 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.583898+0000 mon.a (mon.0) 199 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.583898+0000 mon.a (mon.0) 199 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.583908+0000 mon.a (mon.0) 200 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] is down (out of quorum) 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: cluster 2026-03-09T15:49:06.583908+0000 mon.a (mon.0) 200 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] is down (out of quorum) 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.586813+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.586813+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.589682+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.589682+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.592454+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.592454+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.595282+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.595282+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.598557+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.598557+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.599420+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:06.934 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.599420+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.600318+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:06.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:06 vm01 bash[20728]: audit 2026-03-09T15:49:06.600318+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:01.570055+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:01.570055+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:01.570572+0000 mon.a (mon.0) 176 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:01.570572+0000 mon.a (mon.0) 176 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:01.572381+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:01.572381+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:01.572806+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:01.572806+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:01.573036+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:01.573036+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:02.562728+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:02.562728+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:03.048624+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:03.048624+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:03.562625+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:03.562625+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:04.562512+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:04.562512+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:05.048888+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:05.048888+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:05.563197+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:05.563197+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.562842+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.562842+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.134 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.572789+0000 mon.a (mon.0) 184 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.572789+0000 mon.a (mon.0) 184 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.576944+0000 mon.a (mon.0) 185 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.576944+0000 mon.a (mon.0) 185 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.576989+0000 mon.a (mon.0) 186 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.576989+0000 mon.a (mon.0) 186 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577030+0000 mon.a (mon.0) 187 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577030+0000 mon.a (mon.0) 187 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577070+0000 mon.a (mon.0) 188 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577070+0000 mon.a (mon.0) 188 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577113+0000 mon.a (mon.0) 189 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577113+0000 mon.a (mon.0) 189 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577184+0000 mon.a (mon.0) 190 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577184+0000 mon.a (mon.0) 190 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577224+0000 mon.a (mon.0) 191 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577224+0000 mon.a (mon.0) 191 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577265+0000 mon.a (mon.0) 192 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 
mon.b 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577265+0000 mon.a (mon.0) 192 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577309+0000 mon.a (mon.0) 193 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577309+0000 mon.a (mon.0) 193 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577743+0000 mon.a (mon.0) 194 : cluster [DBG] fsmap 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577743+0000 mon.a (mon.0) 194 : cluster [DBG] fsmap 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577823+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.577823+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.578068+0000 mon.a (mon.0) 196 : cluster [DBG] mgrmap e12: y(active, since 43s) 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.578068+0000 mon.a (mon.0) 196 : cluster [DBG] mgrmap e12: y(active, since 43s) 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.578364+0000 mon.a (mon.0) 197 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.578364+0000 mon.a (mon.0) 197 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.583883+0000 mon.a (mon.0) 198 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.583883+0000 mon.a (mon.0) 198 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.583898+0000 mon.a (mon.0) 199 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.583898+0000 mon.a (mon.0) 199 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: cluster 2026-03-09T15:49:06.583908+0000 mon.a (mon.0) 200 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] is down (out of quorum) 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 
vm09 bash[22983]: cluster 2026-03-09T15:49:06.583908+0000 mon.a (mon.0) 200 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] is down (out of quorum) 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.586813+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.586813+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.589682+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.589682+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.592454+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.592454+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.595282+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.595282+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.598557+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.598557+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.599420+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:07.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.599420+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:07.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.600318+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:07.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:06 vm09 bash[22983]: audit 2026-03-09T15:49:06.600318+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:07.144 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 -- 192.168.123.109:0/3824005848 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff764077620 msgr2=0x7ff764077a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:07.144 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 --2- 192.168.123.109:0/3824005848 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff764077620 0x7ff764077a00 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7ff750009bf0 tx=0x7ff75002fb10 comp rx=0 tx=0).stop 2026-03-09T15:49:07.144 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 -- 192.168.123.109:0/3824005848 shutdown_connections 2026-03-09T15:49:07.144 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 --2- 192.168.123.109:0/3824005848 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff764077620 0x7ff764077a00 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:07.144 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 -- 192.168.123.109:0/3824005848 >> 192.168.123.109:0/3824005848 conn(0x7ff7641006d0 msgr2=0x7ff764102ac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:49:07.144 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 -- 192.168.123.109:0/3824005848 shutdown_connections 2026-03-09T15:49:07.144 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 -- 192.168.123.109:0/3824005848 wait complete. 2026-03-09T15:49:07.144 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 Processor -- start 2026-03-09T15:49:07.144 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 -- start start 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff764077620 0x7ff7641a5a50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff7641a5f90 0x7ff76419fb20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff7641a0060 0x7ff7641a0510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7ff764112510 con 0x7ff7641a0060 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7ff764112390 con 0x7ff764077620 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7ff764112690 
con 0x7ff7641a5f90 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76910e640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff7641a0060 0x7ff7641a0510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76910e640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff7641a0060 0x7ff7641a0510 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.109:49052/0 (socket says 192.168.123.109:49052) 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76910e640 1 -- 192.168.123.109:0/1640776524 learned_addr learned my addr 192.168.123.109:0/1640776524 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76910e640 1 -- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff7641a5f90 msgr2=0x7ff76419fb20 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76910e640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff7641a5f90 0x7ff76419fb20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76910e640 1 -- 192.168.123.109:0/1640776524 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff764077620 msgr2=0x7ff7641a5a50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76910e640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff764077620 0x7ff7641a5a50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76910e640 1 -- 192.168.123.109:0/1640776524 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff7641ac800 con 0x7ff7641a0060 2026-03-09T15:49:07.145 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76910e640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff7641a0060 0x7ff7641a0510 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7ff75800b5a0 tx=0x7ff75800ba70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:49:07.146 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff761ffb640 1 -- 192.168.123.109:0/1640776524 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff758012070 con 0x7ff7641a0060 2026-03-09T15:49:07.146 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff761ffb640 1 -- 192.168.123.109:0/1640776524 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ff758004480 con 0x7ff7641a0060 2026-03-09T15:49:07.146 
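The stderr trace above is the admin CLI's messenger session: it connects to mon.a over msgr2 (banner/hello exchange, learned_addr), drops the other candidate mon connections, and subscribes to config and monmap updates before issuing commands. The same interaction is available programmatically through the python-rados binding; a small sketch, assuming the usual conf and admin keyring paths:

    import json
    import rados

    # Connect as client.admin; the conf and keyring paths are assumptions.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()   # performs the mon handshake and subscriptions seen above
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "mon dump", "format": "json"}), b"")
    if ret == 0:
        monmap = json.loads(outbuf)
        print(monmap["epoch"], [m["name"] for m in monmap["mons"]], monmap["quorum"])
    cluster.shutdown()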
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff761ffb640 1 -- 192.168.123.109:0/1640776524 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff75800d940 con 0x7ff7641a0060 2026-03-09T15:49:07.146 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.139+0000 7ff76ab98640 1 -- 192.168.123.109:0/1640776524 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff7641acaf0 con 0x7ff7641a0060 2026-03-09T15:49:07.147 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.143+0000 7ff76ab98640 1 -- 192.168.123.109:0/1640776524 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff7641ad000 con 0x7ff7641a0060 2026-03-09T15:49:07.148 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.143+0000 7ff76ab98640 1 -- 192.168.123.109:0/1640776524 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff72c005180 con 0x7ff7641a0060 2026-03-09T15:49:07.148 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.143+0000 7ff761ffb640 1 -- 192.168.123.109:0/1640776524 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7ff75800dae0 con 0x7ff7641a0060 2026-03-09T15:49:07.148 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.143+0000 7ff761ffb640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff74403dd60 0x7ff744040220 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:07.148 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.143+0000 7ff761ffb640 1 -- 192.168.123.109:0/1640776524 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7ff758051480 con 0x7ff7641a0060 2026-03-09T15:49:07.148 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.143+0000 7ff76890d640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff74403dd60 0x7ff744040220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:07.148 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.143+0000 7ff76890d640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff74403dd60 0x7ff744040220 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7ff750009bf0 tx=0x7ff750034580 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:49:07.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.147+0000 7ff761ffb640 1 -- 192.168.123.109:0/1640776524 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff75801c070 con 0x7ff7641a0060 2026-03-09T15:49:07.285 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.279+0000 7ff76ab98640 1 -- 192.168.123.109:0/1640776524 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7ff72c005740 con 0x7ff7641a0060 2026-03-09T15:49:07.286 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.279+0000 7ff761ffb640 1 -- 192.168.123.109:0/1640776524 <== mon.0 v2:192.168.123.101:3300/0 7 
==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 3 v3) ==== 95+0+1307 (secure 0 0 0) 0x7ff75801d2a0 con 0x7ff7641a0060 2026-03-09T15:49:07.286 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:49:07.286 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":3,"fsid":"397fadc0-1bcf-11f1-8481-edc1430c2c03","modified":"2026-03-09T15:49:01.563742Z","created":"2026-03-09T15:48:00.842739Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3301","nonce":0},{"type":"v1","addr":"192.168.123.101:6790","nonce":0}]},"addr":"192.168.123.101:6790/0","public_addr":"192.168.123.101:6790/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T15:49:07.286 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 3 2026-03-09T15:49:07.288 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 -- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff74403dd60 msgr2=0x7ff744040220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:07.288 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff74403dd60 0x7ff744040220 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7ff750009bf0 tx=0x7ff750034580 comp rx=0 tx=0).stop 2026-03-09T15:49:07.288 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 -- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff7641a0060 msgr2=0x7ff7641a0510 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:07.288 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff7641a0060 0x7ff7641a0510 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7ff75800b5a0 tx=0x7ff75800ba70 comp rx=0 tx=0).stop 2026-03-09T15:49:07.289 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 -- 192.168.123.109:0/1640776524 shutdown_connections 2026-03-09T15:49:07.289 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff7641a0060 0x7ff7641a0510 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:07.289 
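The monmap JSON above lists three monitors (a, b and c on ranks 0-2) but a quorum of only ranks [0,1], which matches the "1/3 mons down, quorum a,b (MON_DOWN)" warning logged a few seconds earlier. A short illustrative filter that derives that summary from the same document (read from stdin):

    import json
    import sys

    # Usage (illustrative): ceph mon dump -f json | python3 check_quorum.py
    monmap = json.load(sys.stdin)
    in_quorum = set(monmap["quorum"])                  # e.g. {0, 1}
    up = [m["name"] for m in monmap["mons"] if m["rank"] in in_quorum]
    down = [m["name"] for m in monmap["mons"] if m["rank"] not in in_quorum]
    if down:
        print("%d/%d mons down, quorum %s"
              % (len(down), len(monmap["mons"]), ",".join(up)))
    else:
        print("all %d mons in quorum" % len(monmap["mons"]))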
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff74403dd60 0x7ff744040220 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:07.289 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff7641a5f90 0x7ff76419fb20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:07.289 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 --2- 192.168.123.109:0/1640776524 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff764077620 0x7ff7641a5a50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:07.289 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 -- 192.168.123.109:0/1640776524 >> 192.168.123.109:0/1640776524 conn(0x7ff7641006d0 msgr2=0x7ff764101400 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:49:07.289 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 -- 192.168.123.109:0/1640776524 shutdown_connections 2026-03-09T15:49:07.289 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:07.283+0000 7ff76ab98640 1 -- 192.168.123.109:0/1640776524 wait complete. 2026-03-09T15:49:07.346 INFO:tasks.cephadm:Generating final ceph.conf file... 2026-03-09T15:49:07.346 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph config generate-minimal-conf 2026-03-09T15:49:07.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.601250+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.601250+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.601380+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.601380+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.645018+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.645018+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.645540+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:07.934 
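The command above ("ceph config generate-minimal-conf" via cephadm shell on vm01) is how the task produces the config it announces with "Generating final ceph.conf file...". A sketch of capturing the same minimal conf from a script; the destination path is an assumption for illustration:

    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "397fadc0-1bcf-11f1-8481-edc1430c2c03"

    # Ask the monitors for a minimal client config, as the command above does,
    # and keep a local copy (path chosen for illustration only).
    minimal_conf = subprocess.run(
        ["sudo", "cephadm", "--image", IMAGE, "shell",
         "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring",
         "--fsid", FSID, "--", "ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True).stdout
    with open("/tmp/ceph.minimal.conf", "w") as fh:
        fh.write(minimal_conf)

The minimal conf typically carries only the fsid and mon_host entries a client needs to reach the monitors, which is why the journalctl entries above also show the cephadm mgr module refreshing /etc/ceph/ceph.conf on vm01 and vm09 after the monmap change.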
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.645540+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.688529+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.688529+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.693558+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.693558+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.697993+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.697993+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.701006+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.701006+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.703794+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.703794+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.717964+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.717964+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.720564+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.720564+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.723232+0000 mon.a 
(mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.723232+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.726256+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.726256+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.726481+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.726481+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.727118+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.727118+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.727564+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.727564+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.727958+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:06.727958+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.728435+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm01 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:06.728435+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm01 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cluster 2026-03-09T15:49:07.049029+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 
pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cluster 2026-03-09T15:49:07.049029+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.113182+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.113182+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.117318+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.117318+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:07.117911+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:07.117911+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.118586+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.118586+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.119148+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.119148+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.119630+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.119630+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:07.120163+0000 mgr.y (mgr.14150) 47 : cephadm 
[INF] Reconfiguring daemon mon.a on vm01 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: cephadm 2026-03-09T15:49:07.120163+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm01 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.283274+0000 mon.a (mon.0) 225 : audit [DBG] from='client.? 192.168.123.109:0/1640776524' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.283274+0000 mon.a (mon.0) 225 : audit [DBG] from='client.? 192.168.123.109:0/1640776524' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.557404+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.557404+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.561024+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.561024+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:07.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.562105+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:07.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.562105+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:07.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.563191+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:07.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.563191+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:07.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.563360+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.563360+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:07.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.563922+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:07.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:07 vm01 bash[20728]: audit 2026-03-09T15:49:07.563922+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.601250+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.601250+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.601380+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.601380+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.645018+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.645018+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.645540+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.645540+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.688529+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.688529+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.693558+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.693558+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.697993+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.697993+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.701006+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.701006+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.703794+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.703794+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.717964+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.717964+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.720564+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.720564+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.723232+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.723232+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.726256+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.726256+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.726481+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.726481+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.727118+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.727118+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.727564+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.727564+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.727958+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:06.727958+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.728435+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm01 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:06.728435+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm01 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cluster 2026-03-09T15:49:07.049029+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cluster 2026-03-09T15:49:07.049029+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.113182+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.113182+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.117318+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.117318+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:07.117911+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:07.117911+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.118586+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.118586+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.119148+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.119148+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.119630+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.119630+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:07.120163+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm01 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: cephadm 2026-03-09T15:49:07.120163+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm01 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.283274+0000 mon.a (mon.0) 225 : audit [DBG] from='client.? 192.168.123.109:0/1640776524' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.283274+0000 mon.a (mon.0) 225 : audit [DBG] from='client.? 
192.168.123.109:0/1640776524' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.557404+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.557404+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.561024+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.561024+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.562105+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.562105+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.563191+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.563191+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.563360+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.563360+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.563922+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:07 vm09 bash[22983]: audit 2026-03-09T15:49:07.563922+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:54.666277+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:54.666277+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:54.666829+0000 mon.a (mon.0) 141 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:54.666829+0000 mon.a (mon.0) 141 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:54.668616+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:54.668616+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:55.047733+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:55.047733+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:55.562087+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:55.562087+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:55.659300+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:55.659300+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:56.561906+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:56.561906+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 
bash[28152]: audit 2026-03-09T15:48:56.659750+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:56.659750+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:56.664475+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:56.664475+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:57.047923+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:57.047923+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:57.561540+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:57.561540+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:57.658931+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:57.658931+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:58.561821+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:58.561821+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.935 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:58.659374+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:58.659374+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon 
metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.048079+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.048079+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.562105+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.562105+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.659816+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.659816+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.672847+0000 mon.a (mon.0) 153 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.672847+0000 mon.a (mon.0) 153 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677356+0000 mon.a (mon.0) 154 : cluster [DBG] monmap epoch 2 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677356+0000 mon.a (mon.0) 154 : cluster [DBG] monmap epoch 2 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677414+0000 mon.a (mon.0) 155 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677414+0000 mon.a (mon.0) 155 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677457+0000 mon.a (mon.0) 156 : cluster [DBG] last_changed 2026-03-09T15:48:54.662055+0000 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677457+0000 mon.a (mon.0) 156 : cluster [DBG] last_changed 2026-03-09T15:48:54.662055+0000 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677498+0000 mon.a (mon.0) 157 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:08.936 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677498+0000 mon.a (mon.0) 157 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677543+0000 mon.a (mon.0) 158 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677543+0000 mon.a (mon.0) 158 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677589+0000 mon.a (mon.0) 159 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677589+0000 mon.a (mon.0) 159 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677629+0000 mon.a (mon.0) 160 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677629+0000 mon.a (mon.0) 160 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677670+0000 mon.a (mon.0) 161 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.677670+0000 mon.a (mon.0) 161 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.678138+0000 mon.a (mon.0) 162 : cluster [DBG] fsmap 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.678138+0000 mon.a (mon.0) 162 : cluster [DBG] fsmap 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.678224+0000 mon.a (mon.0) 163 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.678224+0000 mon.a (mon.0) 163 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.678468+0000 mon.a (mon.0) 164 : cluster [DBG] mgrmap e12: y(active, since 36s) 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.678468+0000 mon.a (mon.0) 164 : cluster [DBG] mgrmap e12: y(active, since 36s) 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.678756+0000 mon.a (mon.0) 165 : cluster [INF] overall HEALTH_OK 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:48:59.678756+0000 mon.a (mon.0) 165 : cluster [INF] overall HEALTH_OK 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 
2026-03-09T15:48:59.685714+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.685714+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.695245+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.695245+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.936 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.701606+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.701606+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.706267+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.706267+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.718207+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.718207+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.855467+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 192.168.123.109:0/3755121587' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:48:59.855467+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 
192.168.123.109:0/3755121587' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:00.561913+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:00.561913+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:00.659533+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:00.659533+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:01.570055+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:01.570055+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:01.570572+0000 mon.a (mon.0) 176 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:01.570572+0000 mon.a (mon.0) 176 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:01.572381+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:01.572381+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:01.572806+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:01.572806+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:01.573036+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 
2026-03-09T15:49:01.573036+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:02.562728+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:02.562728+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:03.048624+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:03.048624+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:03.562625+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:03.562625+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:04.562512+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:04.562512+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:05.048888+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:05.048888+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:05.563197+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:05.563197+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.562842+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.562842+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.572789+0000 mon.a (mon.0) 184 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.572789+0000 mon.a (mon.0) 184 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.576944+0000 mon.a (mon.0) 185 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.576944+0000 mon.a (mon.0) 185 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:08.937 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.576989+0000 mon.a (mon.0) 186 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.576989+0000 mon.a (mon.0) 186 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577030+0000 mon.a (mon.0) 187 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577030+0000 mon.a (mon.0) 187 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577070+0000 mon.a (mon.0) 188 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577070+0000 mon.a (mon.0) 188 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577113+0000 mon.a (mon.0) 189 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577113+0000 mon.a (mon.0) 189 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577184+0000 mon.a (mon.0) 190 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577184+0000 mon.a (mon.0) 190 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577224+0000 mon.a (mon.0) 191 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:08.938 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577224+0000 mon.a (mon.0) 191 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577265+0000 mon.a (mon.0) 192 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577265+0000 mon.a (mon.0) 192 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577309+0000 mon.a (mon.0) 193 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577309+0000 mon.a (mon.0) 193 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577743+0000 mon.a (mon.0) 194 : cluster [DBG] fsmap 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577743+0000 mon.a (mon.0) 194 : cluster [DBG] fsmap 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577823+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.577823+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.578068+0000 mon.a (mon.0) 196 : cluster [DBG] mgrmap e12: y(active, since 43s) 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.578068+0000 mon.a (mon.0) 196 : cluster [DBG] mgrmap e12: y(active, since 43s) 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.578364+0000 mon.a (mon.0) 197 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.578364+0000 mon.a (mon.0) 197 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.583883+0000 mon.a (mon.0) 198 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.583883+0000 mon.a (mon.0) 198 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.583898+0000 mon.a (mon.0) 199 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 
2026-03-09T15:49:06.583898+0000 mon.a (mon.0) 199 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.583908+0000 mon.a (mon.0) 200 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] is down (out of quorum) 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:06.583908+0000 mon.a (mon.0) 200 : cluster [WRN] mon.c (rank 2) addr [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] is down (out of quorum) 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.586813+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.586813+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.589682+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.589682+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.592454+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.592454+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.595282+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.595282+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.598557+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.598557+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.599420+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.599420+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 
09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.600318+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.600318+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.601250+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:49:08.938 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.601250+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.601380+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.601380+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.645018+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.645018+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.645540+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.645540+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.688529+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.688529+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.693558+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.693558+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.697993+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.697993+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.701006+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.701006+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.703794+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.703794+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.717964+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.717964+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.720564+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.720564+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.723232+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.723232+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.726256+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.726256+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.726481+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.726481+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.727118+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.727118+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.727564+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.727564+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.727958+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:06.727958+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.728435+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm01 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:06.728435+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm01 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:07.049029+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:07.049029+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.113182+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.113182+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.117318+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.117318+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:07.117911+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:07.117911+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.118586+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.118586+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.119148+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.119148+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.119630+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.119630+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:07.120163+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm01 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cephadm 2026-03-09T15:49:07.120163+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm01 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.283274+0000 mon.a (mon.0) 225 : audit [DBG] from='client.? 192.168.123.109:0/1640776524' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.283274+0000 mon.a (mon.0) 225 : audit [DBG] from='client.? 
192.168.123.109:0/1640776524' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.557404+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.557404+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.561024+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.561024+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:08.939 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.562105+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.562105+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:49:08.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.563191+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.563191+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:49:08.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.563360+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.563360+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:08.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.563922+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:08.940 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: audit 2026-03-09T15:49:07.563922+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:03.564973+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 
2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:03.564973+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.574935+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.574935+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.581234+0000 mon.a (mon.0) 239 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.581234+0000 mon.a (mon.0) 239 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.582226+0000 mon.b (mon.1) 3 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.582226+0000 mon.b (mon.1) 3 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.583642+0000 mon.a (mon.0) 240 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.583642+0000 mon.a (mon.0) 240 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587736+0000 mon.a (mon.0) 241 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587736+0000 mon.a (mon.0) 241 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587789+0000 mon.a (mon.0) 242 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587789+0000 mon.a (mon.0) 242 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587801+0000 mon.a (mon.0) 243 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587801+0000 mon.a (mon.0) 243 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587810+0000 mon.a (mon.0) 244 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587810+0000 mon.a (mon.0) 244 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:09.383 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587819+0000 mon.a (mon.0) 245 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587819+0000 mon.a (mon.0) 245 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587828+0000 mon.a (mon.0) 246 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587828+0000 mon.a (mon.0) 246 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587838+0000 mon.a (mon.0) 247 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587838+0000 mon.a (mon.0) 247 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587846+0000 mon.a (mon.0) 248 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587846+0000 mon.a (mon.0) 248 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587855+0000 mon.a (mon.0) 249 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.587855+0000 mon.a (mon.0) 249 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.588093+0000 mon.a (mon.0) 250 : cluster [DBG] fsmap 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.588093+0000 mon.a (mon.0) 250 : cluster [DBG] fsmap 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.588113+0000 mon.a (mon.0) 251 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.588113+0000 mon.a (mon.0) 251 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.588236+0000 mon.a (mon.0) 252 : cluster [DBG] mgrmap e12: y(active, since 45s) 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.588236+0000 mon.a (mon.0) 252 : cluster [DBG] mgrmap e12: y(active, since 45s) 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.588322+0000 mon.a (mon.0) 253 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, 
quorum a,b) 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.588322+0000 mon.a (mon.0) 253 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,b) 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.588334+0000 mon.a (mon.0) 254 : cluster [INF] Cluster is now healthy 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.588334+0000 mon.a (mon.0) 254 : cluster [INF] Cluster is now healthy 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.591371+0000 mon.a (mon.0) 255 : cluster [INF] overall HEALTH_OK 2026-03-09T15:49:09.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:08 vm09 bash[22983]: cluster 2026-03-09T15:49:08.591371+0000 mon.a (mon.0) 255 : cluster [INF] overall HEALTH_OK 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:03.564973+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:03.564973+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.574935+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.574935+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.581234+0000 mon.a (mon.0) 239 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.581234+0000 mon.a (mon.0) 239 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.582226+0000 mon.b (mon.1) 3 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.582226+0000 mon.b (mon.1) 3 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.583642+0000 mon.a (mon.0) 240 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.583642+0000 mon.a (mon.0) 240 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587736+0000 mon.a (mon.0) 241 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587736+0000 mon.a (mon.0) 241 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 
2026-03-09T15:49:08.587789+0000 mon.a (mon.0) 242 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587789+0000 mon.a (mon.0) 242 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587801+0000 mon.a (mon.0) 243 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587801+0000 mon.a (mon.0) 243 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587810+0000 mon.a (mon.0) 244 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587810+0000 mon.a (mon.0) 244 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:09.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587819+0000 mon.a (mon.0) 245 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587819+0000 mon.a (mon.0) 245 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587828+0000 mon.a (mon.0) 246 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587828+0000 mon.a (mon.0) 246 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587838+0000 mon.a (mon.0) 247 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587838+0000 mon.a (mon.0) 247 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587846+0000 mon.a (mon.0) 248 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587846+0000 mon.a (mon.0) 248 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587855+0000 mon.a (mon.0) 249 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:03.564973+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:03.564973+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T15:49:09.434 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.574935+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.574935+0000 mon.c (mon.2) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.581234+0000 mon.a (mon.0) 239 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.581234+0000 mon.a (mon.0) 239 : cluster [INF] mon.a calling monitor election 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.582226+0000 mon.b (mon.1) 3 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.582226+0000 mon.b (mon.1) 3 : cluster [INF] mon.b calling monitor election 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.583642+0000 mon.a (mon.0) 240 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.583642+0000 mon.a (mon.0) 240 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587736+0000 mon.a (mon.0) 241 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587736+0000 mon.a (mon.0) 241 : cluster [DBG] monmap epoch 3 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587789+0000 mon.a (mon.0) 242 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587789+0000 mon.a (mon.0) 242 : cluster [DBG] fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587801+0000 mon.a (mon.0) 243 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587801+0000 mon.a (mon.0) 243 : cluster [DBG] last_changed 2026-03-09T15:49:01.563742+0000 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587810+0000 mon.a (mon.0) 244 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587810+0000 mon.a (mon.0) 244 : cluster [DBG] created 2026-03-09T15:48:00.842739+0000 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587819+0000 mon.a (mon.0) 245 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587819+0000 mon.a (mon.0) 245 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587828+0000 mon.a (mon.0) 246 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587828+0000 mon.a (mon.0) 246 : cluster [DBG] election_strategy: 1 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587838+0000 mon.a (mon.0) 247 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587838+0000 mon.a (mon.0) 247 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587846+0000 mon.a (mon.0) 248 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587846+0000 mon.a (mon.0) 248 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587855+0000 mon.a (mon.0) 249 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.587855+0000 mon.a (mon.0) 249 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.588093+0000 mon.a (mon.0) 250 : cluster [DBG] fsmap 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.588093+0000 mon.a (mon.0) 250 : cluster [DBG] fsmap 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.588113+0000 mon.a (mon.0) 251 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.588113+0000 mon.a (mon.0) 251 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.588236+0000 mon.a (mon.0) 252 : cluster [DBG] mgrmap e12: y(active, since 45s) 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.588236+0000 mon.a (mon.0) 252 : cluster [DBG] mgrmap e12: y(active, since 45s) 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.588322+0000 mon.a (mon.0) 253 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,b) 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.588322+0000 mon.a (mon.0) 253 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,b) 
2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.588334+0000 mon.a (mon.0) 254 : cluster [INF] Cluster is now healthy 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.588334+0000 mon.a (mon.0) 254 : cluster [INF] Cluster is now healthy 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.591371+0000 mon.a (mon.0) 255 : cluster [INF] overall HEALTH_OK 2026-03-09T15:49:09.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:08 vm01 bash[28152]: cluster 2026-03-09T15:49:08.591371+0000 mon.a (mon.0) 255 : cluster [INF] overall HEALTH_OK 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.587855+0000 mon.a (mon.0) 249 : cluster [DBG] 2: [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] mon.c 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.588093+0000 mon.a (mon.0) 250 : cluster [DBG] fsmap 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.588093+0000 mon.a (mon.0) 250 : cluster [DBG] fsmap 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.588113+0000 mon.a (mon.0) 251 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.588113+0000 mon.a (mon.0) 251 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.588236+0000 mon.a (mon.0) 252 : cluster [DBG] mgrmap e12: y(active, since 45s) 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.588236+0000 mon.a (mon.0) 252 : cluster [DBG] mgrmap e12: y(active, since 45s) 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.588322+0000 mon.a (mon.0) 253 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,b) 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.588322+0000 mon.a (mon.0) 253 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,b) 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.588334+0000 mon.a (mon.0) 254 : cluster [INF] Cluster is now healthy 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.588334+0000 mon.a (mon.0) 254 : cluster [INF] Cluster is now healthy 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.591371+0000 mon.a (mon.0) 255 : cluster [INF] overall HEALTH_OK 2026-03-09T15:49:09.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:08 vm01 bash[20728]: cluster 2026-03-09T15:49:08.591371+0000 mon.a (mon.0) 255 : cluster [INF] overall HEALTH_OK 2026-03-09T15:49:10.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:10 vm09 bash[22983]: cluster 2026-03-09T15:49:09.049200+0000 mgr.y 
(mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:10.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:10 vm09 bash[22983]: cluster 2026-03-09T15:49:09.049200+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:10.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:10 vm09 bash[22983]: audit 2026-03-09T15:49:09.562904+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:10.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:10 vm09 bash[22983]: audit 2026-03-09T15:49:09.562904+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:10.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:10 vm01 bash[28152]: cluster 2026-03-09T15:49:09.049200+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:10.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:10 vm01 bash[28152]: cluster 2026-03-09T15:49:09.049200+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:10.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:10 vm01 bash[28152]: audit 2026-03-09T15:49:09.562904+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:10.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:10 vm01 bash[28152]: audit 2026-03-09T15:49:09.562904+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:10.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:10 vm01 bash[20728]: cluster 2026-03-09T15:49:09.049200+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:10.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:10 vm01 bash[20728]: cluster 2026-03-09T15:49:09.049200+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:10.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:10 vm01 bash[20728]: audit 2026-03-09T15:49:09.562904+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:10.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:10 vm01 bash[20728]: audit 2026-03-09T15:49:09.562904+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:49:10.933 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:49:10 vm01 bash[21002]: debug 2026-03-09T15:49:10.558+0000 7f2b353e7640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-09T15:49:12.009 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:49:12.178 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.170+0000 7ff657a1d640 1 -- 192.168.123.101:0/1858350558 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff650105920 msgr2=0x7ff650109970 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:12.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.170+0000 7ff657a1d640 1 --2- 192.168.123.101:0/1858350558 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff650105920 0x7ff650109970 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7ff640009960 tx=0x7ff64002f140 comp rx=0 tx=0).stop 2026-03-09T15:49:12.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 -- 192.168.123.101:0/1858350558 shutdown_connections 2026-03-09T15:49:12.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 --2- 192.168.123.101:0/1858350558 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff65010a0a0 0x7ff650111c20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:12.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 --2- 192.168.123.101:0/1858350558 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff650105920 0x7ff650109970 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:12.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 --2- 192.168.123.101:0/1858350558 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff650104f70 0x7ff650105350 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:12.180 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 -- 192.168.123.101:0/1858350558 >> 192.168.123.101:0/1858350558 conn(0x7ff650100a70 msgr2=0x7ff650102e90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:49:12.180 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 -- 192.168.123.101:0/1858350558 shutdown_connections 2026-03-09T15:49:12.180 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 -- 192.168.123.101:0/1858350558 wait complete. 
2026-03-09T15:49:12.180 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 Processor -- start 2026-03-09T15:49:12.180 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 -- start start 2026-03-09T15:49:12.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff650104f70 0x7ff650077600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:12.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff650105920 0x7ff650077b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:12.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff65010a0a0 0x7ff650078080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:12.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7ff650114350 con 0x7ff65010a0a0 2026-03-09T15:49:12.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7ff6501141d0 con 0x7ff650104f70 2026-03-09T15:49:12.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7ff6501144d0 con 0x7ff650105920 2026-03-09T15:49:12.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff655792640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff650104f70 0x7ff650077600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:12.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff655792640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff650104f70 0x7ff650077600 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:50950/0 (socket says 192.168.123.101:50950) 2026-03-09T15:49:12.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff655792640 1 -- 192.168.123.101:0/1182050751 learned_addr learned my addr 192.168.123.101:0/1182050751 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:49:12.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff655792640 1 -- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff650105920 msgr2=0x7ff650077b40 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:49:12.181 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff655f93640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff65010a0a0 0x7ff650078080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:12.181 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff654f91640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff650105920 0x7ff650077b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:12.182 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff655792640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff650105920 0x7ff650077b40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:12.182 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff655792640 1 -- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff65010a0a0 msgr2=0x7ff650078080 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:12.182 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff655792640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff65010a0a0 0x7ff650078080 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:12.182 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff655792640 1 -- 192.168.123.101:0/1182050751 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff6501ad140 con 0x7ff650104f70 2026-03-09T15:49:12.182 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff655f93640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff65010a0a0 0x7ff650078080 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:49:12.182 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff655792640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff650104f70 0x7ff650077600 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7ff6380027e0 tx=0x7ff638002cb0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:49:12.182 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff654f91640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff650105920 0x7ff650077b40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:49:12.182 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff6467fc640 1 -- 192.168.123.101:0/1182050751 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff63800ed40 con 0x7ff650104f70 2026-03-09T15:49:12.182 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 -- 192.168.123.101:0/1182050751 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff6501ad430 con 0x7ff650104f70 2026-03-09T15:49:12.182 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff657a1d640 1 -- 192.168.123.101:0/1182050751 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff6501ad9c0 con 0x7ff650104f70 2026-03-09T15:49:12.182 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff6467fc640 1 -- 192.168.123.101:0/1182050751 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ff638010870 con 0x7ff650104f70 2026-03-09T15:49:12.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.174+0000 7ff6467fc640 1 -- 192.168.123.101:0/1182050751 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff63800f620 con 0x7ff650104f70 2026-03-09T15:49:12.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.178+0000 7ff6467fc640 1 -- 192.168.123.101:0/1182050751 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7ff63800f7c0 con 0x7ff650104f70 2026-03-09T15:49:12.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.178+0000 7ff6467fc640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff61803ddf0 0x7ff6180402b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:12.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.178+0000 7ff6467fc640 1 -- 192.168.123.101:0/1182050751 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7ff6380534d0 con 0x7ff650104f70 2026-03-09T15:49:12.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.178+0000 7ff657a1d640 1 -- 192.168.123.101:0/1182050751 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff61c005180 con 0x7ff650104f70 2026-03-09T15:49:12.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.178+0000 7ff654f91640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff61803ddf0 0x7ff6180402b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:12.185 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.178+0000 7ff654f91640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff61803ddf0 0x7ff6180402b0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7ff640002fd0 tx=0x7ff640002f20 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:49:12.187 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.182+0000 7ff6467fc640 1 -- 192.168.123.101:0/1182050751 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff63803fe20 con 0x7ff650104f70 2026-03-09T15:49:12.286 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.278+0000 7ff657a1d640 1 -- 192.168.123.101:0/1182050751 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7ff61c005470 con 0x7ff650104f70 2026-03-09T15:49:12.286 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.278+0000 7ff6467fc640 1 -- 192.168.123.101:0/1182050751 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v9) ==== 76+0+289 (secure 0 0 0) 0x7ff638014070 con 0x7ff650104f70 2026-03-09T15:49:12.286 INFO:teuthology.orchestra.run.vm01.stdout:# minimal ceph.conf for 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:12.287 INFO:teuthology.orchestra.run.vm01.stdout:[global] 2026-03-09T15:49:12.287 INFO:teuthology.orchestra.run.vm01.stdout: fsid = 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:49:12.287 INFO:teuthology.orchestra.run.vm01.stdout: mon_host = [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] 2026-03-09T15:49:12.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 -- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff61803ddf0 msgr2=0x7ff6180402b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:12.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff61803ddf0 0x7ff6180402b0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7ff640002fd0 tx=0x7ff640002f20 comp rx=0 tx=0).stop 2026-03-09T15:49:12.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 -- 192.168.123.101:0/1182050751 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff650104f70 msgr2=0x7ff650077600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:12.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff650104f70 0x7ff650077600 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7ff6380027e0 tx=0x7ff638002cb0 comp rx=0 tx=0).stop 2026-03-09T15:49:12.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 -- 192.168.123.101:0/1182050751 shutdown_connections 2026-03-09T15:49:12.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff65010a0a0 0x7ff650078080 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:12.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff61803ddf0 0x7ff6180402b0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:12.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 --2- 192.168.123.101:0/1182050751 >> 
[v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff650105920 0x7ff650077b40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:12.290 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 --2- 192.168.123.101:0/1182050751 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff650104f70 0x7ff650077600 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:12.290 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 -- 192.168.123.101:0/1182050751 >> 192.168.123.101:0/1182050751 conn(0x7ff650100a70 msgr2=0x7ff650101f00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:49:12.290 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 -- 192.168.123.101:0/1182050751 shutdown_connections 2026-03-09T15:49:12.290 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:12.282+0000 7ff657a1d640 1 -- 192.168.123.101:0/1182050751 wait complete. 2026-03-09T15:49:12.298 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:12 vm01 bash[20728]: cluster 2026-03-09T15:49:11.049373+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:12.298 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:12 vm01 bash[20728]: cluster 2026-03-09T15:49:11.049373+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:12.299 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:12 vm01 bash[28152]: cluster 2026-03-09T15:49:11.049373+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:12.299 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:12 vm01 bash[28152]: cluster 2026-03-09T15:49:11.049373+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:12.347 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
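For reference, the "Distributing (final) config and client.admin keyring..." step that follows simply streams the minimal ceph.conf and the admin keyring into place on every test node; the log below shows the receiving `sudo dd of=/etc/ceph/...` commands on vm01 and vm09. A minimal sketch of the same step, assuming the two files have already been fetched from the bootstrap host into local files named conf.min and admin.keyring (hypothetical names) and that the nodes accept SSH as the ubuntu user (an assumption, not stated in the log):

  # Hedged sketch only; mirrors the `sudo dd of=...` commands visible in the log.
  # conf.min, admin.keyring and the ubuntu@ login are assumptions.
  for host in vm01 vm09; do
      ssh ubuntu@"$host" sudo dd of=/etc/ceph/ceph.conf                 < conf.min
      ssh ubuntu@"$host" sudo dd of=/etc/ceph/ceph.client.admin.keyring < admin.keyring
  done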
2026-03-09T15:49:12.347 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:49:12.347 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T15:49:12.355 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:49:12.356 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:49:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:12 vm09 bash[22983]: cluster 2026-03-09T15:49:11.049373+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:12 vm09 bash[22983]: cluster 2026-03-09T15:49:11.049373+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:12.407 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:49:12.407 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T15:49:12.414 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:49:12.414 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:49:12.461 INFO:tasks.cephadm:Adding mgr.y on vm01 2026-03-09T15:49:12.461 INFO:tasks.cephadm:Adding mgr.x on vm09 2026-03-09T15:49:12.461 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch apply mgr '2;vm01=y;vm09=x' 2026-03-09T15:49:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:13 vm09 bash[22983]: audit 2026-03-09T15:49:12.286589+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 192.168.123.101:0/1182050751' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:13 vm09 bash[22983]: audit 2026-03-09T15:49:12.286589+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 192.168.123.101:0/1182050751' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:13.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:13 vm01 bash[28152]: audit 2026-03-09T15:49:12.286589+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 192.168.123.101:0/1182050751' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:13.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:13 vm01 bash[28152]: audit 2026-03-09T15:49:12.286589+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 192.168.123.101:0/1182050751' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:13.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:13 vm01 bash[20728]: audit 2026-03-09T15:49:12.286589+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 192.168.123.101:0/1182050751' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:13.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:13 vm01 bash[20728]: audit 2026-03-09T15:49:12.286589+0000 mon.b (mon.1) 4 : audit [DBG] from='client.? 
192.168.123.101:0/1182050751' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:14 vm09 bash[22983]: cluster 2026-03-09T15:49:13.049535+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:14 vm09 bash[22983]: cluster 2026-03-09T15:49:13.049535+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:14.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:14 vm01 bash[28152]: cluster 2026-03-09T15:49:13.049535+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:14.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:14 vm01 bash[28152]: cluster 2026-03-09T15:49:13.049535+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:14.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:14 vm01 bash[20728]: cluster 2026-03-09T15:49:13.049535+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:14.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:14 vm01 bash[20728]: cluster 2026-03-09T15:49:13.049535+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:16.109 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:49:16.252 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/1699083559 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5fc4105f80 msgr2=0x7f5fc4106400 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:16.253 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 --2- 192.168.123.109:0/1699083559 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5fc4105f80 0x7f5fc4106400 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f5fb4009a30 tx=0x7f5fb402f240 comp rx=0 tx=0).stop 2026-03-09T15:49:16.253 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/1699083559 shutdown_connections 2026-03-09T15:49:16.253 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 --2- 192.168.123.109:0/1699083559 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5fc4106940 0x7f5fc410d1d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:16.253 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 --2- 192.168.123.109:0/1699083559 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5fc4105f80 0x7f5fc4106400 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:16.253 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 --2- 192.168.123.109:0/1699083559 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5fc4104d80 0x7f5fc4105180 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:16.253 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/1699083559 >> 
192.168.123.109:0/1699083559 conn(0x7f5fc4100510 msgr2=0x7f5fc4102950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:49:16.253 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/1699083559 shutdown_connections 2026-03-09T15:49:16.253 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/1699083559 wait complete. 2026-03-09T15:49:16.253 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 Processor -- start 2026-03-09T15:49:16.253 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 -- start start 2026-03-09T15:49:16.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5fc4104d80 0x7f5fc419c4d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:16.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5fc4105f80 0x7f5fc419ca10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:16.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5fc4106940 0x7f5fc41a3a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:16.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f5fc410fd40 con 0x7f5fc4104d80 2026-03-09T15:49:16.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f5fc410fbc0 con 0x7f5fc4105f80 2026-03-09T15:49:16.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fcaeb4640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f5fc410fec0 con 0x7f5fc4106940 2026-03-09T15:49:16.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fc8c29640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5fc4104d80 0x7f5fc419c4d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:16.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fc8c29640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5fc4104d80 0x7f5fc419c4d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.109:39196/0 (socket says 192.168.123.109:39196) 2026-03-09T15:49:16.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.247+0000 7f5fc8c29640 1 -- 192.168.123.109:0/2746359775 learned_addr learned my addr 192.168.123.109:0/2746359775 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:49:16.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc942a640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5fc4106940 0x7f5fc41a3a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp 
rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:16.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc3fff640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5fc4105f80 0x7f5fc419ca10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:16.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc8c29640 1 -- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5fc4106940 msgr2=0x7f5fc41a3a90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:16.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc8c29640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5fc4106940 0x7f5fc41a3a90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:16.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc8c29640 1 -- 192.168.123.109:0/2746359775 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5fc4105f80 msgr2=0x7f5fc419ca10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:16.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc8c29640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5fc4105f80 0x7f5fc419ca10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:16.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc8c29640 1 -- 192.168.123.109:0/2746359775 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5fc41a4190 con 0x7f5fc4104d80 2026-03-09T15:49:16.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc3fff640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5fc4105f80 0x7f5fc419ca10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:49:16.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc8c29640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5fc4104d80 0x7f5fc419c4d0 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f5fb000b570 tx=0x7f5fb000ba40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:49:16.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc1ffb640 1 -- 192.168.123.109:0/2746359775 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5fb0013020 con 0x7f5fc4104d80 2026-03-09T15:49:16.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/2746359775 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f5fc41a4480 con 0x7f5fc4104d80 2026-03-09T15:49:16.256 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/2746359775 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5fc41a4a90 con 0x7f5fc4104d80 2026-03-09T15:49:16.256 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc1ffb640 1 -- 192.168.123.109:0/2746359775 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f5fb0004480 con 0x7f5fc4104d80 2026-03-09T15:49:16.257 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc1ffb640 1 -- 192.168.123.109:0/2746359775 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5fb000f9f0 con 0x7f5fc4104d80 2026-03-09T15:49:16.257 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/2746359775 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5fc4105180 con 0x7f5fc4104d80 2026-03-09T15:49:16.257 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc1ffb640 1 -- 192.168.123.109:0/2746359775 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f5fb0020020 con 0x7f5fc4104d80 2026-03-09T15:49:16.257 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc1ffb640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5fa403dd60 0x7f5fa4040220 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:16.258 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.251+0000 7f5fc1ffb640 1 -- 192.168.123.109:0/2746359775 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7f5fb00525d0 con 0x7f5fc4104d80 2026-03-09T15:49:16.260 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.255+0000 7f5fc3fff640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5fa403dd60 0x7f5fa4040220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:16.260 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.255+0000 7f5fc1ffb640 1 -- 192.168.123.109:0/2746359775 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 
72+0+195034 (secure 0 0 0) 0x7f5fb00173b0 con 0x7f5fc4104d80 2026-03-09T15:49:16.261 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.255+0000 7f5fc3fff640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5fa403dd60 0x7f5fa4040220 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f5fb4009950 tx=0x7f5fb40023d0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:49:16.358 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.351+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/2746359775 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=y;vm09=x", "target": ["mon-mgr", ""]}) -- 0x7f5fc40630c0 con 0x7f5fa403dd60 2026-03-09T15:49:16.367 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fc1ffb640 1 -- 192.168.123.109:0/2746359775 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f5fc40630c0 con 0x7f5fa403dd60 2026-03-09T15:49:16.367 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mgr update... 2026-03-09T15:49:16.370 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5fa403dd60 msgr2=0x7f5fa4040220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:16.370 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5fa403dd60 0x7f5fa4040220 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f5fb4009950 tx=0x7f5fb40023d0 comp rx=0 tx=0).stop 2026-03-09T15:49:16.370 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5fc4104d80 msgr2=0x7f5fc419c4d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:16.370 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5fc4104d80 0x7f5fc419c4d0 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f5fb000b570 tx=0x7f5fb000ba40 comp rx=0 tx=0).stop 2026-03-09T15:49:16.371 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/2746359775 shutdown_connections 2026-03-09T15:49:16.371 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5fa403dd60 0x7f5fa4040220 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:16.371 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5fc4106940 0x7f5fc41a3a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:16.371 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 --2- 192.168.123.109:0/2746359775 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5fc4105f80 0x7f5fc419ca10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:16.371 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 --2- 192.168.123.109:0/2746359775 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5fc4104d80 0x7f5fc419c4d0 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:16.371 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/2746359775 >> 192.168.123.109:0/2746359775 conn(0x7f5fc4100510 msgr2=0x7f5fc4102070 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:49:16.371 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/2746359775 shutdown_connections 2026-03-09T15:49:16.371 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:49:16.367+0000 7f5fcaeb4640 1 -- 192.168.123.109:0/2746359775 wait complete. 2026-03-09T15:49:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:16 vm09 bash[22983]: cluster 2026-03-09T15:49:15.049717+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:16 vm09 bash[22983]: cluster 2026-03-09T15:49:15.049717+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:16.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:16 vm01 bash[28152]: cluster 2026-03-09T15:49:15.049717+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:16.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:16 vm01 bash[28152]: cluster 2026-03-09T15:49:15.049717+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:16.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:16 vm01 bash[20728]: cluster 2026-03-09T15:49:15.049717+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:16.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:16 vm01 bash[20728]: cluster 2026-03-09T15:49:15.049717+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:16.450 DEBUG:teuthology.orchestra.run.vm09:mgr.x> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.x.service 2026-03-09T15:49:16.451 INFO:tasks.cephadm:Deploying OSDs... 2026-03-09T15:49:16.451 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:49:16.451 DEBUG:teuthology.orchestra.run.vm01:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T15:49:16.454 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T15:49:16.454 DEBUG:teuthology.orchestra.run.vm01:> ls /dev/[sv]d? 
2026-03-09T15:49:16.498 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vda 2026-03-09T15:49:16.498 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdb 2026-03-09T15:49:16.498 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdc 2026-03-09T15:49:16.498 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdd 2026-03-09T15:49:16.498 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vde 2026-03-09T15:49:16.498 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T15:49:16.498 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T15:49:16.498 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdb 2026-03-09T15:49:16.543 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdb 2026-03-09T15:49:16.543 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T15:49:16.543 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T15:49:16.543 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T15:49:16.543 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-09 15:42:38.588290586 +0000 2026-03-09T15:49:16.543 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-09 15:42:37.540290586 +0000 2026-03-09T15:49:16.543 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-09 15:42:37.540290586 +0000 2026-03-09T15:49:16.543 INFO:teuthology.orchestra.run.vm01.stdout: Birth: - 2026-03-09T15:49:16.543 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T15:49:16.591 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-09T15:49:16.591 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-09T15:49:16.591 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000126086 s, 4.1 MB/s 2026-03-09T15:49:16.591 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T15:49:16.640 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdc 2026-03-09T15:49:16.667 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:16 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
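The mgr placement applied above with `ceph orch apply mgr '2;vm01=y;vm09=x'` (and echoed by the cephadm log line "Saving service mgr spec with placement vm01=y;vm09=x;count:2") can also be expressed as a service spec file. A minimal sketch, assuming it is run on a node that has the ceph CLI and the admin keyring and using a hypothetical file name mgr.yaml; the =y/=x suffixes pin the daemon ids to match the job's roles:

  # Hedged sketch only; intended as an equivalent of the one-liner the test
  # runs via `cephadm shell -- ceph orch apply mgr '2;vm01=y;vm09=x'`.
  cat > mgr.yaml <<'EOF'
  service_type: mgr
  placement:
    count: 2
    hosts:
      - vm01=y    # host=daemon_id form, as in the placement string above
      - vm09=x
  EOF
  sudo ceph orch apply -i mgr.yaml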
2026-03-09T15:49:16.686 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdc 2026-03-09T15:49:16.686 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T15:49:16.686 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T15:49:16.686 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T15:49:16.686 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-09 15:42:38.604290586 +0000 2026-03-09T15:49:16.686 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-09 15:42:37.556290586 +0000 2026-03-09T15:49:16.686 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-09 15:42:37.556290586 +0000 2026-03-09T15:49:16.686 INFO:teuthology.orchestra.run.vm01.stdout: Birth: - 2026-03-09T15:49:16.686 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T15:49:16.734 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-09T15:49:16.734 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-09T15:49:16.734 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000157935 s, 3.2 MB/s 2026-03-09T15:49:16.734 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T15:49:16.779 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdd 2026-03-09T15:49:16.822 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdd 2026-03-09T15:49:16.822 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T15:49:16.822 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T15:49:16.822 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T15:49:16.822 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-09 15:42:38.584290586 +0000 2026-03-09T15:49:16.822 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-09 15:42:37.552290586 +0000 2026-03-09T15:49:16.822 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-09 15:42:37.552290586 +0000 2026-03-09T15:49:16.822 INFO:teuthology.orchestra.run.vm01.stdout: Birth: - 2026-03-09T15:49:16.822 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T15:49:16.869 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-09T15:49:16.870 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-09T15:49:16.870 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000189595 s, 2.7 MB/s 2026-03-09T15:49:16.870 DEBUG:teuthology.orchestra.run.vm01:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T15:49:16.915 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vde 2026-03-09T15:49:16.957 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vde 2026-03-09T15:49:16.957 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T15:49:16.957 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T15:49:16.957 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T15:49:16.957 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-09 15:42:38.600290586 +0000 2026-03-09T15:49:16.957 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-09 15:42:37.520290586 +0000 2026-03-09T15:49:16.957 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-09 15:42:37.520290586 +0000 2026-03-09T15:49:16.957 INFO:teuthology.orchestra.run.vm01.stdout: Birth: - 2026-03-09T15:49:16.958 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T15:49:17.005 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-09T15:49:17.005 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-09T15:49:17.005 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000182201 s, 2.8 MB/s 2026-03-09T15:49:17.005 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T15:49:17.051 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:49:17.051 DEBUG:teuthology.orchestra.run.vm09:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T15:49:17.056 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T15:49:17.056 DEBUG:teuthology.orchestra.run.vm09:> ls /dev/[sv]d? 
2026-03-09T15:49:17.101 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vda 2026-03-09T15:49:17.102 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdb 2026-03-09T15:49:17.102 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdc 2026-03-09T15:49:17.102 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdd 2026-03-09T15:49:17.102 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vde 2026-03-09T15:49:17.102 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T15:49:17.102 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T15:49:17.102 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdb 2026-03-09T15:49:17.145 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdb 2026-03-09T15:49:17.145 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T15:49:17.145 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T15:49:17.145 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T15:49:17.145 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 15:42:07.569117708 +0000 2026-03-09T15:49:17.145 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 15:42:06.461117708 +0000 2026-03-09T15:49:17.145 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 15:42:06.461117708 +0000 2026-03-09T15:49:17.145 INFO:teuthology.orchestra.run.vm09.stdout: Birth: - 2026-03-09T15:49:17.145 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T15:49:17.193 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T15:49:17.193 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T15:49:17.193 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000143168 s, 3.6 MB/s 2026-03-09T15:49:17.194 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T15:49:17.231 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:16 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:17.231 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:17.231 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:17 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
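The "Deploying OSDs..." step above first tries to read a /scratch_devs list (which fails here with exit status 1), then falls back to enumerating /dev/[sv]d?, drops the root device, and sanity-checks each remaining disk before it is offered to cephadm: stat it, read one sector, and confirm it is not mounted. A minimal sketch of that per-device check, with the root device hard-coded as an assumption (the real run detects it itself, as the "Removing root device" warning shows):

  # Hedged sketch only; reproduces the stat / dd / mount checks visible in the log.
  root_dev=/dev/vda            # assumption; teuthology detects this on its own
  for dev in /dev/[sv]d?; do
      [ "$dev" = "$root_dev" ] && continue
      stat "$dev"                                     # must be a block special file
      sudo dd if="$dev" of=/dev/null count=1          # must be readable
      ! mount | grep -v devtmpfs | grep -q "$dev"     # must not be mounted anywhere
  done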
2026-03-09T15:49:17.231 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:17 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:17.231 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:17 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:17.231 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:17 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:17.232 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:17 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:17.236 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdc 2026-03-09T15:49:17.281 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdc 2026-03-09T15:49:17.281 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T15:49:17.281 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T15:49:17.281 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T15:49:17.281 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 15:42:07.581117708 +0000 2026-03-09T15:49:17.281 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 15:42:06.461117708 +0000 2026-03-09T15:49:17.281 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 15:42:06.461117708 +0000 2026-03-09T15:49:17.281 INFO:teuthology.orchestra.run.vm09.stdout: Birth: - 2026-03-09T15:49:17.281 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T15:49:17.339 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T15:49:17.339 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T15:49:17.339 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000238917 s, 2.1 MB/s 2026-03-09T15:49:17.339 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T15:49:17.398 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdd 2026-03-09T15:49:17.458 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdd 2026-03-09T15:49:17.458 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T15:49:17.458 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T15:49:17.458 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T15:49:17.459 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 15:42:07.569117708 +0000 2026-03-09T15:49:17.459 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 15:42:06.409117708 +0000 2026-03-09T15:49:17.459 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 15:42:06.409117708 +0000 2026-03-09T15:49:17.459 INFO:teuthology.orchestra.run.vm09.stdout: Birth: - 2026-03-09T15:49:17.459 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:17 vm09 systemd[1]: Started Ceph mgr.x for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:17 vm09 bash[23804]: debug 2026-03-09T15:49:17.435+0000 7f5fed564140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:17 vm09 bash[23804]: debug 2026-03-09T15:49:17.475+0000 7f5fed564140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.357173+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.14196 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=y;vm09=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.357173+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.14196 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=y;vm09=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: cephadm 2026-03-09T15:49:16.358330+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm01=y;vm09=x;count:2 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: cephadm 2026-03-09T15:49:16.358330+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm01=y;vm09=x;count:2 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.362798+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.362798+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.365249+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:17.480 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.365249+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.366937+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.366937+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.367862+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.367862+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.372768+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.372768+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.373660+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.373660+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:49:17.480 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.375875+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.375875+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.376986+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.376986+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.378153+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:16.378153+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: cephadm 2026-03-09T15:49:16.378740+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm09 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: cephadm 2026-03-09T15:49:16.378740+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm09 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:17.271571+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:17.271571+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:17.280227+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:17.280227+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:17.286373+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:17.286373+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:17.291834+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:17.291834+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:17.304991+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:17.481 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:17 vm09 bash[22983]: audit 2026-03-09T15:49:17.304991+0000 mon.a 
(mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:17.496 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T15:49:17.496 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T15:49:17.496 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.00147944 s, 346 kB/s 2026-03-09T15:49:17.497 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T15:49:17.555 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vde 2026-03-09T15:49:17.603 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vde 2026-03-09T15:49:17.603 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T15:49:17.603 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T15:49:17.603 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T15:49:17.603 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 15:42:07.577117708 +0000 2026-03-09T15:49:17.603 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 15:42:06.421117708 +0000 2026-03-09T15:49:17.603 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 15:42:06.421117708 +0000 2026-03-09T15:49:17.603 INFO:teuthology.orchestra.run.vm09.stdout: Birth: - 2026-03-09T15:49:17.603 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T15:49:17.657 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T15:49:17.657 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T15:49:17.657 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.00026738 s, 1.9 MB/s 2026-03-09T15:49:17.657 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.357173+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.14196 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=y;vm09=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.357173+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.14196 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=y;vm09=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: cephadm 2026-03-09T15:49:16.358330+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm01=y;vm09=x;count:2 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: cephadm 2026-03-09T15:49:16.358330+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm01=y;vm09=x;count:2 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.362798+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.362798+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.365249+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.365249+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.366937+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.366937+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.367862+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.367862+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.372768+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.372768+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.373660+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.373660+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.375875+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.375875+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.376986+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.376986+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.378153+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:16.378153+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: cephadm 2026-03-09T15:49:16.378740+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm09 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: cephadm 2026-03-09T15:49:16.378740+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm09 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:17.271571+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 
vm01 bash[28152]: audit 2026-03-09T15:49:17.271571+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:17.280227+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:17.280227+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:17.286373+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:17.286373+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:17.291834+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:17.291834+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:17.304991+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:17 vm01 bash[28152]: audit 2026-03-09T15:49:17.304991+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.357173+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.14196 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=y;vm09=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.357173+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.14196 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=y;vm09=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: cephadm 2026-03-09T15:49:16.358330+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm01=y;vm09=x;count:2 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: cephadm 2026-03-09T15:49:16.358330+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm01=y;vm09=x;count:2 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.362798+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: 
audit 2026-03-09T15:49:16.362798+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.365249+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.365249+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.366937+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.366937+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.367862+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.367862+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.372768+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.372768+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.373660+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.373660+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.375875+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.375875+0000 mon.a (mon.0) 
263 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.376986+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.376986+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.378153+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:16.378153+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: cephadm 2026-03-09T15:49:16.378740+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm09 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: cephadm 2026-03-09T15:49:16.378740+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm09 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:17.271571+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:17.271571+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:17.280227+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:17.280227+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:17.286373+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:17.286373+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:17.291834+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:17.291834+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:17.304991+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:49:17.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:17 vm01 bash[20728]: audit 2026-03-09T15:49:17.304991+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:49:17.707 INFO:tasks.cephadm:Deploying osd.0 on vm01 with /dev/vde...
2026-03-09T15:49:17.707 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- lvm zap /dev/vde
2026-03-09T15:49:17.883 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:17 vm09 bash[23804]: debug 2026-03-09T15:49:17.607+0000 7f5fed564140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T15:49:18.383 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:17 vm09 bash[23804]: debug 2026-03-09T15:49:17.935+0000 7f5fed564140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T15:49:18.669 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:18 vm09 bash[22983]: cluster 2026-03-09T15:49:17.049883+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T15:49:18.669 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:18 vm09 bash[22983]: cluster 2026-03-09T15:49:17.049883+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T15:49:18.669 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:18 vm09 bash[23804]: debug 2026-03-09T15:49:18.439+0000 7f5fed564140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T15:49:18.669 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:18 vm09 bash[23804]: debug 2026-03-09T15:49:18.531+0000 7f5fed564140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T15:49:18.669 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:18 vm09 bash[23804]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T15:49:18.669 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:18 vm09 bash[23804]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T15:49:18.669 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:18 vm09 bash[23804]: from numpy import show_config as show_numpy_config 2026-03-09T15:49:18.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:18 vm01 bash[28152]: cluster 2026-03-09T15:49:17.049883+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:18.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:18 vm01 bash[28152]: cluster 2026-03-09T15:49:17.049883+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:18.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:18 vm01 bash[20728]: cluster 2026-03-09T15:49:17.049883+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:18.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:18 vm01 bash[20728]: cluster 2026-03-09T15:49:17.049883+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:18.950 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:18 vm09 bash[23804]: debug 2026-03-09T15:49:18.671+0000 7f5fed564140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T15:49:18.950 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:18 vm09 bash[23804]: debug 2026-03-09T15:49:18.819+0000 7f5fed564140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T15:49:18.950 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:18 vm09 bash[23804]: debug 2026-03-09T15:49:18.863+0000 7f5fed564140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T15:49:18.950 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:18 vm09 bash[23804]: debug 2026-03-09T15:49:18.903+0000 7f5fed564140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T15:49:19.383 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:18 vm09 bash[23804]: debug 2026-03-09T15:49:18.943+0000 7f5fed564140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T15:49:19.383 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:19 vm09 bash[23804]: debug 2026-03-09T15:49:18.995+0000 7f5fed564140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T15:49:19.732 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:19 vm09 bash[23804]: debug 2026-03-09T15:49:19.455+0000 7f5fed564140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T15:49:19.733 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:19 vm09 bash[23804]: debug 2026-03-09T15:49:19.495+0000 7f5fed564140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T15:49:19.733 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:19 vm09 bash[23804]: debug 2026-03-09T15:49:19.535+0000 7f5fed564140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T15:49:19.733 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:19 vm09 bash[23804]: debug 2026-03-09T15:49:19.683+0000 7f5fed564140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T15:49:20.127 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:19 vm09 bash[23804]: debug 2026-03-09T15:49:19.727+0000 7f5fed564140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T15:49:20.127 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:19 vm09 bash[23804]: debug 2026-03-09T15:49:19.767+0000 7f5fed564140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T15:49:20.127 
INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:19 vm09 bash[23804]: debug 2026-03-09T15:49:19.899+0000 7f5fed564140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:49:20.383 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:20 vm09 bash[23804]: debug 2026-03-09T15:49:20.119+0000 7f5fed564140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T15:49:20.383 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:20 vm09 bash[23804]: debug 2026-03-09T15:49:20.327+0000 7f5fed564140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T15:49:20.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:20 vm01 bash[28152]: cluster 2026-03-09T15:49:19.050048+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:20.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:20 vm01 bash[28152]: cluster 2026-03-09T15:49:19.050048+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:20.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:20 vm01 bash[20728]: cluster 2026-03-09T15:49:19.050048+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:20.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:20 vm01 bash[20728]: cluster 2026-03-09T15:49:19.050048+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:20 vm09 bash[22983]: cluster 2026-03-09T15:49:19.050048+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:20 vm09 bash[22983]: cluster 2026-03-09T15:49:19.050048+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:20.883 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:20 vm09 bash[23804]: debug 2026-03-09T15:49:20.383+0000 7f5fed564140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T15:49:20.883 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:20 vm09 bash[23804]: debug 2026-03-09T15:49:20.443+0000 7f5fed564140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T15:49:20.883 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:20 vm09 bash[23804]: debug 2026-03-09T15:49:20.631+0000 7f5fed564140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:49:21.383 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:49:20 vm09 bash[23804]: debug 2026-03-09T15:49:20.971+0000 7f5fed564140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:21 vm01 bash[28152]: cluster 2026-03-09T15:49:20.982660+0000 mon.a (mon.0) 271 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:21 vm01 bash[28152]: cluster 2026-03-09T15:49:20.982660+0000 mon.a (mon.0) 271 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:21 vm01 bash[28152]: audit 2026-03-09T15:49:20.988473+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 
192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:21 vm01 bash[28152]: audit 2026-03-09T15:49:20.988473+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:21 vm01 bash[28152]: audit 2026-03-09T15:49:20.989425+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:21 vm01 bash[28152]: audit 2026-03-09T15:49:20.989425+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:21 vm01 bash[28152]: audit 2026-03-09T15:49:20.990589+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:21 vm01 bash[28152]: audit 2026-03-09T15:49:20.990589+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:21 vm01 bash[28152]: audit 2026-03-09T15:49:20.991503+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:21 vm01 bash[28152]: audit 2026-03-09T15:49:20.991503+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:21 vm01 bash[20728]: cluster 2026-03-09T15:49:20.982660+0000 mon.a (mon.0) 271 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:21 vm01 bash[20728]: cluster 2026-03-09T15:49:20.982660+0000 mon.a (mon.0) 271 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:21 vm01 bash[20728]: audit 2026-03-09T15:49:20.988473+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:21 vm01 bash[20728]: audit 2026-03-09T15:49:20.988473+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:21 vm01 bash[20728]: audit 2026-03-09T15:49:20.989425+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 
192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:21 vm01 bash[20728]: audit 2026-03-09T15:49:20.989425+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:21 vm01 bash[20728]: audit 2026-03-09T15:49:20.990589+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:21 vm01 bash[20728]: audit 2026-03-09T15:49:20.990589+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:21 vm01 bash[20728]: audit 2026-03-09T15:49:20.991503+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:49:21.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:21 vm01 bash[20728]: audit 2026-03-09T15:49:20.991503+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:49:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:21 vm09 bash[22983]: cluster 2026-03-09T15:49:20.982660+0000 mon.a (mon.0) 271 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:49:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:21 vm09 bash[22983]: cluster 2026-03-09T15:49:20.982660+0000 mon.a (mon.0) 271 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:49:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:21 vm09 bash[22983]: audit 2026-03-09T15:49:20.988473+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:49:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:21 vm09 bash[22983]: audit 2026-03-09T15:49:20.988473+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:49:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:21 vm09 bash[22983]: audit 2026-03-09T15:49:20.989425+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:49:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:21 vm09 bash[22983]: audit 2026-03-09T15:49:20.989425+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:49:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:21 vm09 bash[22983]: audit 2026-03-09T15:49:20.990589+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 
192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:49:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:21 vm09 bash[22983]: audit 2026-03-09T15:49:20.990589+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:49:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:21 vm09 bash[22983]: audit 2026-03-09T15:49:20.991503+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:49:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:21 vm09 bash[22983]: audit 2026-03-09T15:49:20.991503+0000 mon.b (mon.1) 8 : audit [DBG] from='mgr.? 192.168.123.109:0/1308336117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:49:22.327 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: cluster 2026-03-09T15:49:21.050215+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: cluster 2026-03-09T15:49:21.050215+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: cluster 2026-03-09T15:49:21.423063+0000 mon.a (mon.0) 272 : cluster [DBG] mgrmap e13: y(active, since 58s), standbys: x 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: cluster 2026-03-09T15:49:21.423063+0000 mon.a (mon.0) 272 : cluster [DBG] mgrmap e13: y(active, since 58s), standbys: x 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:21.423217+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:21.423217+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:21.591808+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:21.591808+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.340600+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.340600+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.379865+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.379865+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.380813+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.380813+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.382292+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.382292+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.392701+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.392701+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.403849+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.403849+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.404325+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.404325+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.404753+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:22 vm01 bash[20728]: audit 2026-03-09T15:49:22.404753+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: cluster 2026-03-09T15:49:21.050215+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: cluster 2026-03-09T15:49:21.050215+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: cluster 2026-03-09T15:49:21.423063+0000 mon.a (mon.0) 272 : cluster [DBG] mgrmap e13: y(active, since 58s), standbys: x 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: cluster 2026-03-09T15:49:21.423063+0000 mon.a (mon.0) 272 : cluster [DBG] mgrmap e13: y(active, since 58s), standbys: x 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:21.423217+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:21.423217+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:21.591808+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:21.591808+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.340600+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.340600+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.594 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.379865+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.379865+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.380813+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 
09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.380813+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.382292+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.382292+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.392701+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.392701+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.403849+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.403849+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.404325+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.404325+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.404753+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:22.595 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:22 vm01 bash[28152]: audit 2026-03-09T15:49:22.404753+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: cluster 2026-03-09T15:49:21.050215+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: cluster 2026-03-09T15:49:21.050215+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: cluster 2026-03-09T15:49:21.423063+0000 mon.a (mon.0) 272 : cluster [DBG] mgrmap e13: y(active, since 58s), standbys: x 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: cluster 2026-03-09T15:49:21.423063+0000 mon.a (mon.0) 272 : cluster [DBG] mgrmap e13: y(active, since 58s), standbys: x 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:21.423217+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:21.423217+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:21.591808+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:21.591808+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.340600+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.340600+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.379865+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.379865+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.380813+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.380813+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.382292+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.382292+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.392701+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.392701+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.403849+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.403849+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.404325+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.404325+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.404753+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:49:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:22 vm09 bash[22983]: audit 2026-03-09T15:49:22.404753+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:49:23.314 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T15:49:23.342 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch daemon add osd vm01:/dev/vde
2026-03-09T15:49:23.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:23 vm01 bash[28152]: cephadm 2026-03-09T15:49:22.403694+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)...
2026-03-09T15:49:23.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:23 vm01 bash[28152]: cephadm 2026-03-09T15:49:22.403694+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)...
2026-03-09T15:49:23.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:23 vm01 bash[28152]: cephadm 2026-03-09T15:49:22.405235+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm01 2026-03-09T15:49:23.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:23 vm01 bash[28152]: cephadm 2026-03-09T15:49:22.405235+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm01 2026-03-09T15:49:23.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:23 vm01 bash[20728]: cephadm 2026-03-09T15:49:22.403694+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T15:49:23.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:23 vm01 bash[20728]: cephadm 2026-03-09T15:49:22.403694+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T15:49:23.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:23 vm01 bash[20728]: cephadm 2026-03-09T15:49:22.405235+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm01 2026-03-09T15:49:23.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:23 vm01 bash[20728]: cephadm 2026-03-09T15:49:22.405235+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm01 2026-03-09T15:49:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:23 vm09 bash[22983]: cephadm 2026-03-09T15:49:22.403694+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T15:49:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:23 vm09 bash[22983]: cephadm 2026-03-09T15:49:22.403694+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T15:49:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:23 vm09 bash[22983]: cephadm 2026-03-09T15:49:22.405235+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm01 2026-03-09T15:49:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:23 vm09 bash[22983]: cephadm 2026-03-09T15:49:22.405235+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm01 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: cluster 2026-03-09T15:49:23.050418+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: cluster 2026-03-09T15:49:23.050418+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.529193+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.529193+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.535272+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.535272+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.683 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.536342+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.536342+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.537489+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.537489+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.537980+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.537980+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.542227+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:24 vm01 bash[28152]: audit 2026-03-09T15:49:23.542227+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: cluster 2026-03-09T15:49:23.050418+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: cluster 2026-03-09T15:49:23.050418+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.529193+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.529193+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.535272+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.535272+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.683 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.536342+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.536342+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.537489+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.537489+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.537980+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:24.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.537980+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:24.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.542227+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:24 vm01 bash[20728]: audit 2026-03-09T15:49:23.542227+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: cluster 2026-03-09T15:49:23.050418+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: cluster 2026-03-09T15:49:23.050418+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.529193+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.529193+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.535272+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.535272+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.883 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.536342+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.536342+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.537489+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.537489+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.537980+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.537980+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.542227+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:24 vm09 bash[22983]: audit 2026-03-09T15:49:23.542227+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:26.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:26 vm09 bash[22983]: cluster 2026-03-09T15:49:25.050621+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:26 vm09 bash[22983]: cluster 2026-03-09T15:49:25.050621+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:26.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:26 vm01 bash[28152]: cluster 2026-03-09T15:49:25.050621+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:26.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:26 vm01 bash[28152]: cluster 2026-03-09T15:49:25.050621+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:26.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:26 vm01 bash[20728]: cluster 2026-03-09T15:49:25.050621+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:26.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:26 vm01 bash[20728]: cluster 2026-03-09T15:49:25.050621+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-09T15:49:28.010 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:49:28.193 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 -- 192.168.123.101:0/2990404125 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d7006bcd0 msgr2=0x7f4d70107ab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:28.193 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 --2- 192.168.123.101:0/2990404125 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d7006bcd0 0x7f4d70107ab0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f4d64009a30 tx=0x7f4d6402f240 comp rx=0 tx=0).stop 2026-03-09T15:49:28.193 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 -- 192.168.123.101:0/2990404125 shutdown_connections 2026-03-09T15:49:28.193 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 --2- 192.168.123.101:0/2990404125 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4d7010aa00 0x7f4d7010ce90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:28.193 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 --2- 192.168.123.101:0/2990404125 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d70107ff0 0x7f4d7010a3e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:28.193 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 --2- 192.168.123.101:0/2990404125 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d7006bcd0 0x7f4d70107ab0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:28.193 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 -- 192.168.123.101:0/2990404125 >> 192.168.123.101:0/2990404125 conn(0x7f4d700fd120 msgr2=0x7f4d700ff560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:49:28.193 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 -- 192.168.123.101:0/2990404125 shutdown_connections 2026-03-09T15:49:28.193 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 -- 192.168.123.101:0/2990404125 wait complete. 
2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 Processor -- start 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 -- start start 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d7006bcd0 0x7f4d7019c450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d70107ff0 0x7f4d7019c990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d6e575640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d7006bcd0 0x7f4d7019c450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4d7010aa00 0x7f4d701a3a10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f4d7010fc90 con 0x7f4d70107ff0 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f4d7010fb10 con 0x7f4d7010aa00 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d74f97640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f4d7010fe10 con 0x7f4d7006bcd0 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d6e575640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d7006bcd0 0x7f4d7019c450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:58776/0 (socket says 192.168.123.101:58776) 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d6e575640 1 -- 192.168.123.101:0/64213169 learned_addr learned my addr 192.168.123.101:0/64213169 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d6e575640 1 -- 192.168.123.101:0/64213169 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4d7010aa00 msgr2=0x7f4d701a3a10 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d6e575640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4d7010aa00 0x7f4d701a3a10 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:28.194 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d6e575640 1 -- 192.168.123.101:0/64213169 
>> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d70107ff0 msgr2=0x7f4d7019c990 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:49:28.195 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d6dd74640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d70107ff0 0x7f4d7019c990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:28.195 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d6e575640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d70107ff0 0x7f4d7019c990 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:28.195 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d6e575640 1 -- 192.168.123.101:0/64213169 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4d701a4080 con 0x7f4d7006bcd0 2026-03-09T15:49:28.195 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d6e575640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d7006bcd0 0x7f4d7019c450 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f4d64009950 tx=0x7f4d64004290 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:49:28.195 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d6dd74640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d70107ff0 0x7f4d7019c990 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:49:28.195 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.186+0000 7f4d577fe640 1 -- 192.168.123.101:0/64213169 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4d64005020 con 0x7f4d7006bcd0 2026-03-09T15:49:28.195 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.190+0000 7f4d74f97640 1 -- 192.168.123.101:0/64213169 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4d701a4310 con 0x7f4d7006bcd0 2026-03-09T15:49:28.195 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.190+0000 7f4d74f97640 1 -- 192.168.123.101:0/64213169 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f4d701a47f0 con 0x7f4d7006bcd0 2026-03-09T15:49:28.196 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.190+0000 7f4d577fe640 1 -- 192.168.123.101:0/64213169 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f4d64005600 con 0x7f4d7006bcd0 2026-03-09T15:49:28.196 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.190+0000 7f4d577fe640 1 -- 192.168.123.101:0/64213169 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4d640387b0 con 0x7f4d7006bcd0 2026-03-09T15:49:28.196 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.190+0000 7f4d577fe640 1 -- 192.168.123.101:0/64213169 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 13) ==== 99979+0+0 (secure 0 0 0) 0x7f4d6404a020 con 0x7f4d7006bcd0 2026-03-09T15:49:28.197 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.190+0000 7f4d577fe640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f4d40077680 0x7f4d40079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:49:28.197 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.190+0000 7f4d577fe640 1 -- 192.168.123.101:0/64213169 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7f4d640bd0c0 con 0x7f4d7006bcd0 2026-03-09T15:49:28.197 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.190+0000 7f4d74f97640 1 -- 192.168.123.101:0/64213169 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4d34005180 con 0x7f4d7006bcd0 2026-03-09T15:49:28.200 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.190+0000 7f4d6dd74640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f4d40077680 0x7f4d40079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:49:28.200 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.190+0000 7f4d6dd74640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f4d40077680 0x7f4d40079b40 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f4d7019d970 tx=0x7f4d58005e90 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:49:28.200 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.194+0000 7f4d577fe640 1 -- 192.168.123.101:0/64213169 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 
(secure 0 0 0) 0x7f4d64088090 con 0x7f4d7006bcd0 2026-03-09T15:49:28.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:28.294+0000 7f4d74f97640 1 -- 192.168.123.101:0/64213169 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7f4d34002bf0 con 0x7f4d40077680 2026-03-09T15:49:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:28 vm09 bash[22983]: cluster 2026-03-09T15:49:27.050835+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:28 vm09 bash[22983]: cluster 2026-03-09T15:49:27.050835+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:28 vm09 bash[22983]: audit 2026-03-09T15:49:28.301742+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:49:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:28 vm09 bash[22983]: audit 2026-03-09T15:49:28.301742+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:49:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:28 vm09 bash[22983]: audit 2026-03-09T15:49:28.303412+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:49:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:28 vm09 bash[22983]: audit 2026-03-09T15:49:28.303412+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:49:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:28 vm09 bash[22983]: audit 2026-03-09T15:49:28.303914+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:28 vm09 bash[22983]: audit 2026-03-09T15:49:28.303914+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:28 vm01 bash[20728]: cluster 2026-03-09T15:49:27.050835+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:28 vm01 bash[20728]: cluster 2026-03-09T15:49:27.050835+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:28 vm01 bash[20728]: audit 2026-03-09T15:49:28.301742+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:28 vm01 bash[20728]: audit 
2026-03-09T15:49:28.301742+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:28 vm01 bash[20728]: audit 2026-03-09T15:49:28.303412+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:28 vm01 bash[20728]: audit 2026-03-09T15:49:28.303412+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:28 vm01 bash[20728]: audit 2026-03-09T15:49:28.303914+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:28 vm01 bash[20728]: audit 2026-03-09T15:49:28.303914+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:28 vm01 bash[28152]: cluster 2026-03-09T15:49:27.050835+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:28 vm01 bash[28152]: cluster 2026-03-09T15:49:27.050835+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:28 vm01 bash[28152]: audit 2026-03-09T15:49:28.301742+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:28 vm01 bash[28152]: audit 2026-03-09T15:49:28.301742+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:28 vm01 bash[28152]: audit 2026-03-09T15:49:28.303412+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:28 vm01 bash[28152]: audit 2026-03-09T15:49:28.303412+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:28 vm01 bash[28152]: audit 2026-03-09T15:49:28.303914+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:28.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:28 vm01 bash[28152]: audit 2026-03-09T15:49:28.303914+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:29.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:29 vm09 bash[22983]: audit 2026-03-09T15:49:28.300042+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24113 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:29.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:29 vm09 bash[22983]: audit 2026-03-09T15:49:28.300042+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24113 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:29.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:29 vm01 bash[28152]: audit 2026-03-09T15:49:28.300042+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24113 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:29.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:29 vm01 bash[28152]: audit 2026-03-09T15:49:28.300042+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24113 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:29.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:29 vm01 bash[20728]: audit 2026-03-09T15:49:28.300042+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24113 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:29.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:29 vm01 bash[20728]: audit 2026-03-09T15:49:28.300042+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24113 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:49:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:30 vm09 bash[22983]: cluster 2026-03-09T15:49:29.051344+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:30 vm09 bash[22983]: cluster 2026-03-09T15:49:29.051344+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:30.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:30 vm01 bash[28152]: cluster 2026-03-09T15:49:29.051344+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:30.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:30 vm01 bash[28152]: cluster 2026-03-09T15:49:29.051344+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:30.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:30 vm01 bash[20728]: cluster 2026-03-09T15:49:29.051344+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:30.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:30 vm01 bash[20728]: cluster 2026-03-09T15:49:29.051344+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:31 vm09 bash[22983]: cluster 2026-03-09T15:49:31.051564+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 
pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:31 vm09 bash[22983]: cluster 2026-03-09T15:49:31.051564+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:31 vm01 bash[28152]: cluster 2026-03-09T15:49:31.051564+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:31 vm01 bash[28152]: cluster 2026-03-09T15:49:31.051564+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:31.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:31 vm01 bash[20728]: cluster 2026-03-09T15:49:31.051564+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:31.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:31 vm01 bash[20728]: cluster 2026-03-09T15:49:31.051564+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:34 vm01 bash[20728]: cluster 2026-03-09T15:49:33.051770+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:34 vm01 bash[20728]: cluster 2026-03-09T15:49:33.051770+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:34 vm01 bash[20728]: audit 2026-03-09T15:49:33.796439+0000 mon.a (mon.0) 292 : audit [INF] from='client.? 192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]: dispatch 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:34 vm01 bash[20728]: audit 2026-03-09T15:49:33.796439+0000 mon.a (mon.0) 292 : audit [INF] from='client.? 192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]: dispatch 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:34 vm01 bash[20728]: audit 2026-03-09T15:49:33.799135+0000 mon.a (mon.0) 293 : audit [INF] from='client.? 192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]': finished 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:34 vm01 bash[20728]: audit 2026-03-09T15:49:33.799135+0000 mon.a (mon.0) 293 : audit [INF] from='client.? 
192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]': finished 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:34 vm01 bash[20728]: cluster 2026-03-09T15:49:33.802254+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:34 vm01 bash[20728]: cluster 2026-03-09T15:49:33.802254+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:34 vm01 bash[20728]: audit 2026-03-09T15:49:33.803382+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:34 vm01 bash[20728]: audit 2026-03-09T15:49:33.803382+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:34 vm01 bash[28152]: cluster 2026-03-09T15:49:33.051770+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:34 vm01 bash[28152]: cluster 2026-03-09T15:49:33.051770+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:34 vm01 bash[28152]: audit 2026-03-09T15:49:33.796439+0000 mon.a (mon.0) 292 : audit [INF] from='client.? 192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]: dispatch 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:34 vm01 bash[28152]: audit 2026-03-09T15:49:33.796439+0000 mon.a (mon.0) 292 : audit [INF] from='client.? 192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]: dispatch 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:34 vm01 bash[28152]: audit 2026-03-09T15:49:33.799135+0000 mon.a (mon.0) 293 : audit [INF] from='client.? 192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]': finished 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:34 vm01 bash[28152]: audit 2026-03-09T15:49:33.799135+0000 mon.a (mon.0) 293 : audit [INF] from='client.? 
192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]': finished 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:34 vm01 bash[28152]: cluster 2026-03-09T15:49:33.802254+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:34 vm01 bash[28152]: cluster 2026-03-09T15:49:33.802254+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:34 vm01 bash[28152]: audit 2026-03-09T15:49:33.803382+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:34.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:34 vm01 bash[28152]: audit 2026-03-09T15:49:33.803382+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:34 vm09 bash[22983]: cluster 2026-03-09T15:49:33.051770+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:34 vm09 bash[22983]: cluster 2026-03-09T15:49:33.051770+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:34 vm09 bash[22983]: audit 2026-03-09T15:49:33.796439+0000 mon.a (mon.0) 292 : audit [INF] from='client.? 192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]: dispatch 2026-03-09T15:49:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:34 vm09 bash[22983]: audit 2026-03-09T15:49:33.796439+0000 mon.a (mon.0) 292 : audit [INF] from='client.? 192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]: dispatch 2026-03-09T15:49:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:34 vm09 bash[22983]: audit 2026-03-09T15:49:33.799135+0000 mon.a (mon.0) 293 : audit [INF] from='client.? 192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]': finished 2026-03-09T15:49:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:34 vm09 bash[22983]: audit 2026-03-09T15:49:33.799135+0000 mon.a (mon.0) 293 : audit [INF] from='client.? 
192.168.123.101:0/3330211556' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "85259aee-a52d-45ab-8429-e3d0212392b7"}]': finished 2026-03-09T15:49:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:34 vm09 bash[22983]: cluster 2026-03-09T15:49:33.802254+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T15:49:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:34 vm09 bash[22983]: cluster 2026-03-09T15:49:33.802254+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T15:49:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:34 vm09 bash[22983]: audit 2026-03-09T15:49:33.803382+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:34 vm09 bash[22983]: audit 2026-03-09T15:49:33.803382+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:35 vm09 bash[22983]: audit 2026-03-09T15:49:34.400469+0000 mon.c (mon.2) 3 : audit [DBG] from='client.? 192.168.123.101:0/3168085708' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:49:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:35 vm09 bash[22983]: audit 2026-03-09T15:49:34.400469+0000 mon.c (mon.2) 3 : audit [DBG] from='client.? 192.168.123.101:0/3168085708' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:49:35.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:35 vm01 bash[20728]: audit 2026-03-09T15:49:34.400469+0000 mon.c (mon.2) 3 : audit [DBG] from='client.? 192.168.123.101:0/3168085708' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:49:35.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:35 vm01 bash[20728]: audit 2026-03-09T15:49:34.400469+0000 mon.c (mon.2) 3 : audit [DBG] from='client.? 192.168.123.101:0/3168085708' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:49:35.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:35 vm01 bash[28152]: audit 2026-03-09T15:49:34.400469+0000 mon.c (mon.2) 3 : audit [DBG] from='client.? 192.168.123.101:0/3168085708' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:49:35.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:35 vm01 bash[28152]: audit 2026-03-09T15:49:34.400469+0000 mon.c (mon.2) 3 : audit [DBG] from='client.? 
192.168.123.101:0/3168085708' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:49:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:36 vm09 bash[22983]: cluster 2026-03-09T15:49:35.052009+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:36 vm09 bash[22983]: cluster 2026-03-09T15:49:35.052009+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:36.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:36 vm01 bash[28152]: cluster 2026-03-09T15:49:35.052009+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:36.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:36 vm01 bash[28152]: cluster 2026-03-09T15:49:35.052009+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:36.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:36 vm01 bash[20728]: cluster 2026-03-09T15:49:35.052009+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:36.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:36 vm01 bash[20728]: cluster 2026-03-09T15:49:35.052009+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:38.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:38 vm01 bash[28152]: cluster 2026-03-09T15:49:37.052273+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:38.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:38 vm01 bash[28152]: cluster 2026-03-09T15:49:37.052273+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:38.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:38 vm01 bash[20728]: cluster 2026-03-09T15:49:37.052273+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:38.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:38 vm01 bash[20728]: cluster 2026-03-09T15:49:37.052273+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:38 vm09 bash[22983]: cluster 2026-03-09T15:49:37.052273+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:38 vm09 bash[22983]: cluster 2026-03-09T15:49:37.052273+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:40.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:40 vm01 bash[28152]: cluster 2026-03-09T15:49:39.052592+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:40.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:40 vm01 bash[28152]: cluster 2026-03-09T15:49:39.052592+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:40.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:40 vm01 bash[20728]: cluster 2026-03-09T15:49:39.052592+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-09T15:49:40.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:40 vm01 bash[20728]: cluster 2026-03-09T15:49:39.052592+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:40 vm09 bash[22983]: cluster 2026-03-09T15:49:39.052592+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:40 vm09 bash[22983]: cluster 2026-03-09T15:49:39.052592+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:42 vm01 bash[28152]: cluster 2026-03-09T15:49:41.052850+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:42 vm01 bash[28152]: cluster 2026-03-09T15:49:41.052850+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:42 vm01 bash[20728]: cluster 2026-03-09T15:49:41.052850+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:42 vm01 bash[20728]: cluster 2026-03-09T15:49:41.052850+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:42 vm09 bash[22983]: cluster 2026-03-09T15:49:41.052850+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:42 vm09 bash[22983]: cluster 2026-03-09T15:49:41.052850+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:43.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:43 vm01 bash[28152]: audit 2026-03-09T15:49:42.887754+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T15:49:43.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:43 vm01 bash[28152]: audit 2026-03-09T15:49:42.887754+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T15:49:43.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:43 vm01 bash[28152]: audit 2026-03-09T15:49:42.888331+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:43.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:43 vm01 bash[28152]: audit 2026-03-09T15:49:42.888331+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:43 vm01 bash[20728]: audit 2026-03-09T15:49:42.887754+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T15:49:43.433 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:43 vm01 bash[20728]: audit 2026-03-09T15:49:42.887754+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T15:49:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:43 vm01 bash[20728]: audit 2026-03-09T15:49:42.888331+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:43 vm01 bash[20728]: audit 2026-03-09T15:49:42.888331+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:43 vm09 bash[22983]: audit 2026-03-09T15:49:42.887754+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T15:49:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:43 vm09 bash[22983]: audit 2026-03-09T15:49:42.887754+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T15:49:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:43 vm09 bash[22983]: audit 2026-03-09T15:49:42.888331+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:43 vm09 bash[22983]: audit 2026-03-09T15:49:42.888331+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:43.711 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:43 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:43.711 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:49:43 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:43.711 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:43 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:43.982 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:43 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:43.982 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:49:43 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:43.982 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:43 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:49:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:44 vm01 bash[20728]: cephadm 2026-03-09T15:49:42.888776+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm01 2026-03-09T15:49:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:44 vm01 bash[20728]: cephadm 2026-03-09T15:49:42.888776+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm01 2026-03-09T15:49:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:44 vm01 bash[20728]: cluster 2026-03-09T15:49:43.053096+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:44 vm01 bash[20728]: cluster 2026-03-09T15:49:43.053096+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:44 vm01 bash[20728]: audit 2026-03-09T15:49:43.948814+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:44 vm01 bash[20728]: audit 2026-03-09T15:49:43.948814+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:44 vm01 bash[20728]: audit 2026-03-09T15:49:43.953151+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:44 vm01 bash[20728]: audit 2026-03-09T15:49:43.953151+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:44 vm01 bash[20728]: audit 2026-03-09T15:49:43.958004+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:44 vm01 bash[20728]: audit 2026-03-09T15:49:43.958004+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:44.434 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:44 vm01 bash[28152]: cephadm 2026-03-09T15:49:42.888776+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm01 2026-03-09T15:49:44.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:44 vm01 bash[28152]: cephadm 2026-03-09T15:49:42.888776+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm01 2026-03-09T15:49:44.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:44 vm01 bash[28152]: cluster 2026-03-09T15:49:43.053096+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:44.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:44 vm01 bash[28152]: cluster 2026-03-09T15:49:43.053096+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:44.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:44 vm01 bash[28152]: audit 2026-03-09T15:49:43.948814+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:44.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:44 vm01 bash[28152]: audit 2026-03-09T15:49:43.948814+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:44.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:44 vm01 bash[28152]: audit 2026-03-09T15:49:43.953151+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:44.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:44 vm01 bash[28152]: audit 2026-03-09T15:49:43.953151+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:44.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:44 vm01 bash[28152]: audit 2026-03-09T15:49:43.958004+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:44.434 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:44 vm01 bash[28152]: audit 2026-03-09T15:49:43.958004+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:44 vm09 bash[22983]: cephadm 2026-03-09T15:49:42.888776+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm01 2026-03-09T15:49:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:44 vm09 bash[22983]: cephadm 2026-03-09T15:49:42.888776+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm01 2026-03-09T15:49:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:44 vm09 bash[22983]: cluster 2026-03-09T15:49:43.053096+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:44 vm09 bash[22983]: cluster 2026-03-09T15:49:43.053096+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:44 vm09 bash[22983]: audit 2026-03-09T15:49:43.948814+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
15:49:44 vm09 bash[22983]: audit 2026-03-09T15:49:43.948814+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:44 vm09 bash[22983]: audit 2026-03-09T15:49:43.953151+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:44 vm09 bash[22983]: audit 2026-03-09T15:49:43.953151+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:44 vm09 bash[22983]: audit 2026-03-09T15:49:43.958004+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:44 vm09 bash[22983]: audit 2026-03-09T15:49:43.958004+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:46.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:46 vm01 bash[28152]: cluster 2026-03-09T15:49:45.053360+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:46.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:46 vm01 bash[28152]: cluster 2026-03-09T15:49:45.053360+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:46.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:46 vm01 bash[20728]: cluster 2026-03-09T15:49:45.053360+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:46.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:46 vm01 bash[20728]: cluster 2026-03-09T15:49:45.053360+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:46 vm09 bash[22983]: cluster 2026-03-09T15:49:45.053360+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:46 vm09 bash[22983]: cluster 2026-03-09T15:49:45.053360+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:48 vm09 bash[22983]: cluster 2026-03-09T15:49:47.053617+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:48 vm09 bash[22983]: cluster 2026-03-09T15:49:47.053617+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:48 vm09 bash[22983]: audit 2026-03-09T15:49:47.531427+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T15:49:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:48 vm09 bash[22983]: audit 2026-03-09T15:49:47.531427+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 
[v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T15:49:48.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:48 vm01 bash[28152]: cluster 2026-03-09T15:49:47.053617+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:48.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:48 vm01 bash[28152]: cluster 2026-03-09T15:49:47.053617+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:48.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:48 vm01 bash[28152]: audit 2026-03-09T15:49:47.531427+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T15:49:48.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:48 vm01 bash[28152]: audit 2026-03-09T15:49:47.531427+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T15:49:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:48 vm01 bash[20728]: cluster 2026-03-09T15:49:47.053617+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:48 vm01 bash[20728]: cluster 2026-03-09T15:49:47.053617+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:48 vm01 bash[20728]: audit 2026-03-09T15:49:47.531427+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T15:49:48.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:48 vm01 bash[20728]: audit 2026-03-09T15:49:47.531427+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T15:49:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:49 vm09 bash[22983]: audit 2026-03-09T15:49:48.183571+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T15:49:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:49 vm09 bash[22983]: audit 2026-03-09T15:49:48.183571+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T15:49:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:49 vm09 bash[22983]: cluster 2026-03-09T15:49:48.186077+0000 mon.a (mon.0) 303 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T15:49:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:49 vm09 bash[22983]: cluster 2026-03-09T15:49:48.186077+0000 mon.a 
(mon.0) 303 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T15:49:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:49 vm09 bash[22983]: audit 2026-03-09T15:49:48.189064+0000 mon.a (mon.0) 304 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:49:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:49 vm09 bash[22983]: audit 2026-03-09T15:49:48.189064+0000 mon.a (mon.0) 304 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:49:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:49 vm09 bash[22983]: audit 2026-03-09T15:49:48.189204+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:49 vm09 bash[22983]: audit 2026-03-09T15:49:48.189204+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:49 vm09 bash[22983]: audit 2026-03-09T15:49:49.186227+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:49:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:49 vm09 bash[22983]: audit 2026-03-09T15:49:49.186227+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:49 vm01 bash[28152]: audit 2026-03-09T15:49:48.183571+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:49 vm01 bash[28152]: audit 2026-03-09T15:49:48.183571+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:49 vm01 bash[28152]: cluster 2026-03-09T15:49:48.186077+0000 mon.a (mon.0) 303 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:49 vm01 bash[28152]: cluster 2026-03-09T15:49:48.186077+0000 mon.a (mon.0) 303 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:49 vm01 bash[28152]: audit 2026-03-09T15:49:48.189064+0000 mon.a (mon.0) 304 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": 
"osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:49 vm01 bash[28152]: audit 2026-03-09T15:49:48.189064+0000 mon.a (mon.0) 304 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:49 vm01 bash[28152]: audit 2026-03-09T15:49:48.189204+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:49 vm01 bash[28152]: audit 2026-03-09T15:49:48.189204+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:49 vm01 bash[28152]: audit 2026-03-09T15:49:49.186227+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:49 vm01 bash[28152]: audit 2026-03-09T15:49:49.186227+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:49 vm01 bash[20728]: audit 2026-03-09T15:49:48.183571+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:49 vm01 bash[20728]: audit 2026-03-09T15:49:48.183571+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:49 vm01 bash[20728]: cluster 2026-03-09T15:49:48.186077+0000 mon.a (mon.0) 303 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:49 vm01 bash[20728]: cluster 2026-03-09T15:49:48.186077+0000 mon.a (mon.0) 303 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:49 vm01 bash[20728]: audit 2026-03-09T15:49:48.189064+0000 mon.a (mon.0) 304 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:49 vm01 bash[20728]: audit 2026-03-09T15:49:48.189064+0000 mon.a (mon.0) 304 : audit [INF] from='osd.0 
[v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:49 vm01 bash[20728]: audit 2026-03-09T15:49:48.189204+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:49 vm01 bash[20728]: audit 2026-03-09T15:49:48.189204+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:49 vm01 bash[20728]: audit 2026-03-09T15:49:49.186227+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:49:49.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:49 vm01 bash[20728]: audit 2026-03-09T15:49:49.186227+0000 mon.a (mon.0) 306 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:49:50.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:50 vm01 bash[28152]: cluster 2026-03-09T15:49:49.053860+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:50.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:50 vm01 bash[28152]: cluster 2026-03-09T15:49:49.053860+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:50.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:50 vm01 bash[28152]: cluster 2026-03-09T15:49:49.191380+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T15:49:50.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:50 vm01 bash[28152]: cluster 2026-03-09T15:49:49.191380+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T15:49:50.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:50 vm01 bash[28152]: audit 2026-03-09T15:49:49.197741+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:50 vm01 bash[28152]: audit 2026-03-09T15:49:49.197741+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:50 vm01 bash[28152]: audit 2026-03-09T15:49:49.220733+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:50 vm01 bash[28152]: audit 2026-03-09T15:49:49.220733+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.271 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:50 vm01 bash[28152]: audit 2026-03-09T15:49:50.185813+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' 2026-03-09T15:49:50.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:50 vm01 bash[28152]: audit 2026-03-09T15:49:50.185813+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' 2026-03-09T15:49:50.272 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:50 vm01 bash[20728]: cluster 2026-03-09T15:49:49.053860+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:50.272 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:50 vm01 bash[20728]: cluster 2026-03-09T15:49:49.053860+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:50.272 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:50 vm01 bash[20728]: cluster 2026-03-09T15:49:49.191380+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T15:49:50.272 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:50 vm01 bash[20728]: cluster 2026-03-09T15:49:49.191380+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T15:49:50.272 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:50 vm01 bash[20728]: audit 2026-03-09T15:49:49.197741+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.272 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:50 vm01 bash[20728]: audit 2026-03-09T15:49:49.197741+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.272 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:50 vm01 bash[20728]: audit 2026-03-09T15:49:49.220733+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.272 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:50 vm01 bash[20728]: audit 2026-03-09T15:49:49.220733+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.272 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:50 vm01 bash[20728]: audit 2026-03-09T15:49:50.185813+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' 2026-03-09T15:49:50.272 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:50 vm01 bash[20728]: audit 2026-03-09T15:49:50.185813+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' 2026-03-09T15:49:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:50 vm09 bash[22983]: cluster 2026-03-09T15:49:49.053860+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:50 vm09 bash[22983]: cluster 2026-03-09T15:49:49.053860+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:50 
vm09 bash[22983]: cluster 2026-03-09T15:49:49.191380+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T15:49:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:50 vm09 bash[22983]: cluster 2026-03-09T15:49:49.191380+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T15:49:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:50 vm09 bash[22983]: audit 2026-03-09T15:49:49.197741+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:50 vm09 bash[22983]: audit 2026-03-09T15:49:49.197741+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:50 vm09 bash[22983]: audit 2026-03-09T15:49:49.220733+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:50 vm09 bash[22983]: audit 2026-03-09T15:49:49.220733+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:50 vm09 bash[22983]: audit 2026-03-09T15:49:50.185813+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' 2026-03-09T15:49:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:50 vm09 bash[22983]: audit 2026-03-09T15:49:50.185813+0000 mon.a (mon.0) 310 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186]' entity='osd.0' 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: cluster 2026-03-09T15:49:48.509380+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: cluster 2026-03-09T15:49:48.509380+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: cluster 2026-03-09T15:49:48.509434+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: cluster 2026-03-09T15:49:48.509434+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.211226+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.211226+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.289996+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.328 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.289996+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.296026+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.296026+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.727962+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.727962+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:51.328 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.728538+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.728538+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.734270+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:50.734270+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:51.200687+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:51 vm01 bash[28152]: audit 2026-03-09T15:49:51.200687+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: cluster 2026-03-09T15:49:48.509380+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: cluster 2026-03-09T15:49:48.509380+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: cluster 2026-03-09T15:49:48.509434+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: cluster 2026-03-09T15:49:48.509434+0000 osd.0 
(osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.211226+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.211226+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.289996+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.289996+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.296026+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.296026+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.727962+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.727962+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.728538+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.728538+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.734270+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:50.734270+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:51.200687+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:51.329 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:51 vm01 bash[20728]: audit 2026-03-09T15:49:51.200687+0000 mon.a (mon.0) 
317 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:51.399 INFO:teuthology.orchestra.run.vm01.stdout:Created osd(s) 0 on host 'vm01' 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.382+0000 7f4d577fe640 1 -- 192.168.123.101:0/64213169 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f4d34002bf0 con 0x7f4d40077680 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 -- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f4d40077680 msgr2=0x7f4d40079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f4d40077680 0x7f4d40079b40 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f4d7019d970 tx=0x7f4d58005e90 comp rx=0 tx=0).stop 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 -- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d7006bcd0 msgr2=0x7f4d7019c450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d7006bcd0 0x7f4d7019c450 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f4d64009950 tx=0x7f4d64004290 comp rx=0 tx=0).stop 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 -- 192.168.123.101:0/64213169 shutdown_connections 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4d7010aa00 0x7f4d701a3a10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d70107ff0 0x7f4d7019c990 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f4d40077680 0x7f4d40079b40 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 --2- 192.168.123.101:0/64213169 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d7006bcd0 0x7f4d7019c450 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 -- 192.168.123.101:0/64213169 >> 192.168.123.101:0/64213169 conn(0x7f4d700fd120 msgr2=0x7f4d70108bf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:49:51.400 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 -- 192.168.123.101:0/64213169 shutdown_connections 2026-03-09T15:49:51.400 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:49:51.386+0000 7f4d74f97640 1 -- 192.168.123.101:0/64213169 wait complete. 2026-03-09T15:49:51.475 DEBUG:teuthology.orchestra.run.vm01:osd.0> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.0.service 2026-03-09T15:49:51.475 INFO:tasks.cephadm:Deploying osd.1 on vm01 with /dev/vdd... 2026-03-09T15:49:51.476 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- lvm zap /dev/vdd 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: cluster 2026-03-09T15:49:48.509380+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: cluster 2026-03-09T15:49:48.509380+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: cluster 2026-03-09T15:49:48.509434+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: cluster 2026-03-09T15:49:48.509434+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.211226+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.211226+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.289996+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.289996+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.296026+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.296026+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.727962+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.727962+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.728538+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.728538+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.734270+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:50.734270+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:51.200687+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:51 vm09 bash[22983]: audit 2026-03-09T15:49:51.200687+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: cluster 2026-03-09T15:49:51.054110+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: cluster 2026-03-09T15:49:51.054110+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: cluster 2026-03-09T15:49:51.210607+0000 mon.a (mon.0) 318 : cluster [INF] osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] boot 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: cluster 2026-03-09T15:49:51.210607+0000 mon.a (mon.0) 318 : cluster [INF] osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] boot 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: cluster 2026-03-09T15:49:51.210736+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: cluster 2026-03-09T15:49:51.210736+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: audit 2026-03-09T15:49:51.211583+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: audit 2026-03-09T15:49:51.211583+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: audit 2026-03-09T15:49:51.374747+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: audit 2026-03-09T15:49:51.374747+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: audit 2026-03-09T15:49:51.379405+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: audit 2026-03-09T15:49:51.379405+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: audit 2026-03-09T15:49:51.385765+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:52 vm09 bash[22983]: audit 2026-03-09T15:49:51.385765+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: cluster 2026-03-09T15:49:51.054110+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: cluster 2026-03-09T15:49:51.054110+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: cluster 2026-03-09T15:49:51.210607+0000 mon.a (mon.0) 318 : cluster [INF] osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] boot 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: cluster 2026-03-09T15:49:51.210607+0000 mon.a (mon.0) 318 : cluster [INF] osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] boot 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: cluster 2026-03-09T15:49:51.210736+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: cluster 2026-03-09T15:49:51.210736+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: audit 2026-03-09T15:49:51.211583+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: audit 2026-03-09T15:49:51.211583+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 
bash[28152]: audit 2026-03-09T15:49:51.374747+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: audit 2026-03-09T15:49:51.374747+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: audit 2026-03-09T15:49:51.379405+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: audit 2026-03-09T15:49:51.379405+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: audit 2026-03-09T15:49:51.385765+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:52 vm01 bash[28152]: audit 2026-03-09T15:49:51.385765+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: cluster 2026-03-09T15:49:51.054110+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: cluster 2026-03-09T15:49:51.054110+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: cluster 2026-03-09T15:49:51.210607+0000 mon.a (mon.0) 318 : cluster [INF] osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] boot 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: cluster 2026-03-09T15:49:51.210607+0000 mon.a (mon.0) 318 : cluster [INF] osd.0 [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] boot 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: cluster 2026-03-09T15:49:51.210736+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: cluster 2026-03-09T15:49:51.210736+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: audit 2026-03-09T15:49:51.211583+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: audit 2026-03-09T15:49:51.211583+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: audit 2026-03-09T15:49:51.374747+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: audit 2026-03-09T15:49:51.374747+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: audit 2026-03-09T15:49:51.379405+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: audit 2026-03-09T15:49:51.379405+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: audit 2026-03-09T15:49:51.385765+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:52 vm01 bash[20728]: audit 2026-03-09T15:49:51.385765+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:53.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:53 vm01 bash[28152]: cluster 2026-03-09T15:49:52.389405+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T15:49:53.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:53 vm01 bash[28152]: cluster 2026-03-09T15:49:52.389405+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T15:49:53.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:53 vm01 bash[20728]: cluster 2026-03-09T15:49:52.389405+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T15:49:53.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:53 vm01 bash[20728]: cluster 2026-03-09T15:49:52.389405+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T15:49:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:53 vm09 bash[22983]: cluster 2026-03-09T15:49:52.389405+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T15:49:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:53 vm09 bash[22983]: cluster 2026-03-09T15:49:52.389405+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T15:49:54.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:54 vm01 bash[28152]: cluster 2026-03-09T15:49:53.054418+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:54.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:54 vm01 bash[28152]: cluster 2026-03-09T15:49:53.054418+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:54.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:54 vm01 bash[20728]: cluster 2026-03-09T15:49:53.054418+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:54.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:54 vm01 bash[20728]: cluster 2026-03-09T15:49:53.054418+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:54.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:54 vm09 bash[22983]: cluster 
2026-03-09T15:49:53.054418+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:54.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:54 vm09 bash[22983]: cluster 2026-03-09T15:49:53.054418+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:56.152 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:49:56.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:56 vm01 bash[28152]: cluster 2026-03-09T15:49:55.054685+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:56.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:56 vm01 bash[28152]: cluster 2026-03-09T15:49:55.054685+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:56.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:56 vm01 bash[20728]: cluster 2026-03-09T15:49:55.054685+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:56.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:56 vm01 bash[20728]: cluster 2026-03-09T15:49:55.054685+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:56.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:56 vm09 bash[22983]: cluster 2026-03-09T15:49:55.054685+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:56.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:56 vm09 bash[22983]: cluster 2026-03-09T15:49:55.054685+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:57.008 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:49:57.025 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch daemon add osd vm01:/dev/vdd 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: cluster 2026-03-09T15:49:57.054939+0000 mgr.y (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: cluster 2026-03-09T15:49:57.054939+0000 mgr.y (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.744924+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.744924+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.749999+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.683 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.749999+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.751016+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.751016+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.751998+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.751998+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.752438+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.752438+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.756402+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:58 vm01 bash[28152]: audit 2026-03-09T15:49:57.756402+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: cluster 2026-03-09T15:49:57.054939+0000 mgr.y (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: cluster 2026-03-09T15:49:57.054939+0000 mgr.y (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.744924+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.744924+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.749999+0000 mon.a (mon.0) 326 : audit [INF] 
from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.749999+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.751016+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.751016+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.751998+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.751998+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.752438+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.752438+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.756402+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:58 vm01 bash[20728]: audit 2026-03-09T15:49:57.756402+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: cluster 2026-03-09T15:49:57.054939+0000 mgr.y (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: cluster 2026-03-09T15:49:57.054939+0000 mgr.y (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: audit 2026-03-09T15:49:57.744924+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: audit 2026-03-09T15:49:57.744924+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 
bash[22983]: audit 2026-03-09T15:49:57.749999+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: audit 2026-03-09T15:49:57.749999+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: audit 2026-03-09T15:49:57.751016+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: audit 2026-03-09T15:49:57.751016+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: audit 2026-03-09T15:49:57.751998+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: audit 2026-03-09T15:49:57.751998+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: audit 2026-03-09T15:49:57.752438+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: audit 2026-03-09T15:49:57.752438+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: audit 2026-03-09T15:49:57.756402+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:58 vm09 bash[22983]: audit 2026-03-09T15:49:57.756402+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:49:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:59 vm01 bash[28152]: cephadm 2026-03-09T15:49:57.739236+0000 mgr.y (mgr.14150) 82 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:49:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:49:59 vm01 bash[28152]: cephadm 2026-03-09T15:49:57.739236+0000 mgr.y (mgr.14150) 82 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:49:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:59 vm01 bash[20728]: cephadm 2026-03-09T15:49:57.739236+0000 mgr.y (mgr.14150) 82 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:49:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:49:59 vm01 bash[20728]: cephadm 2026-03-09T15:49:57.739236+0000 mgr.y (mgr.14150) 82 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:49:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:59 
vm09 bash[22983]: cephadm 2026-03-09T15:49:57.739236+0000 mgr.y (mgr.14150) 82 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:49:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:49:59 vm09 bash[22983]: cephadm 2026-03-09T15:49:57.739236+0000 mgr.y (mgr.14150) 82 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:50:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:00 vm01 bash[28152]: cluster 2026-03-09T15:49:59.055247+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:00 vm01 bash[28152]: cluster 2026-03-09T15:49:59.055247+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:00 vm01 bash[28152]: cluster 2026-03-09T15:50:00.000186+0000 mon.a (mon.0) 331 : cluster [INF] overall HEALTH_OK 2026-03-09T15:50:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:00 vm01 bash[28152]: cluster 2026-03-09T15:50:00.000186+0000 mon.a (mon.0) 331 : cluster [INF] overall HEALTH_OK 2026-03-09T15:50:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:00 vm01 bash[20728]: cluster 2026-03-09T15:49:59.055247+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:00 vm01 bash[20728]: cluster 2026-03-09T15:49:59.055247+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:00 vm01 bash[20728]: cluster 2026-03-09T15:50:00.000186+0000 mon.a (mon.0) 331 : cluster [INF] overall HEALTH_OK 2026-03-09T15:50:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:00 vm01 bash[20728]: cluster 2026-03-09T15:50:00.000186+0000 mon.a (mon.0) 331 : cluster [INF] overall HEALTH_OK 2026-03-09T15:50:00.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:00 vm09 bash[22983]: cluster 2026-03-09T15:49:59.055247+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:00 vm09 bash[22983]: cluster 2026-03-09T15:49:59.055247+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:00 vm09 bash[22983]: cluster 2026-03-09T15:50:00.000186+0000 mon.a (mon.0) 331 : cluster [INF] overall HEALTH_OK 2026-03-09T15:50:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:00 vm09 bash[22983]: cluster 2026-03-09T15:50:00.000186+0000 mon.a (mon.0) 331 : cluster [INF] overall HEALTH_OK 2026-03-09T15:50:01.691 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:50:01.866 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 -- 192.168.123.101:0/556724236 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1e1c103590 msgr2=0x7f1e1c109e20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:50:01.866 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 --2- 192.168.123.101:0/556724236 >> 
[v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1e1c103590 0x7f1e1c109e20 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f1e1800b3e0 tx=0x7f1e1802f690 comp rx=0 tx=0).stop 2026-03-09T15:50:01.866 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 -- 192.168.123.101:0/556724236 shutdown_connections 2026-03-09T15:50:01.866 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 --2- 192.168.123.101:0/556724236 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1e1c103590 0x7f1e1c109e20 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:01.866 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 --2- 192.168.123.101:0/556724236 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1e1c102bd0 0x7f1e1c103050 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:01.866 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 --2- 192.168.123.101:0/556724236 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1e1c1019d0 0x7f1e1c101dd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:01.866 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 -- 192.168.123.101:0/556724236 >> 192.168.123.101:0/556724236 conn(0x7f1e1c0fd180 msgr2=0x7f1e1c0ff5a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:50:01.866 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 -- 192.168.123.101:0/556724236 shutdown_connections 2026-03-09T15:50:01.866 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 -- 192.168.123.101:0/556724236 wait complete. 
2026-03-09T15:50:01.866 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 Processor -- start 2026-03-09T15:50:01.866 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 -- start start 2026-03-09T15:50:01.867 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e231ce640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1e1c1019d0 0x7f1e1c19c410 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:50:01.867 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e20f43640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1e1c1019d0 0x7f1e1c19c410 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:50:01.867 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.858+0000 7f1e20f43640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1e1c1019d0 0x7f1e1c19c410 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:35200/0 (socket says 192.168.123.101:35200) 2026-03-09T15:50:01.867 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e231ce640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1e1c102bd0 0x7f1e1c19c950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e231ce640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1e1c103590 0x7f1e1c1a39d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e231ce640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f1e1c10da70 con 0x7f1e1c1019d0 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e231ce640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f1e1c10d8f0 con 0x7f1e1c103590 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e20f43640 1 -- 192.168.123.101:0/965825948 learned_addr learned my addr 192.168.123.101:0/965825948 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e231ce640 1 -- 192.168.123.101:0/965825948 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f1e1c10dbf0 con 0x7f1e1c102bd0 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e13fff640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1e1c102bd0 0x7f1e1c19c950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e21744640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1e1c103590 0x7f1e1c1a39d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e13fff640 1 -- 192.168.123.101:0/965825948 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1e1c103590 msgr2=0x7f1e1c1a39d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e13fff640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1e1c103590 0x7f1e1c1a39d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e13fff640 1 -- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1e1c1019d0 msgr2=0x7f1e1c19c410 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e13fff640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1e1c1019d0 0x7f1e1c19c410 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e13fff640 1 -- 192.168.123.101:0/965825948 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f1e1c1a40d0 con 0x7f1e1c102bd0 2026-03-09T15:50:01.868 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e13fff640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1e1c102bd0 0x7f1e1c19c950 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f1e0c00b550 tx=0x7f1e0c00ba20 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:50:01.869 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e11ffb640 1 -- 192.168.123.101:0/965825948 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1e0c013020 con 0x7f1e1c102bd0 2026-03-09T15:50:01.869 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e11ffb640 1 -- 192.168.123.101:0/965825948 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f1e0c004480 con 0x7f1e1c102bd0 2026-03-09T15:50:01.869 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e11ffb640 1 -- 192.168.123.101:0/965825948 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1e0c00f990 con 0x7f1e1c102bd0 2026-03-09T15:50:01.869 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e231ce640 1 -- 192.168.123.101:0/965825948 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f1e1c1a43c0 con 0x7f1e1c102bd0 2026-03-09T15:50:01.870 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e20f43640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1e1c1019d0 0x7f1e1c19c410 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:50:01.871 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e231ce640 1 -- 192.168.123.101:0/965825948 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f1e1c1a4880 con 0x7f1e1c102bd0 2026-03-09T15:50:01.871 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e11ffb640 1 -- 192.168.123.101:0/965825948 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 13) ==== 99979+0+0 (secure 0 0 0) 0x7f1e0c0026e0 con 0x7f1e1c102bd0 2026-03-09T15:50:01.871 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.862+0000 7f1e231ce640 1 -- 192.168.123.101:0/965825948 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f1de4005180 con 0x7f1e1c102bd0 2026-03-09T15:50:01.871 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.866+0000 7f1e11ffb640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f1df4077540 0x7f1df4079a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:50:01.874 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.866+0000 7f1e20f43640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f1df4077540 0x7f1df4079a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:50:01.874 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.866+0000 7f1e20f43640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f1df4077540 0x7f1df4079a00 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f1e0400adb0 tx=0x7f1e0400a310 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:50:01.875 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.866+0000 7f1e11ffb640 1 -- 192.168.123.101:0/965825948 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(9..9 src has 1..9) ==== 1757+0+0 (secure 0 0 0) 0x7f1e0c05d0b0 con 0x7f1e1c102bd0 2026-03-09T15:50:01.875 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.866+0000 7f1e11ffb640 1 -- 192.168.123.101:0/965825948 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f1e0c061410 con 0x7f1e1c102bd0 2026-03-09T15:50:01.975 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:01.966+0000 7f1e231ce640 1 -- 192.168.123.101:0/965825948 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7f1de4002bf0 con 0x7f1df4077540 2026-03-09T15:50:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:02 vm09 bash[22983]: cluster 2026-03-09T15:50:01.055514+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:02 vm09 bash[22983]: cluster 2026-03-09T15:50:01.055514+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:02 vm09 bash[22983]: audit 2026-03-09T15:50:01.974886+0000 mon.a 
(mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:50:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:02 vm09 bash[22983]: audit 2026-03-09T15:50:01.974886+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:50:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:02 vm09 bash[22983]: audit 2026-03-09T15:50:01.976339+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:02 vm09 bash[22983]: audit 2026-03-09T15:50:01.976339+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:02 vm09 bash[22983]: audit 2026-03-09T15:50:01.976799+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:02 vm09 bash[22983]: audit 2026-03-09T15:50:01.976799+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:02 vm01 bash[28152]: cluster 2026-03-09T15:50:01.055514+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:02 vm01 bash[28152]: cluster 2026-03-09T15:50:01.055514+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:02 vm01 bash[28152]: audit 2026-03-09T15:50:01.974886+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:02 vm01 bash[28152]: audit 2026-03-09T15:50:01.974886+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:02 vm01 bash[28152]: audit 2026-03-09T15:50:01.976339+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:02 vm01 bash[28152]: audit 2026-03-09T15:50:01.976339+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:02 vm01 bash[28152]: audit 2026-03-09T15:50:01.976799+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:02 vm01 bash[28152]: audit 2026-03-09T15:50:01.976799+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:02 vm01 bash[20728]: cluster 2026-03-09T15:50:01.055514+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:02 vm01 bash[20728]: cluster 2026-03-09T15:50:01.055514+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:02 vm01 bash[20728]: audit 2026-03-09T15:50:01.974886+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:02 vm01 bash[20728]: audit 2026-03-09T15:50:01.974886+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:02 vm01 bash[20728]: audit 2026-03-09T15:50:01.976339+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:02 vm01 bash[20728]: audit 2026-03-09T15:50:01.976339+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:02 vm01 bash[20728]: audit 2026-03-09T15:50:01.976799+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:02.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:02 vm01 bash[20728]: audit 2026-03-09T15:50:01.976799+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:03 vm09 bash[22983]: audit 2026-03-09T15:50:01.973552+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:03 vm09 bash[22983]: audit 2026-03-09T15:50:01.973552+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:03.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:03 vm01 bash[28152]: audit 2026-03-09T15:50:01.973552+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", 
"svc_arg": "vm01:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:03.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:03 vm01 bash[28152]: audit 2026-03-09T15:50:01.973552+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:03.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:03 vm01 bash[20728]: audit 2026-03-09T15:50:01.973552+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:03.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:03 vm01 bash[20728]: audit 2026-03-09T15:50:01.973552+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:04 vm09 bash[22983]: cluster 2026-03-09T15:50:03.055765+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:04 vm09 bash[22983]: cluster 2026-03-09T15:50:03.055765+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:04.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:04 vm01 bash[28152]: cluster 2026-03-09T15:50:03.055765+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:04.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:04 vm01 bash[28152]: cluster 2026-03-09T15:50:03.055765+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:04.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:04 vm01 bash[20728]: cluster 2026-03-09T15:50:03.055765+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:04.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:04 vm01 bash[20728]: cluster 2026-03-09T15:50:03.055765+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:06.766 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:06 vm01 bash[28152]: cluster 2026-03-09T15:50:05.056026+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:06.766 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:06 vm01 bash[28152]: cluster 2026-03-09T15:50:05.056026+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:06.766 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:06 vm01 bash[20728]: cluster 2026-03-09T15:50:05.056026+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:06.766 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:06 vm01 bash[20728]: cluster 2026-03-09T15:50:05.056026+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:06.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:06 vm09 bash[22983]: cluster 
2026-03-09T15:50:05.056026+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:06.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:06 vm09 bash[22983]: cluster 2026-03-09T15:50:05.056026+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:07 vm01 bash[28152]: audit 2026-03-09T15:50:07.325226+0000 mon.c (mon.2) 4 : audit [INF] from='client.? 192.168.123.101:0/3425005126' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:07 vm01 bash[28152]: audit 2026-03-09T15:50:07.325226+0000 mon.c (mon.2) 4 : audit [INF] from='client.? 192.168.123.101:0/3425005126' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:07 vm01 bash[28152]: audit 2026-03-09T15:50:07.325664+0000 mon.a (mon.0) 335 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:07 vm01 bash[28152]: audit 2026-03-09T15:50:07.325664+0000 mon.a (mon.0) 335 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:07 vm01 bash[28152]: audit 2026-03-09T15:50:07.328653+0000 mon.a (mon.0) 336 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]': finished 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:07 vm01 bash[28152]: audit 2026-03-09T15:50:07.328653+0000 mon.a (mon.0) 336 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]': finished 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:07 vm01 bash[28152]: cluster 2026-03-09T15:50:07.331718+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:07 vm01 bash[28152]: cluster 2026-03-09T15:50:07.331718+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:07 vm01 bash[28152]: audit 2026-03-09T15:50:07.331920+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:07 vm01 bash[28152]: audit 2026-03-09T15:50:07.331920+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:07 vm01 bash[20728]: audit 2026-03-09T15:50:07.325226+0000 mon.c (mon.2) 4 : audit [INF] from='client.? 
192.168.123.101:0/3425005126' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:07 vm01 bash[20728]: audit 2026-03-09T15:50:07.325226+0000 mon.c (mon.2) 4 : audit [INF] from='client.? 192.168.123.101:0/3425005126' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:07 vm01 bash[20728]: audit 2026-03-09T15:50:07.325664+0000 mon.a (mon.0) 335 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:07 vm01 bash[20728]: audit 2026-03-09T15:50:07.325664+0000 mon.a (mon.0) 335 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:07 vm01 bash[20728]: audit 2026-03-09T15:50:07.328653+0000 mon.a (mon.0) 336 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]': finished 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:07 vm01 bash[20728]: audit 2026-03-09T15:50:07.328653+0000 mon.a (mon.0) 336 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]': finished 2026-03-09T15:50:07.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:07 vm01 bash[20728]: cluster 2026-03-09T15:50:07.331718+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T15:50:07.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:07 vm01 bash[20728]: cluster 2026-03-09T15:50:07.331718+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T15:50:07.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:07 vm01 bash[20728]: audit 2026-03-09T15:50:07.331920+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:07.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:07 vm01 bash[20728]: audit 2026-03-09T15:50:07.331920+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:07 vm09 bash[22983]: audit 2026-03-09T15:50:07.325226+0000 mon.c (mon.2) 4 : audit [INF] from='client.? 192.168.123.101:0/3425005126' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:07 vm09 bash[22983]: audit 2026-03-09T15:50:07.325226+0000 mon.c (mon.2) 4 : audit [INF] from='client.? 192.168.123.101:0/3425005126' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:07 vm09 bash[22983]: audit 2026-03-09T15:50:07.325664+0000 mon.a (mon.0) 335 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:07 vm09 bash[22983]: audit 2026-03-09T15:50:07.325664+0000 mon.a (mon.0) 335 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]: dispatch 2026-03-09T15:50:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:07 vm09 bash[22983]: audit 2026-03-09T15:50:07.328653+0000 mon.a (mon.0) 336 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]': finished 2026-03-09T15:50:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:07 vm09 bash[22983]: audit 2026-03-09T15:50:07.328653+0000 mon.a (mon.0) 336 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e7c85482-6eb5-4953-8a19-029686ffe773"}]': finished 2026-03-09T15:50:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:07 vm09 bash[22983]: cluster 2026-03-09T15:50:07.331718+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T15:50:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:07 vm09 bash[22983]: cluster 2026-03-09T15:50:07.331718+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T15:50:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:07 vm09 bash[22983]: audit 2026-03-09T15:50:07.331920+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:07 vm09 bash[22983]: audit 2026-03-09T15:50:07.331920+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:08 vm09 bash[22983]: cluster 2026-03-09T15:50:07.056267+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:08 vm09 bash[22983]: cluster 2026-03-09T15:50:07.056267+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:08 vm09 bash[22983]: audit 2026-03-09T15:50:07.966611+0000 mon.a (mon.0) 339 : audit [DBG] from='client.? 192.168.123.101:0/464630295' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:08 vm09 bash[22983]: audit 2026-03-09T15:50:07.966611+0000 mon.a (mon.0) 339 : audit [DBG] from='client.? 
192.168.123.101:0/464630295' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:08 vm01 bash[28152]: cluster 2026-03-09T15:50:07.056267+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:08 vm01 bash[28152]: cluster 2026-03-09T15:50:07.056267+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:08 vm01 bash[28152]: audit 2026-03-09T15:50:07.966611+0000 mon.a (mon.0) 339 : audit [DBG] from='client.? 192.168.123.101:0/464630295' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:08 vm01 bash[28152]: audit 2026-03-09T15:50:07.966611+0000 mon.a (mon.0) 339 : audit [DBG] from='client.? 192.168.123.101:0/464630295' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:08.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:08 vm01 bash[20728]: cluster 2026-03-09T15:50:07.056267+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:08.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:08 vm01 bash[20728]: cluster 2026-03-09T15:50:07.056267+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:08.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:08 vm01 bash[20728]: audit 2026-03-09T15:50:07.966611+0000 mon.a (mon.0) 339 : audit [DBG] from='client.? 192.168.123.101:0/464630295' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:08.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:08 vm01 bash[20728]: audit 2026-03-09T15:50:07.966611+0000 mon.a (mon.0) 339 : audit [DBG] from='client.? 
192.168.123.101:0/464630295' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:10.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:10 vm09 bash[22983]: cluster 2026-03-09T15:50:09.056525+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:10.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:10 vm09 bash[22983]: cluster 2026-03-09T15:50:09.056525+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:10.902 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:10 vm01 bash[20728]: cluster 2026-03-09T15:50:09.056525+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:10.902 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:10 vm01 bash[20728]: cluster 2026-03-09T15:50:09.056525+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:10.902 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:10 vm01 bash[28152]: cluster 2026-03-09T15:50:09.056525+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:10.903 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:10 vm01 bash[28152]: cluster 2026-03-09T15:50:09.056525+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:11.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:11 vm09 bash[22983]: cluster 2026-03-09T15:50:11.056819+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:11.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:11 vm09 bash[22983]: cluster 2026-03-09T15:50:11.056819+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:11.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:11 vm01 bash[20728]: cluster 2026-03-09T15:50:11.056819+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:11.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:11 vm01 bash[20728]: cluster 2026-03-09T15:50:11.056819+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:11.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:11 vm01 bash[28152]: cluster 2026-03-09T15:50:11.056819+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:11.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:11 vm01 bash[28152]: cluster 2026-03-09T15:50:11.056819+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:14 vm09 bash[22983]: cluster 2026-03-09T15:50:13.057076+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:14 vm09 bash[22983]: cluster 2026-03-09T15:50:13.057076+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:14.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:14 vm01 
bash[20728]: cluster 2026-03-09T15:50:13.057076+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:14.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:14 vm01 bash[20728]: cluster 2026-03-09T15:50:13.057076+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:14.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:14 vm01 bash[28152]: cluster 2026-03-09T15:50:13.057076+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:14.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:14 vm01 bash[28152]: cluster 2026-03-09T15:50:13.057076+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:16 vm09 bash[22983]: cluster 2026-03-09T15:50:15.057358+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:16 vm09 bash[22983]: cluster 2026-03-09T15:50:15.057358+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:16.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:16 vm01 bash[28152]: cluster 2026-03-09T15:50:15.057358+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:16.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:16 vm01 bash[28152]: cluster 2026-03-09T15:50:15.057358+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:16.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:16 vm01 bash[20728]: cluster 2026-03-09T15:50:15.057358+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:16.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:16 vm01 bash[20728]: cluster 2026-03-09T15:50:15.057358+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:17.038 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:16 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:17.038 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:50:16 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:17.038 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:16 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:17.038 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:50:16 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:50:17 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:17 vm01 bash[28152]: audit 2026-03-09T15:50:16.191519+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:17 vm01 bash[28152]: audit 2026-03-09T15:50:16.191519+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:17 vm01 bash[28152]: audit 2026-03-09T15:50:16.192186+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:17 vm01 bash[28152]: audit 2026-03-09T15:50:16.192186+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:17 vm01 bash[28152]: cephadm 2026-03-09T15:50:16.192735+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm01 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:17 vm01 bash[28152]: cephadm 2026-03-09T15:50:16.192735+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm01 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:17 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:17.300 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:50:17 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:17 vm01 bash[20728]: audit 2026-03-09T15:50:16.191519+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:17 vm01 bash[20728]: audit 2026-03-09T15:50:16.191519+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:17 vm01 bash[20728]: audit 2026-03-09T15:50:16.192186+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:17 vm01 bash[20728]: audit 2026-03-09T15:50:16.192186+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:17 vm01 bash[20728]: cephadm 2026-03-09T15:50:16.192735+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm01 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:17 vm01 bash[20728]: cephadm 2026-03-09T15:50:16.192735+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm01 2026-03-09T15:50:17.300 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:17 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:50:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:17 vm09 bash[22983]: audit 2026-03-09T15:50:16.191519+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T15:50:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:17 vm09 bash[22983]: audit 2026-03-09T15:50:16.191519+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T15:50:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:17 vm09 bash[22983]: audit 2026-03-09T15:50:16.192186+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:17 vm09 bash[22983]: audit 2026-03-09T15:50:16.192186+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:17 vm09 bash[22983]: cephadm 2026-03-09T15:50:16.192735+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm01 2026-03-09T15:50:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:17 vm09 bash[22983]: cephadm 2026-03-09T15:50:16.192735+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm01 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:18 vm01 bash[28152]: cluster 2026-03-09T15:50:17.057611+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:18 vm01 bash[28152]: cluster 2026-03-09T15:50:17.057611+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:18 vm01 bash[28152]: audit 2026-03-09T15:50:17.287374+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:18 vm01 bash[28152]: audit 2026-03-09T15:50:17.287374+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:18 vm01 bash[28152]: audit 2026-03-09T15:50:17.293099+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:18 vm01 bash[28152]: audit 2026-03-09T15:50:17.293099+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:18 vm01 bash[28152]: audit 2026-03-09T15:50:17.307938+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:18 vm01 bash[28152]: audit 2026-03-09T15:50:17.307938+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:18.433 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:18 vm01 bash[20728]: cluster 2026-03-09T15:50:17.057611+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:18 vm01 bash[20728]: cluster 2026-03-09T15:50:17.057611+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:18 vm01 bash[20728]: audit 2026-03-09T15:50:17.287374+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:18 vm01 bash[20728]: audit 2026-03-09T15:50:17.287374+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:18 vm01 bash[20728]: audit 2026-03-09T15:50:17.293099+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:18 vm01 bash[20728]: audit 2026-03-09T15:50:17.293099+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:18 vm01 bash[20728]: audit 2026-03-09T15:50:17.307938+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:18.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:18 vm01 bash[20728]: audit 2026-03-09T15:50:17.307938+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:18 vm09 bash[22983]: cluster 2026-03-09T15:50:17.057611+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:18 vm09 bash[22983]: cluster 2026-03-09T15:50:17.057611+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:18 vm09 bash[22983]: audit 2026-03-09T15:50:17.287374+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:18 vm09 bash[22983]: audit 2026-03-09T15:50:17.287374+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:18 vm09 bash[22983]: audit 2026-03-09T15:50:17.293099+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:18 vm09 bash[22983]: audit 2026-03-09T15:50:17.293099+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:18 vm09 bash[22983]: audit 
2026-03-09T15:50:17.307938+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:18 vm09 bash[22983]: audit 2026-03-09T15:50:17.307938+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:20.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:20 vm01 bash[28152]: cluster 2026-03-09T15:50:19.057855+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:20.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:20 vm01 bash[28152]: cluster 2026-03-09T15:50:19.057855+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:20.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:20 vm01 bash[20728]: cluster 2026-03-09T15:50:19.057855+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:20.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:20 vm01 bash[20728]: cluster 2026-03-09T15:50:19.057855+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:20 vm09 bash[22983]: cluster 2026-03-09T15:50:19.057855+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:20.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:20 vm09 bash[22983]: cluster 2026-03-09T15:50:19.057855+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:22 vm01 bash[28152]: cluster 2026-03-09T15:50:21.058159+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:22 vm01 bash[28152]: cluster 2026-03-09T15:50:21.058159+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:22 vm01 bash[28152]: audit 2026-03-09T15:50:21.245095+0000 mon.c (mon.2) 5 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:22 vm01 bash[28152]: audit 2026-03-09T15:50:21.245095+0000 mon.c (mon.2) 5 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:22 vm01 bash[28152]: audit 2026-03-09T15:50:21.245333+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:22 vm01 bash[28152]: audit 2026-03-09T15:50:21.245333+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 
2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:22 vm01 bash[20728]: cluster 2026-03-09T15:50:21.058159+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:22 vm01 bash[20728]: cluster 2026-03-09T15:50:21.058159+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:22 vm01 bash[20728]: audit 2026-03-09T15:50:21.245095+0000 mon.c (mon.2) 5 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:22 vm01 bash[20728]: audit 2026-03-09T15:50:21.245095+0000 mon.c (mon.2) 5 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:22 vm01 bash[20728]: audit 2026-03-09T15:50:21.245333+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:50:22.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:22 vm01 bash[20728]: audit 2026-03-09T15:50:21.245333+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:50:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:22 vm09 bash[22983]: cluster 2026-03-09T15:50:21.058159+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:22 vm09 bash[22983]: cluster 2026-03-09T15:50:21.058159+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:22 vm09 bash[22983]: audit 2026-03-09T15:50:21.245095+0000 mon.c (mon.2) 5 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:50:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:22 vm09 bash[22983]: audit 2026-03-09T15:50:21.245095+0000 mon.c (mon.2) 5 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:50:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:22 vm09 bash[22983]: audit 2026-03-09T15:50:21.245333+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:50:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:22 vm09 bash[22983]: audit 2026-03-09T15:50:21.245333+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:50:23.433 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:23 vm01 bash[28152]: audit 2026-03-09T15:50:22.149460+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:23 vm01 bash[28152]: audit 2026-03-09T15:50:22.149460+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:23 vm01 bash[28152]: cluster 2026-03-09T15:50:22.151222+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:23 vm01 bash[28152]: cluster 2026-03-09T15:50:22.151222+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:23 vm01 bash[28152]: audit 2026-03-09T15:50:22.152946+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:23 vm01 bash[28152]: audit 2026-03-09T15:50:22.152946+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:23 vm01 bash[28152]: audit 2026-03-09T15:50:22.153317+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:23 vm01 bash[28152]: audit 2026-03-09T15:50:22.153317+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:23 vm01 bash[28152]: audit 2026-03-09T15:50:22.153784+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:23 vm01 bash[28152]: audit 2026-03-09T15:50:22.153784+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:23 vm01 bash[20728]: audit 2026-03-09T15:50:22.149460+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:23 vm01 bash[20728]: audit 2026-03-09T15:50:22.149460+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
15:50:23 vm01 bash[20728]: cluster 2026-03-09T15:50:22.151222+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:23 vm01 bash[20728]: cluster 2026-03-09T15:50:22.151222+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:23 vm01 bash[20728]: audit 2026-03-09T15:50:22.152946+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:23 vm01 bash[20728]: audit 2026-03-09T15:50:22.152946+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:23 vm01 bash[20728]: audit 2026-03-09T15:50:22.153317+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:23 vm01 bash[20728]: audit 2026-03-09T15:50:22.153317+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:23 vm01 bash[20728]: audit 2026-03-09T15:50:22.153784+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:23.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:23 vm01 bash[20728]: audit 2026-03-09T15:50:22.153784+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:23 vm09 bash[22983]: audit 2026-03-09T15:50:22.149460+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T15:50:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:23 vm09 bash[22983]: audit 2026-03-09T15:50:22.149460+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T15:50:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:23 vm09 bash[22983]: cluster 2026-03-09T15:50:22.151222+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T15:50:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:23 vm09 bash[22983]: cluster 2026-03-09T15:50:22.151222+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T15:50:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:23 vm09 bash[22983]: audit 2026-03-09T15:50:22.152946+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 
2026-03-09T15:50:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:23 vm09 bash[22983]: audit 2026-03-09T15:50:22.152946+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:23 vm09 bash[22983]: audit 2026-03-09T15:50:22.153317+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:23 vm09 bash[22983]: audit 2026-03-09T15:50:22.153317+0000 mon.c (mon.2) 6 : audit [INF] from='osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:23 vm09 bash[22983]: audit 2026-03-09T15:50:22.153784+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:23 vm09 bash[22983]: audit 2026-03-09T15:50:22.153784+0000 mon.a (mon.0) 349 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: cluster 2026-03-09T15:50:23.058470+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: cluster 2026-03-09T15:50:23.058470+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.152381+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.152381+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: cluster 2026-03-09T15:50:23.157395+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: cluster 2026-03-09T15:50:23.157395+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.159854+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.159854+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.171386+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.171386+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.493605+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.493605+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.505068+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.505068+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.506209+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.506209+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.506852+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.506852+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.515154+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:23.515154+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:24.160778+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: 
dispatch 2026-03-09T15:50:24.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:24 vm01 bash[20728]: audit 2026-03-09T15:50:24.160778+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: cluster 2026-03-09T15:50:23.058470+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: cluster 2026-03-09T15:50:23.058470+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.152381+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.152381+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: cluster 2026-03-09T15:50:23.157395+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: cluster 2026-03-09T15:50:23.157395+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.159854+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.159854+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.171386+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.171386+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.493605+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.493605+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.505068+0000 mon.a (mon.0) 355 : audit [INF] 
from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.505068+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.506209+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.506209+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.506852+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.506852+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.515154+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:23.515154+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:24.160778+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.767 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:24 vm01 bash[28152]: audit 2026-03-09T15:50:24.160778+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.863 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.854+0000 7f1e11ffb640 1 -- 192.168.123.101:0/965825948 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f1de4002bf0 con 0x7f1df4077540 2026-03-09T15:50:24.877 INFO:teuthology.orchestra.run.vm01.stdout:Created osd(s) 1 on host 'vm01' 2026-03-09T15:50:24.878 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.862+0000 7f1e231ce640 1 -- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f1df4077540 msgr2=0x7f1df4079a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:50:24.878 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.862+0000 7f1e231ce640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f1df4077540 0x7f1df4079a00 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f1e0400adb0 tx=0x7f1e0400a310 comp rx=0 tx=0).stop 2026-03-09T15:50:24.878 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.862+0000 7f1e231ce640 1 -- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1e1c102bd0 msgr2=0x7f1e1c19c950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:50:24.878 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.862+0000 7f1e231ce640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1e1c102bd0 0x7f1e1c19c950 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f1e0c00b550 tx=0x7f1e0c00ba20 comp rx=0 tx=0).stop 2026-03-09T15:50:24.878 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.866+0000 7f1e231ce640 1 -- 192.168.123.101:0/965825948 shutdown_connections 2026-03-09T15:50:24.878 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.866+0000 7f1e231ce640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1e1c103590 0x7f1e1c1a39d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:24.878 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.866+0000 7f1e231ce640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f1df4077540 0x7f1df4079a00 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:24.878 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.866+0000 7f1e231ce640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1e1c102bd0 0x7f1e1c19c950 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:24.878 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.866+0000 7f1e231ce640 1 --2- 192.168.123.101:0/965825948 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1e1c1019d0 0x7f1e1c19c410 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:24.878 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.866+0000 7f1e231ce640 1 -- 192.168.123.101:0/965825948 >> 192.168.123.101:0/965825948 conn(0x7f1e1c0fd180 msgr2=0x7f1e1c0fec80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:50:24.878 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.866+0000 7f1e231ce640 1 -- 192.168.123.101:0/965825948 shutdown_connections 2026-03-09T15:50:24.878 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:24.866+0000 7f1e231ce640 1 -- 192.168.123.101:0/965825948 wait complete. 
2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: cluster 2026-03-09T15:50:23.058470+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: cluster 2026-03-09T15:50:23.058470+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.152381+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.152381+0000 mon.a (mon.0) 350 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: cluster 2026-03-09T15:50:23.157395+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: cluster 2026-03-09T15:50:23.157395+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.159854+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.159854+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.171386+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.171386+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.493605+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.493605+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.505068+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.505068+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.506209+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.506209+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.506852+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.506852+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.515154+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:23.515154+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:24.160778+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:24 vm09 bash[22983]: audit 2026-03-09T15:50:24.160778+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:25.019 DEBUG:teuthology.orchestra.run.vm01:osd.1> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.1.service 2026-03-09T15:50:25.022 INFO:tasks.cephadm:Deploying osd.2 on vm01 with /dev/vdc... 
2026-03-09T15:50:25.022 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- lvm zap /dev/vdc 2026-03-09T15:50:25.417 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:50:25 vm01 bash[36842]: debug 2026-03-09T15:50:25.070+0000 7f43dd853640 -1 osd.1 0 waiting for initial osdmap 2026-03-09T15:50:25.418 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:50:25 vm01 bash[36842]: debug 2026-03-09T15:50:25.110+0000 7f43d8669640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: cluster 2026-03-09T15:50:22.208164+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: cluster 2026-03-09T15:50:22.208164+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: cluster 2026-03-09T15:50:22.208222+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: cluster 2026-03-09T15:50:22.208222+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: audit 2026-03-09T15:50:24.839538+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: audit 2026-03-09T15:50:24.839538+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: audit 2026-03-09T15:50:24.848225+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: audit 2026-03-09T15:50:24.848225+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: audit 2026-03-09T15:50:24.854410+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: audit 2026-03-09T15:50:24.854410+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: audit 2026-03-09T15:50:25.080528+0000 mon.a (mon.0) 363 : audit [INF] from='osd.1 ' entity='osd.1' 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: audit 2026-03-09T15:50:25.080528+0000 mon.a (mon.0) 363 : audit [INF] from='osd.1 ' entity='osd.1' 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: audit 2026-03-09T15:50:25.160667+0000 mon.a (mon.0) 364 : audit [DBG] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:25 vm01 bash[28152]: audit 2026-03-09T15:50:25.160667+0000 mon.a (mon.0) 364 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: cluster 2026-03-09T15:50:22.208164+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: cluster 2026-03-09T15:50:22.208164+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: cluster 2026-03-09T15:50:22.208222+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: cluster 2026-03-09T15:50:22.208222+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: audit 2026-03-09T15:50:24.839538+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: audit 2026-03-09T15:50:24.839538+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: audit 2026-03-09T15:50:24.848225+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: audit 2026-03-09T15:50:24.848225+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: audit 2026-03-09T15:50:24.854410+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: audit 2026-03-09T15:50:24.854410+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: audit 2026-03-09T15:50:25.080528+0000 mon.a (mon.0) 363 : audit [INF] from='osd.1 ' entity='osd.1' 2026-03-09T15:50:25.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: audit 2026-03-09T15:50:25.080528+0000 mon.a (mon.0) 363 : audit [INF] from='osd.1 ' entity='osd.1' 2026-03-09T15:50:25.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: audit 2026-03-09T15:50:25.160667+0000 mon.a (mon.0) 364 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:25.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:25 vm01 bash[20728]: audit 2026-03-09T15:50:25.160667+0000 mon.a (mon.0) 364 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 1}]: dispatch 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: cluster 2026-03-09T15:50:22.208164+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: cluster 2026-03-09T15:50:22.208164+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: cluster 2026-03-09T15:50:22.208222+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: cluster 2026-03-09T15:50:22.208222+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: audit 2026-03-09T15:50:24.839538+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: audit 2026-03-09T15:50:24.839538+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: audit 2026-03-09T15:50:24.848225+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: audit 2026-03-09T15:50:24.848225+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: audit 2026-03-09T15:50:24.854410+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: audit 2026-03-09T15:50:24.854410+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: audit 2026-03-09T15:50:25.080528+0000 mon.a (mon.0) 363 : audit [INF] from='osd.1 ' entity='osd.1' 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: audit 2026-03-09T15:50:25.080528+0000 mon.a (mon.0) 363 : audit [INF] from='osd.1 ' entity='osd.1' 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: audit 2026-03-09T15:50:25.160667+0000 mon.a (mon.0) 364 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:25 vm09 bash[22983]: audit 2026-03-09T15:50:25.160667+0000 mon.a (mon.0) 364 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:26 vm01 bash[28152]: cluster 2026-03-09T15:50:25.058722+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:26 
vm01 bash[28152]: cluster 2026-03-09T15:50:25.058722+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:26 vm01 bash[28152]: cluster 2026-03-09T15:50:26.085323+0000 mon.a (mon.0) 365 : cluster [INF] osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] boot 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:26 vm01 bash[28152]: cluster 2026-03-09T15:50:26.085323+0000 mon.a (mon.0) 365 : cluster [INF] osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] boot 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:26 vm01 bash[28152]: cluster 2026-03-09T15:50:26.085465+0000 mon.a (mon.0) 366 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:26 vm01 bash[28152]: cluster 2026-03-09T15:50:26.085465+0000 mon.a (mon.0) 366 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:26 vm01 bash[28152]: audit 2026-03-09T15:50:26.085576+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:26 vm01 bash[28152]: audit 2026-03-09T15:50:26.085576+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:26 vm01 bash[20728]: cluster 2026-03-09T15:50:25.058722+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:26 vm01 bash[20728]: cluster 2026-03-09T15:50:25.058722+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:26 vm01 bash[20728]: cluster 2026-03-09T15:50:26.085323+0000 mon.a (mon.0) 365 : cluster [INF] osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] boot 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:26 vm01 bash[20728]: cluster 2026-03-09T15:50:26.085323+0000 mon.a (mon.0) 365 : cluster [INF] osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] boot 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:26 vm01 bash[20728]: cluster 2026-03-09T15:50:26.085465+0000 mon.a (mon.0) 366 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:26 vm01 bash[20728]: cluster 2026-03-09T15:50:26.085465+0000 mon.a (mon.0) 366 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:26 vm01 bash[20728]: audit 2026-03-09T15:50:26.085576+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:26.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:26 vm01 bash[20728]: audit 2026-03-09T15:50:26.085576+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 1}]: dispatch 2026-03-09T15:50:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:26 vm09 bash[22983]: cluster 2026-03-09T15:50:25.058722+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:26 vm09 bash[22983]: cluster 2026-03-09T15:50:25.058722+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T15:50:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:26 vm09 bash[22983]: cluster 2026-03-09T15:50:26.085323+0000 mon.a (mon.0) 365 : cluster [INF] osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] boot 2026-03-09T15:50:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:26 vm09 bash[22983]: cluster 2026-03-09T15:50:26.085323+0000 mon.a (mon.0) 365 : cluster [INF] osd.1 [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] boot 2026-03-09T15:50:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:26 vm09 bash[22983]: cluster 2026-03-09T15:50:26.085465+0000 mon.a (mon.0) 366 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T15:50:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:26 vm09 bash[22983]: cluster 2026-03-09T15:50:26.085465+0000 mon.a (mon.0) 366 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T15:50:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:26 vm09 bash[22983]: audit 2026-03-09T15:50:26.085576+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:26 vm09 bash[22983]: audit 2026-03-09T15:50:26.085576+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:50:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:28 vm09 bash[22983]: cluster 2026-03-09T15:50:27.059008+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail 2026-03-09T15:50:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:28 vm09 bash[22983]: cluster 2026-03-09T15:50:27.059008+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail 2026-03-09T15:50:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:28 vm09 bash[22983]: cluster 2026-03-09T15:50:27.089512+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T15:50:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:28 vm09 bash[22983]: cluster 2026-03-09T15:50:27.089512+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T15:50:28.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:28 vm01 bash[28152]: cluster 2026-03-09T15:50:27.059008+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail 2026-03-09T15:50:28.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:28 vm01 bash[28152]: cluster 2026-03-09T15:50:27.059008+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail 2026-03-09T15:50:28.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:28 vm01 bash[28152]: cluster 2026-03-09T15:50:27.089512+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 
2026-03-09T15:50:28.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:28 vm01 bash[28152]: cluster 2026-03-09T15:50:27.089512+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T15:50:28.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:28 vm01 bash[20728]: cluster 2026-03-09T15:50:27.059008+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail 2026-03-09T15:50:28.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:28 vm01 bash[20728]: cluster 2026-03-09T15:50:27.059008+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail 2026-03-09T15:50:28.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:28 vm01 bash[20728]: cluster 2026-03-09T15:50:27.089512+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T15:50:28.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:28 vm01 bash[20728]: cluster 2026-03-09T15:50:27.089512+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T15:50:29.678 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:50:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:30 vm09 bash[22983]: cluster 2026-03-09T15:50:29.059255+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:30 vm09 bash[22983]: cluster 2026-03-09T15:50:29.059255+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:30.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:30 vm01 bash[28152]: cluster 2026-03-09T15:50:29.059255+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:30.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:30 vm01 bash[28152]: cluster 2026-03-09T15:50:29.059255+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:30.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:30 vm01 bash[20728]: cluster 2026-03-09T15:50:29.059255+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:30.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:30 vm01 bash[20728]: cluster 2026-03-09T15:50:29.059255+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:31.503 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:50:31.520 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch daemon add osd vm01:/dev/vdc 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: cephadm 2026-03-09T15:50:30.675628+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: cephadm 2026-03-09T15:50:30.675628+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.681874+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.681874+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.695964+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.695964+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.697958+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.697958+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.698853+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.698853+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.699314+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.699314+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.706652+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: audit 2026-03-09T15:50:30.706652+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: cluster 2026-03-09T15:50:31.059532+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:31 vm01 bash[28152]: cluster 2026-03-09T15:50:31.059532+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB 
/ 40 GiB avail 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: cephadm 2026-03-09T15:50:30.675628+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: cephadm 2026-03-09T15:50:30.675628+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.681874+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.681874+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.695964+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.695964+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.697958+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.697958+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.698853+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.698853+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.699314+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.699314+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.706652+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: audit 2026-03-09T15:50:30.706652+0000 mon.a (mon.0) 374 : audit [INF] 
from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: cluster 2026-03-09T15:50:31.059532+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:31 vm01 bash[20728]: cluster 2026-03-09T15:50:31.059532+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: cephadm 2026-03-09T15:50:30.675628+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: cephadm 2026-03-09T15:50:30.675628+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.681874+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.681874+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.695964+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.695964+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.697958+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.697958+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.698853+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.698853+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.699314+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.699314+0000 mon.a (mon.0) 373 : audit [INF] 
from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.706652+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: audit 2026-03-09T15:50:30.706652+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: cluster 2026-03-09T15:50:31.059532+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:32.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:31 vm09 bash[22983]: cluster 2026-03-09T15:50:31.059532+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:34 vm09 bash[22983]: cluster 2026-03-09T15:50:33.059815+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:34 vm09 bash[22983]: cluster 2026-03-09T15:50:33.059815+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:34.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:34 vm01 bash[28152]: cluster 2026-03-09T15:50:33.059815+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:34.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:34 vm01 bash[28152]: cluster 2026-03-09T15:50:33.059815+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:34.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:34 vm01 bash[20728]: cluster 2026-03-09T15:50:33.059815+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:34.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:34 vm01 bash[20728]: cluster 2026-03-09T15:50:33.059815+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:36.149 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:50:36.301 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f966ab1c640 1 -- 192.168.123.101:0/264692566 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f96640770a0 msgr2=0x7f9664075500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:50:36.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f966ab1c640 1 --2- 192.168.123.101:0/264692566 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f96640770a0 0x7f9664075500 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f9654009a30 tx=0x7f965402f240 comp rx=0 tx=0).stop 2026-03-09T15:50:36.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f966ab1c640 1 -- 192.168.123.101:0/264692566 shutdown_connections 2026-03-09T15:50:36.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 
7f966ab1c640 1 --2- 192.168.123.101:0/264692566 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f96641064a0 0x7f96641113b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:36.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f966ab1c640 1 --2- 192.168.123.101:0/264692566 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9664075a40 0x7f9664075ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:36.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f966ab1c640 1 --2- 192.168.123.101:0/264692566 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f96640770a0 0x7f9664075500 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:36.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f966ab1c640 1 -- 192.168.123.101:0/264692566 >> 192.168.123.101:0/264692566 conn(0x7f96640fe290 msgr2=0x7f96641006b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:50:36.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f966ab1c640 1 -- 192.168.123.101:0/264692566 shutdown_connections 2026-03-09T15:50:36.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f966ab1c640 1 -- 192.168.123.101:0/264692566 wait complete. 2026-03-09T15:50:36.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f966ab1c640 1 Processor -- start 2026-03-09T15:50:36.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f966ab1c640 1 -- start start 2026-03-09T15:50:36.303 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f966ab1c640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9664075a40 0x7f96641a08b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:50:36.303 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.294+0000 7f9668891640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9664075a40 0x7f96641a08b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9668891640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9664075a40 0x7f96641a08b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:48302/0 (socket says 192.168.123.101:48302) 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f966ab1c640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f96640770a0 0x7f96641a0df0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f966ab1c640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f96641064a0 0x7f96641a7e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f966ab1c640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f9664114150 con 0x7f9664075a40 
2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f966ab1c640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f9664113fd0 con 0x7f96641064a0 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f966ab1c640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f96641142d0 con 0x7f96640770a0 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9668891640 1 -- 192.168.123.101:0/174865661 learned_addr learned my addr 192.168.123.101:0/174865661 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9668891640 1 -- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f96640770a0 msgr2=0x7f96641a0df0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f965bfff640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f96640770a0 0x7f96641a0df0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9669092640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f96641064a0 0x7f96641a7e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9668891640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f96640770a0 0x7f96641a0df0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9668891640 1 -- 192.168.123.101:0/174865661 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f96641064a0 msgr2=0x7f96641a7e70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9668891640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f96641064a0 0x7f96641a7e70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9668891640 1 -- 192.168.123.101:0/174865661 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f96641a84e0 con 0x7f9664075a40 2026-03-09T15:50:36.304 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f965bfff640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f96640770a0 0x7f96641a0df0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:50:36.305 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9669092640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f96641064a0 0x7f96641a7e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:50:36.305 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9668891640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9664075a40 0x7f96641a08b0 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f9654002410 tx=0x7f9654004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:50:36.305 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9659ffb640 1 -- 192.168.123.101:0/174865661 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f96540047c0 con 0x7f9664075a40 2026-03-09T15:50:36.305 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f966ab1c640 1 -- 192.168.123.101:0/174865661 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f96641a8770 con 0x7f9664075a40 2026-03-09T15:50:36.305 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f966ab1c640 1 -- 192.168.123.101:0/174865661 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f96641a8c50 con 0x7f9664075a40 2026-03-09T15:50:36.307 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9659ffb640 1 -- 192.168.123.101:0/174865661 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f9654004da0 con 0x7f9664075a40 2026-03-09T15:50:36.307 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9659ffb640 1 -- 192.168.123.101:0/174865661 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9654038510 con 0x7f9664075a40 2026-03-09T15:50:36.308 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9659ffb640 1 -- 192.168.123.101:0/174865661 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 13) ==== 99979+0+0 (secure 0 0 0) 0x7f96540386b0 con 0x7f9664075a40 2026-03-09T15:50:36.308 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9659ffb640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f962c077680 0x7f962c079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:50:36.308 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f9659ffb640 1 -- 192.168.123.101:0/174865661 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(14..14 src has 1..14) ==== 2189+0+0 (secure 0 0 0) 0x7f96540be3a0 con 0x7f9664075a40 2026-03-09T15:50:36.308 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.298+0000 7f966ab1c640 1 -- 192.168.123.101:0/174865661 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9630005180 con 0x7f9664075a40 2026-03-09T15:50:36.311 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.302+0000 7f965bfff640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f962c077680 0x7f962c079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto 
rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:50:36.311 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.302+0000 7f965bfff640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f962c077680 0x7f962c079b40 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f96641a1dd0 tx=0x7f964c005e90 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:50:36.311 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.302+0000 7f9659ffb640 1 -- 192.168.123.101:0/174865661 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f9654047050 con 0x7f9664075a40 2026-03-09T15:50:36.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:36.402+0000 7f966ab1c640 1 -- 192.168.123.101:0/174865661 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7f9630002bf0 con 0x7f962c077680 2026-03-09T15:50:36.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:36 vm01 bash[28152]: cluster 2026-03-09T15:50:35.060097+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:36.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:36 vm01 bash[28152]: cluster 2026-03-09T15:50:35.060097+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:36.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:36 vm01 bash[20728]: cluster 2026-03-09T15:50:35.060097+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:36.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:36 vm01 bash[20728]: cluster 2026-03-09T15:50:35.060097+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:36 vm09 bash[22983]: cluster 2026-03-09T15:50:35.060097+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:36 vm09 bash[22983]: cluster 2026-03-09T15:50:35.060097+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:37 vm01 bash[28152]: audit 2026-03-09T15:50:36.407180+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:37 vm01 bash[28152]: audit 2026-03-09T15:50:36.407180+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:37 vm01 bash[28152]: audit 2026-03-09T15:50:36.408951+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": 
"json"}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:37 vm01 bash[28152]: audit 2026-03-09T15:50:36.408951+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:37 vm01 bash[28152]: audit 2026-03-09T15:50:36.410568+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:37 vm01 bash[28152]: audit 2026-03-09T15:50:36.410568+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:37 vm01 bash[28152]: audit 2026-03-09T15:50:36.411042+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:37 vm01 bash[28152]: audit 2026-03-09T15:50:36.411042+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:37 vm01 bash[20728]: audit 2026-03-09T15:50:36.407180+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:37 vm01 bash[20728]: audit 2026-03-09T15:50:36.407180+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:37 vm01 bash[20728]: audit 2026-03-09T15:50:36.408951+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:37 vm01 bash[20728]: audit 2026-03-09T15:50:36.408951+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:37 vm01 bash[20728]: audit 2026-03-09T15:50:36.410568+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:37 vm01 bash[20728]: audit 2026-03-09T15:50:36.410568+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:37 vm01 bash[20728]: audit 2026-03-09T15:50:36.411042+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:37.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:37 vm01 bash[20728]: audit 2026-03-09T15:50:36.411042+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:37 vm09 bash[22983]: audit 2026-03-09T15:50:36.407180+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:37 vm09 bash[22983]: audit 2026-03-09T15:50:36.407180+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:50:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:37 vm09 bash[22983]: audit 2026-03-09T15:50:36.408951+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:50:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:37 vm09 bash[22983]: audit 2026-03-09T15:50:36.408951+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:50:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:37 vm09 bash[22983]: audit 2026-03-09T15:50:36.410568+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:37 vm09 bash[22983]: audit 2026-03-09T15:50:36.410568+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:50:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:37 vm09 bash[22983]: audit 2026-03-09T15:50:36.411042+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:37 vm09 bash[22983]: audit 2026-03-09T15:50:36.411042+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:38.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:38 vm01 bash[28152]: cluster 2026-03-09T15:50:37.060374+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:38.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:38 vm01 bash[28152]: cluster 2026-03-09T15:50:37.060374+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:38.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:38 vm01 bash[20728]: cluster 2026-03-09T15:50:37.060374+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 
2026-03-09T15:50:38.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:38 vm01 bash[20728]: cluster 2026-03-09T15:50:37.060374+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:38 vm09 bash[22983]: cluster 2026-03-09T15:50:37.060374+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:38 vm09 bash[22983]: cluster 2026-03-09T15:50:37.060374+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:40.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:40 vm01 bash[28152]: cluster 2026-03-09T15:50:39.060615+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:40.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:40 vm01 bash[28152]: cluster 2026-03-09T15:50:39.060615+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:40.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:40 vm01 bash[20728]: cluster 2026-03-09T15:50:39.060615+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:40.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:40 vm01 bash[20728]: cluster 2026-03-09T15:50:39.060615+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:40 vm09 bash[22983]: cluster 2026-03-09T15:50:39.060615+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:40 vm09 bash[22983]: cluster 2026-03-09T15:50:39.060615+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:42 vm01 bash[28152]: cluster 2026-03-09T15:50:41.060871+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:42 vm01 bash[28152]: cluster 2026-03-09T15:50:41.060871+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:42 vm01 bash[28152]: audit 2026-03-09T15:50:41.797714+0000 mon.a (mon.0) 378 : audit [INF] from='client.? 192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]: dispatch 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:42 vm01 bash[28152]: audit 2026-03-09T15:50:41.797714+0000 mon.a (mon.0) 378 : audit [INF] from='client.? 192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]: dispatch 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:42 vm01 bash[28152]: audit 2026-03-09T15:50:41.800593+0000 mon.a (mon.0) 379 : audit [INF] from='client.? 
192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]': finished 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:42 vm01 bash[28152]: audit 2026-03-09T15:50:41.800593+0000 mon.a (mon.0) 379 : audit [INF] from='client.? 192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]': finished 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:42 vm01 bash[28152]: cluster 2026-03-09T15:50:41.803307+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:42 vm01 bash[28152]: cluster 2026-03-09T15:50:41.803307+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:42 vm01 bash[28152]: audit 2026-03-09T15:50:41.804415+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:42 vm01 bash[28152]: audit 2026-03-09T15:50:41.804415+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:42 vm01 bash[20728]: cluster 2026-03-09T15:50:41.060871+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:42 vm01 bash[20728]: cluster 2026-03-09T15:50:41.060871+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:42 vm01 bash[20728]: audit 2026-03-09T15:50:41.797714+0000 mon.a (mon.0) 378 : audit [INF] from='client.? 192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]: dispatch 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:42 vm01 bash[20728]: audit 2026-03-09T15:50:41.797714+0000 mon.a (mon.0) 378 : audit [INF] from='client.? 192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]: dispatch 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:42 vm01 bash[20728]: audit 2026-03-09T15:50:41.800593+0000 mon.a (mon.0) 379 : audit [INF] from='client.? 192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]': finished 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:42 vm01 bash[20728]: audit 2026-03-09T15:50:41.800593+0000 mon.a (mon.0) 379 : audit [INF] from='client.? 
192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]': finished 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:42 vm01 bash[20728]: cluster 2026-03-09T15:50:41.803307+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:42 vm01 bash[20728]: cluster 2026-03-09T15:50:41.803307+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:42 vm01 bash[20728]: audit 2026-03-09T15:50:41.804415+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:42.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:42 vm01 bash[20728]: audit 2026-03-09T15:50:41.804415+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:42 vm09 bash[22983]: cluster 2026-03-09T15:50:41.060871+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:42 vm09 bash[22983]: cluster 2026-03-09T15:50:41.060871+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:42 vm09 bash[22983]: audit 2026-03-09T15:50:41.797714+0000 mon.a (mon.0) 378 : audit [INF] from='client.? 192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]: dispatch 2026-03-09T15:50:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:42 vm09 bash[22983]: audit 2026-03-09T15:50:41.797714+0000 mon.a (mon.0) 378 : audit [INF] from='client.? 192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]: dispatch 2026-03-09T15:50:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:42 vm09 bash[22983]: audit 2026-03-09T15:50:41.800593+0000 mon.a (mon.0) 379 : audit [INF] from='client.? 192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]': finished 2026-03-09T15:50:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:42 vm09 bash[22983]: audit 2026-03-09T15:50:41.800593+0000 mon.a (mon.0) 379 : audit [INF] from='client.? 
192.168.123.101:0/3798228428' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1982b6d-a77c-466e-996b-c1ff61952b4b"}]': finished 2026-03-09T15:50:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:42 vm09 bash[22983]: cluster 2026-03-09T15:50:41.803307+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T15:50:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:42 vm09 bash[22983]: cluster 2026-03-09T15:50:41.803307+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T15:50:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:42 vm09 bash[22983]: audit 2026-03-09T15:50:41.804415+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:42 vm09 bash[22983]: audit 2026-03-09T15:50:41.804415+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:43.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:43 vm01 bash[28152]: audit 2026-03-09T15:50:42.444548+0000 mon.c (mon.2) 7 : audit [DBG] from='client.? 192.168.123.101:0/3391595385' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:43.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:43 vm01 bash[28152]: audit 2026-03-09T15:50:42.444548+0000 mon.c (mon.2) 7 : audit [DBG] from='client.? 192.168.123.101:0/3391595385' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:43 vm01 bash[20728]: audit 2026-03-09T15:50:42.444548+0000 mon.c (mon.2) 7 : audit [DBG] from='client.? 192.168.123.101:0/3391595385' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:43.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:43 vm01 bash[20728]: audit 2026-03-09T15:50:42.444548+0000 mon.c (mon.2) 7 : audit [DBG] from='client.? 192.168.123.101:0/3391595385' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:43 vm09 bash[22983]: audit 2026-03-09T15:50:42.444548+0000 mon.c (mon.2) 7 : audit [DBG] from='client.? 192.168.123.101:0/3391595385' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:43 vm09 bash[22983]: audit 2026-03-09T15:50:42.444548+0000 mon.c (mon.2) 7 : audit [DBG] from='client.? 
192.168.123.101:0/3391595385' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:50:44.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:44 vm01 bash[28152]: cluster 2026-03-09T15:50:43.061191+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:44.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:44 vm01 bash[28152]: cluster 2026-03-09T15:50:43.061191+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:44 vm01 bash[20728]: cluster 2026-03-09T15:50:43.061191+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:44.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:44 vm01 bash[20728]: cluster 2026-03-09T15:50:43.061191+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:44 vm09 bash[22983]: cluster 2026-03-09T15:50:43.061191+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:44 vm09 bash[22983]: cluster 2026-03-09T15:50:43.061191+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:46.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:46 vm01 bash[28152]: cluster 2026-03-09T15:50:45.061474+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:46.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:46 vm01 bash[28152]: cluster 2026-03-09T15:50:45.061474+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:46.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:46 vm01 bash[20728]: cluster 2026-03-09T15:50:45.061474+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:46.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:46 vm01 bash[20728]: cluster 2026-03-09T15:50:45.061474+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:46 vm09 bash[22983]: cluster 2026-03-09T15:50:45.061474+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:46 vm09 bash[22983]: cluster 2026-03-09T15:50:45.061474+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:48.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:48 vm01 bash[28152]: cluster 2026-03-09T15:50:47.061787+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:48.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:48 vm01 bash[28152]: cluster 2026-03-09T15:50:47.061787+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:48.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:48 vm01 
bash[20728]: cluster 2026-03-09T15:50:47.061787+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:48.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:48 vm01 bash[20728]: cluster 2026-03-09T15:50:47.061787+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:48 vm09 bash[22983]: cluster 2026-03-09T15:50:47.061787+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:48 vm09 bash[22983]: cluster 2026-03-09T15:50:47.061787+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:50.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:50 vm01 bash[28152]: cluster 2026-03-09T15:50:49.062096+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:50.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:50 vm01 bash[28152]: cluster 2026-03-09T15:50:49.062096+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:50.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:50 vm01 bash[20728]: cluster 2026-03-09T15:50:49.062096+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:50.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:50 vm01 bash[20728]: cluster 2026-03-09T15:50:49.062096+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:50 vm09 bash[22983]: cluster 2026-03-09T15:50:49.062096+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:50 vm09 bash[22983]: cluster 2026-03-09T15:50:49.062096+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:51.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:51 vm01 bash[20728]: audit 2026-03-09T15:50:50.649753+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T15:50:51.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:51 vm01 bash[20728]: audit 2026-03-09T15:50:50.649753+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T15:50:51.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:51 vm01 bash[20728]: audit 2026-03-09T15:50:50.650328+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:51.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:51 vm01 bash[20728]: audit 2026-03-09T15:50:50.650328+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:51.482 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:51 vm01 bash[28152]: 
audit 2026-03-09T15:50:50.649753+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T15:50:51.482 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:51 vm01 bash[28152]: audit 2026-03-09T15:50:50.649753+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T15:50:51.482 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:51 vm01 bash[28152]: audit 2026-03-09T15:50:50.650328+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:51.482 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:51 vm01 bash[28152]: audit 2026-03-09T15:50:50.650328+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:51 vm09 bash[22983]: audit 2026-03-09T15:50:50.649753+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T15:50:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:51 vm09 bash[22983]: audit 2026-03-09T15:50:50.649753+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T15:50:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:51 vm09 bash[22983]: audit 2026-03-09T15:50:50.650328+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:51 vm09 bash[22983]: audit 2026-03-09T15:50:50.650328+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:51.779 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:51 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:51.779 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:51 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:51.779 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:50:51 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:51.780 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:50:51 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:51.780 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:51 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:51.780 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:51 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:51.780 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:50:51 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:51.780 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:50:51 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:51.780 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:50:51 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:50:51.780 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:50:51 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:50:52.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:52 vm01 bash[28152]: cephadm 2026-03-09T15:50:50.650811+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm01 2026-03-09T15:50:52.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:52 vm01 bash[28152]: cephadm 2026-03-09T15:50:50.650811+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm01 2026-03-09T15:50:52.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:52 vm01 bash[28152]: cluster 2026-03-09T15:50:51.062342+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:52.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:52 vm01 bash[28152]: cluster 2026-03-09T15:50:51.062342+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:52.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:52 vm01 bash[28152]: audit 2026-03-09T15:50:51.804974+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:52.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:52 vm01 bash[28152]: audit 2026-03-09T15:50:51.804974+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:52.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:52 vm01 bash[28152]: audit 2026-03-09T15:50:51.810292+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:52.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:52 vm01 bash[28152]: audit 2026-03-09T15:50:51.810292+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:52.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:52 vm01 bash[28152]: audit 2026-03-09T15:50:51.815674+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:52.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:52 vm01 bash[28152]: audit 2026-03-09T15:50:51.815674+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:52.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:52 vm01 bash[20728]: cephadm 2026-03-09T15:50:50.650811+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm01 2026-03-09T15:50:52.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:52 vm01 bash[20728]: cephadm 2026-03-09T15:50:50.650811+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm01 2026-03-09T15:50:52.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:52 vm01 bash[20728]: cluster 2026-03-09T15:50:51.062342+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:52.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:52 vm01 bash[20728]: cluster 2026-03-09T15:50:51.062342+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:52.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:52 vm01 bash[20728]: audit 2026-03-09T15:50:51.804974+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T15:50:52.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:52 vm01 bash[20728]: audit 2026-03-09T15:50:51.804974+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:52.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:52 vm01 bash[20728]: audit 2026-03-09T15:50:51.810292+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:52.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:52 vm01 bash[20728]: audit 2026-03-09T15:50:51.810292+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:52.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:52 vm01 bash[20728]: audit 2026-03-09T15:50:51.815674+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:52.435 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:52 vm01 bash[20728]: audit 2026-03-09T15:50:51.815674+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:52 vm09 bash[22983]: cephadm 2026-03-09T15:50:50.650811+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm01 2026-03-09T15:50:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:52 vm09 bash[22983]: cephadm 2026-03-09T15:50:50.650811+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm01 2026-03-09T15:50:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:52 vm09 bash[22983]: cluster 2026-03-09T15:50:51.062342+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:52 vm09 bash[22983]: cluster 2026-03-09T15:50:51.062342+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:52 vm09 bash[22983]: audit 2026-03-09T15:50:51.804974+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:52 vm09 bash[22983]: audit 2026-03-09T15:50:51.804974+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:52 vm09 bash[22983]: audit 2026-03-09T15:50:51.810292+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:52 vm09 bash[22983]: audit 2026-03-09T15:50:51.810292+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:52 vm09 bash[22983]: audit 2026-03-09T15:50:51.815674+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:52 vm09 bash[22983]: audit 2026-03-09T15:50:51.815674+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' 2026-03-09T15:50:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:54 vm09 bash[22983]: cluster 2026-03-09T15:50:53.062579+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:54 vm09 bash[22983]: cluster 2026-03-09T15:50:53.062579+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:54.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:54 vm01 bash[28152]: cluster 2026-03-09T15:50:53.062579+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:54.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:54 vm01 bash[28152]: cluster 2026-03-09T15:50:53.062579+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:54.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:54 vm01 bash[20728]: cluster 2026-03-09T15:50:53.062579+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:54.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:54 vm01 bash[20728]: cluster 2026-03-09T15:50:53.062579+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:56 vm09 bash[22983]: cluster 2026-03-09T15:50:55.062807+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:56 vm09 bash[22983]: cluster 2026-03-09T15:50:55.062807+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:56 vm09 bash[22983]: audit 2026-03-09T15:50:55.442568+0000 mon.c (mon.2) 8 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:56 vm09 bash[22983]: audit 2026-03-09T15:50:55.442568+0000 mon.c (mon.2) 8 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:56 vm09 bash[22983]: audit 2026-03-09T15:50:55.442852+0000 mon.a (mon.0) 387 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:56 vm09 bash[22983]: audit 2026-03-09T15:50:55.442852+0000 mon.a (mon.0) 387 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:56 vm01 bash[28152]: cluster 2026-03-09T15:50:55.062807+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:56 vm01 bash[28152]: cluster 
2026-03-09T15:50:55.062807+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:56 vm01 bash[28152]: audit 2026-03-09T15:50:55.442568+0000 mon.c (mon.2) 8 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:56 vm01 bash[28152]: audit 2026-03-09T15:50:55.442568+0000 mon.c (mon.2) 8 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:56 vm01 bash[28152]: audit 2026-03-09T15:50:55.442852+0000 mon.a (mon.0) 387 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:56 vm01 bash[28152]: audit 2026-03-09T15:50:55.442852+0000 mon.a (mon.0) 387 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:56 vm01 bash[20728]: cluster 2026-03-09T15:50:55.062807+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:56 vm01 bash[20728]: cluster 2026-03-09T15:50:55.062807+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:56 vm01 bash[20728]: audit 2026-03-09T15:50:55.442568+0000 mon.c (mon.2) 8 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:56 vm01 bash[20728]: audit 2026-03-09T15:50:55.442568+0000 mon.c (mon.2) 8 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:56 vm01 bash[20728]: audit 2026-03-09T15:50:55.442852+0000 mon.a (mon.0) 387 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:56.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:56 vm01 bash[20728]: audit 2026-03-09T15:50:55.442852+0000 mon.a (mon.0) 387 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:50:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:56.275524+0000 mon.a (mon.0) 388 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T15:50:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 
2026-03-09T15:50:56.275524+0000 mon.a (mon.0) 388 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T15:50:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: cluster 2026-03-09T15:50:56.280495+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T15:50:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: cluster 2026-03-09T15:50:56.280495+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T15:50:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:56.281099+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:56.281099+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:56.281555+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:56.281555+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:56.281782+0000 mon.a (mon.0) 391 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:56.281782+0000 mon.a (mon.0) 391 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:57.278390+0000 mon.a (mon.0) 392 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:57.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:57.278390+0000 mon.a (mon.0) 392 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:57.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: cluster 2026-03-09T15:50:57.282004+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T15:50:57.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: cluster 2026-03-09T15:50:57.282004+0000 mon.a (mon.0) 393 : cluster 
[DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T15:50:57.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:57.282158+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:57.282158+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:57.288441+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:57 vm09 bash[22983]: audit 2026-03-09T15:50:57.288441+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:56.275524+0000 mon.a (mon.0) 388 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:56.275524+0000 mon.a (mon.0) 388 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: cluster 2026-03-09T15:50:56.280495+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: cluster 2026-03-09T15:50:56.280495+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:56.281099+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:56.281099+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:56.281555+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:56.281555+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: 
audit 2026-03-09T15:50:56.281782+0000 mon.a (mon.0) 391 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:56.281782+0000 mon.a (mon.0) 391 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:57.278390+0000 mon.a (mon.0) 392 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:57.278390+0000 mon.a (mon.0) 392 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: cluster 2026-03-09T15:50:57.282004+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: cluster 2026-03-09T15:50:57.282004+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:57.282158+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:57.282158+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:57.288441+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:57 vm01 bash[28152]: audit 2026-03-09T15:50:57.288441+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:56.275524+0000 mon.a (mon.0) 388 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:56.275524+0000 mon.a (mon.0) 388 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: cluster 2026-03-09T15:50:56.280495+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 
bash[20728]: cluster 2026-03-09T15:50:56.280495+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:56.281099+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:56.281099+0000 mon.c (mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:56.281555+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:56.281555+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:56.281782+0000 mon.a (mon.0) 391 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:56.281782+0000 mon.a (mon.0) 391 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:57.278390+0000 mon.a (mon.0) 392 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:57.278390+0000 mon.a (mon.0) 392 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: cluster 2026-03-09T15:50:57.282004+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: cluster 2026-03-09T15:50:57.282004+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:57.282158+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:57.282158+0000 mon.a (mon.0) 394 : 
audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:57.288441+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:57.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:57 vm01 bash[20728]: audit 2026-03-09T15:50:57.288441+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:58 vm09 bash[22983]: cluster 2026-03-09T15:50:57.062967+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:58 vm09 bash[22983]: cluster 2026-03-09T15:50:57.062967+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:58 vm09 bash[22983]: audit 2026-03-09T15:50:58.179186+0000 mon.a (mon.0) 396 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T15:50:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:58 vm09 bash[22983]: audit 2026-03-09T15:50:58.179186+0000 mon.a (mon.0) 396 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T15:50:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:58 vm09 bash[22983]: audit 2026-03-09T15:50:58.220133+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:58 vm09 bash[22983]: audit 2026-03-09T15:50:58.220133+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:58 vm09 bash[22983]: audit 2026-03-09T15:50:58.226862+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:58 vm09 bash[22983]: audit 2026-03-09T15:50:58.226862+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:58 vm09 bash[22983]: audit 2026-03-09T15:50:58.293598+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:58 vm09 bash[22983]: audit 2026-03-09T15:50:58.293598+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:58 vm01 bash[28152]: cluster 2026-03-09T15:50:57.062967+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:58 vm01 bash[28152]: cluster 2026-03-09T15:50:57.062967+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:58.683 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:58 vm01 bash[28152]: audit 2026-03-09T15:50:58.179186+0000 mon.a (mon.0) 396 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:58 vm01 bash[28152]: audit 2026-03-09T15:50:58.179186+0000 mon.a (mon.0) 396 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:58 vm01 bash[28152]: audit 2026-03-09T15:50:58.220133+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:58 vm01 bash[28152]: audit 2026-03-09T15:50:58.220133+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:58 vm01 bash[28152]: audit 2026-03-09T15:50:58.226862+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:58 vm01 bash[28152]: audit 2026-03-09T15:50:58.226862+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:58 vm01 bash[28152]: audit 2026-03-09T15:50:58.293598+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:58 vm01 bash[28152]: audit 2026-03-09T15:50:58.293598+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:58 vm01 bash[20728]: cluster 2026-03-09T15:50:57.062967+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:58 vm01 bash[20728]: cluster 2026-03-09T15:50:57.062967+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:58 vm01 bash[20728]: audit 2026-03-09T15:50:58.179186+0000 mon.a (mon.0) 396 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:58 vm01 bash[20728]: audit 2026-03-09T15:50:58.179186+0000 mon.a (mon.0) 396 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:58 vm01 bash[20728]: audit 2026-03-09T15:50:58.220133+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:58 vm01 bash[20728]: audit 2026-03-09T15:50:58.220133+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:58 vm01 bash[20728]: audit 2026-03-09T15:50:58.226862+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:58 vm01 bash[20728]: audit 2026-03-09T15:50:58.226862+0000 mon.a (mon.0) 398 : audit [INF] 
from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:58 vm01 bash[20728]: audit 2026-03-09T15:50:58.293598+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:58 vm01 bash[20728]: audit 2026-03-09T15:50:58.293598+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:59.345 INFO:teuthology.orchestra.run.vm01.stdout:Created osd(s) 2 on host 'vm01' 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.334+0000 7f9659ffb640 1 -- 192.168.123.101:0/174865661 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f9630002bf0 con 0x7f962c077680 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.334+0000 7f966ab1c640 1 -- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f962c077680 msgr2=0x7f962c079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.334+0000 7f966ab1c640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f962c077680 0x7f962c079b40 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f96641a1dd0 tx=0x7f964c005e90 comp rx=0 tx=0).stop 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.334+0000 7f966ab1c640 1 -- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9664075a40 msgr2=0x7f96641a08b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.334+0000 7f966ab1c640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9664075a40 0x7f96641a08b0 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f9654002410 tx=0x7f9654004290 comp rx=0 tx=0).stop 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.334+0000 7f966ab1c640 1 -- 192.168.123.101:0/174865661 shutdown_connections 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.334+0000 7f966ab1c640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f96641064a0 0x7f96641a7e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.334+0000 7f966ab1c640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f962c077680 0x7f962c079b40 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.334+0000 7f966ab1c640 1 --2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f96640770a0 0x7f96641a0df0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.334+0000 7f966ab1c640 1 
--2- 192.168.123.101:0/174865661 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9664075a40 0x7f96641a08b0 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.334+0000 7f966ab1c640 1 -- 192.168.123.101:0/174865661 >> 192.168.123.101:0/174865661 conn(0x7f96640fe290 msgr2=0x7f9664102300 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.338+0000 7f966ab1c640 1 -- 192.168.123.101:0/174865661 shutdown_connections 2026-03-09T15:50:59.346 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:50:59.338+0000 7f966ab1c640 1 -- 192.168.123.101:0/174865661 wait complete. 2026-03-09T15:50:59.426 DEBUG:teuthology.orchestra.run.vm01:osd.2> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.2.service 2026-03-09T15:50:59.427 INFO:tasks.cephadm:Deploying osd.3 on vm01 with /dev/vdb... 2026-03-09T15:50:59.427 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- lvm zap /dev/vdb 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: cluster 2026-03-09T15:50:56.403657+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: cluster 2026-03-09T15:50:56.403657+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: cluster 2026-03-09T15:50:56.403699+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: cluster 2026-03-09T15:50:56.403699+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: audit 2026-03-09T15:50:58.666060+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: audit 2026-03-09T15:50:58.666060+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: audit 2026-03-09T15:50:58.666588+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: audit 2026-03-09T15:50:58.666588+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: audit 2026-03-09T15:50:58.671204+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: 
audit 2026-03-09T15:50:58.671204+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: cluster 2026-03-09T15:50:59.184197+0000 mon.a (mon.0) 403 : cluster [INF] osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] boot 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: cluster 2026-03-09T15:50:59.184197+0000 mon.a (mon.0) 403 : cluster [INF] osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] boot 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: cluster 2026-03-09T15:50:59.185462+0000 mon.a (mon.0) 404 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: cluster 2026-03-09T15:50:59.185462+0000 mon.a (mon.0) 404 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: audit 2026-03-09T15:50:59.187898+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: audit 2026-03-09T15:50:59.187898+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: audit 2026-03-09T15:50:59.309677+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:50:59 vm09 bash[22983]: audit 2026-03-09T15:50:59.309677+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: cluster 2026-03-09T15:50:56.403657+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: cluster 2026-03-09T15:50:56.403657+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: cluster 2026-03-09T15:50:56.403699+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: cluster 2026-03-09T15:50:56.403699+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: audit 2026-03-09T15:50:58.666060+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: audit 2026-03-09T15:50:58.666060+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:59.683 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: audit 2026-03-09T15:50:58.666588+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: audit 2026-03-09T15:50:58.666588+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: audit 2026-03-09T15:50:58.671204+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: audit 2026-03-09T15:50:58.671204+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: cluster 2026-03-09T15:50:59.184197+0000 mon.a (mon.0) 403 : cluster [INF] osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] boot 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: cluster 2026-03-09T15:50:59.184197+0000 mon.a (mon.0) 403 : cluster [INF] osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] boot 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: cluster 2026-03-09T15:50:59.185462+0000 mon.a (mon.0) 404 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: cluster 2026-03-09T15:50:59.185462+0000 mon.a (mon.0) 404 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: audit 2026-03-09T15:50:59.187898+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: audit 2026-03-09T15:50:59.187898+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: audit 2026-03-09T15:50:59.309677+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:50:59 vm01 bash[28152]: audit 2026-03-09T15:50:59.309677+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: cluster 2026-03-09T15:50:56.403657+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: cluster 2026-03-09T15:50:56.403657+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: cluster 
2026-03-09T15:50:56.403699+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: cluster 2026-03-09T15:50:56.403699+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: audit 2026-03-09T15:50:58.666060+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: audit 2026-03-09T15:50:58.666060+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: audit 2026-03-09T15:50:58.666588+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: audit 2026-03-09T15:50:58.666588+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: audit 2026-03-09T15:50:58.671204+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: audit 2026-03-09T15:50:58.671204+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: cluster 2026-03-09T15:50:59.184197+0000 mon.a (mon.0) 403 : cluster [INF] osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] boot 2026-03-09T15:50:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: cluster 2026-03-09T15:50:59.184197+0000 mon.a (mon.0) 403 : cluster [INF] osd.2 [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] boot 2026-03-09T15:50:59.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: cluster 2026-03-09T15:50:59.185462+0000 mon.a (mon.0) 404 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T15:50:59.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: cluster 2026-03-09T15:50:59.185462+0000 mon.a (mon.0) 404 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T15:50:59.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: audit 2026-03-09T15:50:59.187898+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:59.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: audit 2026-03-09T15:50:59.187898+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:50:59.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: audit 2026-03-09T15:50:59.309677+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:50:59.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:50:59 vm01 bash[20728]: audit 2026-03-09T15:50:59.309677+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:51:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:00 vm09 bash[22983]: cluster 2026-03-09T15:50:59.063189+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:51:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:00 vm09 bash[22983]: cluster 2026-03-09T15:50:59.063189+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:51:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:00 vm09 bash[22983]: audit 2026-03-09T15:50:59.327391+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:00 vm09 bash[22983]: audit 2026-03-09T15:50:59.327391+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:00 vm09 bash[22983]: audit 2026-03-09T15:50:59.335884+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:00 vm09 bash[22983]: audit 2026-03-09T15:50:59.335884+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:00 vm01 bash[28152]: cluster 2026-03-09T15:50:59.063189+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:00 vm01 bash[28152]: cluster 2026-03-09T15:50:59.063189+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:00 vm01 bash[28152]: audit 2026-03-09T15:50:59.327391+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:00 vm01 bash[28152]: audit 2026-03-09T15:50:59.327391+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:00 vm01 bash[28152]: audit 2026-03-09T15:50:59.335884+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:00 vm01 bash[28152]: audit 2026-03-09T15:50:59.335884+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:00 vm01 bash[20728]: cluster 2026-03-09T15:50:59.063189+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:00 vm01 bash[20728]: cluster 2026-03-09T15:50:59.063189+0000 mgr.y 
(mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:00 vm01 bash[20728]: audit 2026-03-09T15:50:59.327391+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:00 vm01 bash[20728]: audit 2026-03-09T15:50:59.327391+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:00 vm01 bash[20728]: audit 2026-03-09T15:50:59.335884+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:00 vm01 bash[20728]: audit 2026-03-09T15:50:59.335884+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:01 vm09 bash[22983]: cluster 2026-03-09T15:51:00.340448+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T15:51:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:01 vm09 bash[22983]: cluster 2026-03-09T15:51:00.340448+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T15:51:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:01 vm09 bash[22983]: audit 2026-03-09T15:51:01.101510+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:01 vm09 bash[22983]: audit 2026-03-09T15:51:01.101510+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:01.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:01 vm01 bash[20728]: cluster 2026-03-09T15:51:00.340448+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T15:51:01.693 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:01 vm01 bash[20728]: cluster 2026-03-09T15:51:00.340448+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T15:51:01.693 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:01 vm01 bash[20728]: audit 2026-03-09T15:51:01.101510+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:01.693 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:01 vm01 bash[20728]: audit 2026-03-09T15:51:01.101510+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:01.694 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:01 vm01 bash[28152]: cluster 2026-03-09T15:51:00.340448+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e19: 3 total, 3 up, 
3 in 2026-03-09T15:51:01.694 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:01 vm01 bash[28152]: cluster 2026-03-09T15:51:00.340448+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T15:51:01.694 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:01 vm01 bash[28152]: audit 2026-03-09T15:51:01.101510+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:01.694 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:01 vm01 bash[28152]: audit 2026-03-09T15:51:01.101510+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:02 vm09 bash[22983]: cluster 2026-03-09T15:51:01.063450+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:02 vm09 bash[22983]: cluster 2026-03-09T15:51:01.063450+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:02 vm09 bash[22983]: audit 2026-03-09T15:51:01.345063+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:02 vm09 bash[22983]: audit 2026-03-09T15:51:01.345063+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:02 vm09 bash[22983]: cluster 2026-03-09T15:51:01.355311+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T15:51:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:02 vm09 bash[22983]: cluster 2026-03-09T15:51:01.355311+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T15:51:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:02 vm09 bash[22983]: audit 2026-03-09T15:51:01.356333+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:02 vm09 bash[22983]: audit 2026-03-09T15:51:01.356333+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:02 vm01 bash[28152]: cluster 2026-03-09T15:51:01.063450+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 
pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:02 vm01 bash[28152]: cluster 2026-03-09T15:51:01.063450+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:02 vm01 bash[28152]: audit 2026-03-09T15:51:01.345063+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:02 vm01 bash[28152]: audit 2026-03-09T15:51:01.345063+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:02 vm01 bash[28152]: cluster 2026-03-09T15:51:01.355311+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:02 vm01 bash[28152]: cluster 2026-03-09T15:51:01.355311+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:02 vm01 bash[28152]: audit 2026-03-09T15:51:01.356333+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:02 vm01 bash[28152]: audit 2026-03-09T15:51:01.356333+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:02 vm01 bash[20728]: cluster 2026-03-09T15:51:01.063450+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:02 vm01 bash[20728]: cluster 2026-03-09T15:51:01.063450+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:02 vm01 bash[20728]: audit 2026-03-09T15:51:01.345063+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:02 vm01 bash[20728]: audit 2026-03-09T15:51:01.345063+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:02 vm01 bash[20728]: 
cluster 2026-03-09T15:51:01.355311+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:02 vm01 bash[20728]: cluster 2026-03-09T15:51:01.355311+0000 mon.a (mon.0) 412 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:02 vm01 bash[20728]: audit 2026-03-09T15:51:01.356333+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:02.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:02 vm01 bash[20728]: audit 2026-03-09T15:51:01.356333+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.347995+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.347995+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: cluster 2026-03-09T15:51:02.351615+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: cluster 2026-03-09T15:51:02.351615+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.465094+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.465094+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.465245+0000 mon.a (mon.0) 417 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.465245+0000 mon.a (mon.0) 417 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.482886+0000 mon.a (mon.0) 418 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 
bash[22983]: audit 2026-03-09T15:51:02.482886+0000 mon.a (mon.0) 418 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.483010+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.483010+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.483407+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.483407+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.485238+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.485238+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.485283+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.485283+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.485322+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.485322+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.487954+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.487954+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 
2026-03-09T15:51:02.507673+0000 mon.c (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.507673+0000 mon.c (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.508139+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.508139+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.508259+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.508259+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.508365+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.508365+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.508432+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.508432+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.527106+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:03 vm09 bash[22983]: audit 2026-03-09T15:51:02.527106+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.347995+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.347995+0000 mon.a (mon.0) 414 : audit 
[INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: cluster 2026-03-09T15:51:02.351615+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: cluster 2026-03-09T15:51:02.351615+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.465094+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.465094+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.465245+0000 mon.a (mon.0) 417 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.465245+0000 mon.a (mon.0) 417 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.482886+0000 mon.a (mon.0) 418 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.482886+0000 mon.a (mon.0) 418 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.483010+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.483010+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.483407+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.483407+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.485238+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 
2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.485238+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.485283+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.485283+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.485322+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.485322+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.487954+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.487954+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.507673+0000 mon.c (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.507673+0000 mon.c (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.508139+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.508139+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.508259+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.508259+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.508365+0000 mon.a 
(mon.0) 425 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.508365+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.508432+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.508432+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.527106+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:03 vm01 bash[28152]: audit 2026-03-09T15:51:02.527106+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.347995+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.347995+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: cluster 2026-03-09T15:51:02.351615+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: cluster 2026-03-09T15:51:02.351615+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.465094+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.465094+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.465245+0000 mon.a (mon.0) 417 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.465245+0000 mon.a (mon.0) 
417 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.482886+0000 mon.a (mon.0) 418 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.482886+0000 mon.a (mon.0) 418 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.483010+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.483010+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.483407+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.483407+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.485238+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.485238+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.485283+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.485283+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.485322+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.485322+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.487954+0000 mon.b (mon.1) 9 : audit [INF] 
from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.487954+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.507673+0000 mon.c (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.507673+0000 mon.c (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.508139+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.508139+0000 mon.b (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.508259+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.508259+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.508365+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.508365+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.508432+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.508432+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.527106+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:03.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:03 vm01 bash[20728]: audit 2026-03-09T15:51:02.527106+0000 mon.c (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T15:51:04.104 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config 
/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:04 vm01 bash[28152]: cluster 2026-03-09T15:51:03.063756+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:04 vm01 bash[28152]: cluster 2026-03-09T15:51:03.063756+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:04 vm01 bash[28152]: cluster 2026-03-09T15:51:03.378661+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:04 vm01 bash[28152]: cluster 2026-03-09T15:51:03.378661+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:04 vm01 bash[28152]: cluster 2026-03-09T15:51:03.384001+0000 mon.a (mon.0) 428 : cluster [DBG] mgrmap e14: y(active, since 2m), standbys: x 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:04 vm01 bash[28152]: cluster 2026-03-09T15:51:03.384001+0000 mon.a (mon.0) 428 : cluster [DBG] mgrmap e14: y(active, since 2m), standbys: x 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:04 vm01 bash[20728]: cluster 2026-03-09T15:51:03.063756+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:04 vm01 bash[20728]: cluster 2026-03-09T15:51:03.063756+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:04 vm01 bash[20728]: cluster 2026-03-09T15:51:03.378661+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:04 vm01 bash[20728]: cluster 2026-03-09T15:51:03.378661+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:04 vm01 bash[20728]: cluster 2026-03-09T15:51:03.384001+0000 mon.a (mon.0) 428 : cluster [DBG] mgrmap e14: y(active, since 2m), standbys: x 2026-03-09T15:51:04.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:04 vm01 bash[20728]: cluster 2026-03-09T15:51:03.384001+0000 mon.a (mon.0) 428 : cluster [DBG] mgrmap e14: y(active, since 2m), standbys: x 2026-03-09T15:51:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:04 vm09 bash[22983]: cluster 2026-03-09T15:51:03.063756+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:04 vm09 bash[22983]: cluster 2026-03-09T15:51:03.063756+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:04 vm09 bash[22983]: cluster 2026-03-09T15:51:03.378661+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T15:51:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:04 vm09 bash[22983]: cluster 
2026-03-09T15:51:03.378661+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in
2026-03-09T15:51:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:04 vm09 bash[22983]: cluster 2026-03-09T15:51:03.384001+0000 mon.a (mon.0) 428 : cluster [DBG] mgrmap e14: y(active, since 2m), standbys: x
2026-03-09T15:51:05.750 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T15:51:05.765 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch daemon add osd vm01:/dev/vdb
2026-03-09T15:51:05.994 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:05 vm01 bash[20728]: cephadm 2026-03-09T15:51:04.980212+0000 mgr.y (mgr.14150) 121 : cephadm [INF] Detected new or changed devices on vm01
2026-03-09T15:51:05.994 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:05 vm01 bash[20728]: audit 2026-03-09T15:51:04.986243+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:05.994 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:05 vm01 bash[20728]: audit 2026-03-09T15:51:04.994197+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:05.994 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:05 vm01 bash[20728]: audit 2026-03-09T15:51:04.995301+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch
2026-03-09T15:51:05.994 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:05 vm01 bash[20728]: audit 2026-03-09T15:51:04.996091+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:51:05.994 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:05 vm01 bash[20728]: audit 2026-03-09T15:51:04.999802+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:51:06.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:05 vm01 bash[28152]: cephadm 2026-03-09T15:51:04.980212+0000 mgr.y (mgr.14150) 121 : cephadm [INF] Detected new or changed devices on vm01
2026-03-09T15:51:06.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:05 vm01 bash[28152]: audit 2026-03-09T15:51:04.986243+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:06.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:05 vm01 bash[28152]: audit 2026-03-09T15:51:04.994197+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:06.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:05 vm01 bash[28152]: audit 2026-03-09T15:51:04.995301+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch
2026-03-09T15:51:06.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:05 vm01 bash[28152]: audit 2026-03-09T15:51:04.996091+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:51:06.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:05 vm01 bash[28152]: audit 2026-03-09T15:51:04.999802+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:51:06.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:05 vm01 bash[28152]: audit 2026-03-09T15:51:05.035218+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:06.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:05 vm01 bash[28152]: cluster 2026-03-09T15:51:05.064041+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:06.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:05 vm01 bash[20728]: audit 2026-03-09T15:51:04.999802+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:51:06.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:05 vm01 bash[20728]: audit 2026-03-09T15:51:05.035218+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:06.184 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:05 vm01 bash[20728]: cluster 2026-03-09T15:51:05.064041+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:06.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:05 vm09 bash[22983]: cephadm 2026-03-09T15:51:04.980212+0000 mgr.y (mgr.14150) 121 : cephadm [INF] Detected new or changed devices on vm01
2026-03-09T15:51:06.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:05 vm09 bash[22983]: audit 2026-03-09T15:51:04.986243+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:06.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:05 vm09 bash[22983]: audit 2026-03-09T15:51:04.994197+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:06.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:05 vm09 bash[22983]: audit 2026-03-09T15:51:04.995301+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch
2026-03-09T15:51:06.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:05 vm09 bash[22983]: audit 2026-03-09T15:51:04.996091+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:51:06.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:05 vm09 bash[22983]: audit 2026-03-09T15:51:04.999802+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:51:06.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:05 vm09 bash[22983]: audit 2026-03-09T15:51:05.035218+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:06.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:05 vm09 bash[22983]: cluster 2026-03-09T15:51:05.064041+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:08 vm09 bash[22983]: cluster 2026-03-09T15:51:07.064388+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:08.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:08 vm01 bash[20728]: cluster 2026-03-09T15:51:07.064388+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:08.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:08 vm01 bash[28152]: cluster 2026-03-09T15:51:07.064388+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:10.388 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config
2026-03-09T15:51:10.514 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:10 vm01 bash[28152]: cluster 2026-03-09T15:51:09.064727+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:10.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:10 vm01 bash[20728]: cluster 2026-03-09T15:51:09.064727+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:10.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/2400753219 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff6a0102b10 msgr2=0x7ff6a0102f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:51:10.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 --2- 192.168.123.101:0/2400753219 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff6a0102b10 0x7ff6a0102f10 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7ff690009a30 tx=0x7ff69002f260 comp rx=0 tx=0).stop
2026-03-09T15:51:10.582 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/2400753219 shutdown_connections
2026-03-09T15:51:10.582 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 --2- 192.168.123.101:0/2400753219 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff6a01046d0 0x7ff6a010af60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:51:10.582 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 --2- 192.168.123.101:0/2400753219 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff6a0103d10 0x7ff6a0104190 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:51:10.582 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 --2- 192.168.123.101:0/2400753219 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff6a0102b10 0x7ff6a0102f10 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:51:10.582 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/2400753219 >> 192.168.123.101:0/2400753219 conn(0x7ff6a00fe2c0 msgr2=0x7ff6a01006e0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:51:10.582 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/2400753219 shutdown_connections
2026-03-09T15:51:10.582 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/2400753219 wait complete.
2026-03-09T15:51:10.582 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 Processor -- start
2026-03-09T15:51:10.583 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 -- start start
2026-03-09T15:51:10.583 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff6a0102b10 0x7ff6a019c460 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:51:10.583 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff6a0103d10 0x7ff6a019c9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:51:10.583 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff6a01046d0 0x7ff6a01a39b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:51:10.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7ff6a010d920 con 0x7ff6a01046d0
2026-03-09T15:51:10.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7ff6a010d7a0 con 0x7ff6a0102b10
2026-03-09T15:51:10.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a7a1a640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7ff6a010daa0 con 0x7ff6a0103d10
2026-03-09T15:51:10.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.574+0000 7ff6a578f640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff6a0102b10 0x7ff6a019c460 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:51:10.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a578f640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff6a0102b10 0x7ff6a019c460 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:34198/0 (socket says 192.168.123.101:34198)
2026-03-09T15:51:10.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a578f640 1 -- 192.168.123.101:0/4240815618 learned_addr learned my addr 192.168.123.101:0/4240815618 (peer_addr_for_me v2:192.168.123.101:0/0)
2026-03-09T15:51:10.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a5f90640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff6a01046d0 0x7ff6a01a39b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:51:10.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a4f8e640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff6a0103d10 0x7ff6a019c9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:51:10.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a578f640 1 -- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff6a0103d10 msgr2=0x7ff6a019c9a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:51:10.585 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a578f640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff6a0103d10 0x7ff6a019c9a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:51:10.585 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a578f640 1 -- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff6a01046d0 msgr2=0x7ff6a01a39b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:51:10.585 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a578f640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff6a01046d0 0x7ff6a01a39b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:51:10.585 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a578f640 1 -- 192.168.123.101:0/4240815618 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff6a01a40b0 con 0x7ff6a0102b10
2026-03-09T15:51:10.585 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a5f90640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff6a01046d0 0x7ff6a01a39b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-09T15:51:10.585 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a4f8e640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff6a0103d10 0x7ff6a019c9a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-09T15:51:10.585 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a578f640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff6a0102b10 0x7ff6a019c460 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7ff690009950 tx=0x7ff690002f60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:51:10.585 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6967fc640 1 -- 192.168.123.101:0/4240815618 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff690004260 con 0x7ff6a0102b10
2026-03-09T15:51:10.585 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/4240815618 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff6a01a4340 con 0x7ff6a0102b10
2026-03-09T15:51:10.585 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/4240815618 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff6a01a4870 con 0x7ff6a0102b10
2026-03-09T15:51:10.586 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6967fc640 1 -- 192.168.123.101:0/4240815618 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ff690004400 con 0x7ff6a0102b10
2026-03-09T15:51:10.586 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6967fc640 1 -- 192.168.123.101:0/4240815618 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff690041620 con 0x7ff6a0102b10
2026-03-09T15:51:10.587 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.578+0000 7ff6967fc640 1 -- 192.168.123.101:0/4240815618 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7ff6900417c0 con 0x7ff6a0102b10
2026-03-09T15:51:10.587 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.582+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/4240815618 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff6a0102f10 con 0x7ff6a0102b10
2026-03-09T15:51:10.588 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.582+0000 7ff6967fc640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff670077640 0x7ff670079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:51:10.588 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.582+0000 7ff6a4f8e640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff670077640 0x7ff670079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:51:10.588 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.582+0000 7ff6967fc640 1 -- 192.168.123.101:0/4240815618 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(22..22 src has 1..22) ==== 3007+0+0 (secure 0 0 0) 0x7ff690038510 con 0x7ff6a0102b10
2026-03-09T15:51:10.588 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.582+0000 7ff6a4f8e640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff670077640 0x7ff670079b00 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7ff6a019d910 tx=0x7ff688007450 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:51:10.591 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.582+0000 7ff6967fc640 1 -- 192.168.123.101:0/4240815618 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff690088270 con 0x7ff6a0102b10
2026-03-09T15:51:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:10 vm09 bash[22983]: cluster 2026-03-09T15:51:09.064727+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:10.708 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:10.702+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/4240815618 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7ff6a01a4b90 con 0x7ff670077640
2026-03-09T15:51:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:11 vm09 bash[22983]: audit 2026-03-09T15:51:10.708800+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T15:51:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:11 vm09 bash[22983]: audit 2026-03-09T15:51:10.710568+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T15:51:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:11 vm09 bash[22983]: audit 2026-03-09T15:51:10.711195+0000 mon.a (mon.0) 437 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:51:11.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:11 vm01 bash[28152]: audit 2026-03-09T15:51:10.708800+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T15:51:11.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:11 vm01 bash[28152]: audit 2026-03-09T15:51:10.710568+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T15:51:11.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:11 vm01 bash[28152]: audit 2026-03-09T15:51:10.711195+0000 mon.a (mon.0) 437 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:51:11.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:11 vm01 bash[20728]: audit 2026-03-09T15:51:10.708800+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T15:51:11.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:11 vm01 bash[20728]: audit 2026-03-09T15:51:10.710568+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T15:51:11.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:11 vm01 bash[20728]: audit 2026-03-09T15:51:10.711195+0000 mon.a (mon.0) 437 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:51:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:12 vm09 bash[22983]: audit 2026-03-09T15:51:10.706977+0000 mgr.y (mgr.14150) 125 : audit [DBG] from='client.24169 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:51:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:12 vm09 bash[22983]: cluster 2026-03-09T15:51:11.065020+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:12.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:12 vm01 bash[28152]: audit 2026-03-09T15:51:10.706977+0000 mgr.y (mgr.14150) 125 : audit [DBG] from='client.24169 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:51:12.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:12 vm01 bash[28152]: cluster 2026-03-09T15:51:11.065020+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:12.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:12 vm01 bash[20728]: audit 2026-03-09T15:51:10.706977+0000 mgr.y (mgr.14150) 125 : audit [DBG] from='client.24169 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:51:12.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:12 vm01 bash[20728]: cluster 2026-03-09T15:51:11.065020+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:14 vm09 bash[22983]: cluster 2026-03-09T15:51:13.065280+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:14.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:14 vm01 bash[28152]: cluster 2026-03-09T15:51:13.065280+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:14.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:14 vm01 bash[20728]: cluster 2026-03-09T15:51:13.065280+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:16.515 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:16 vm01 bash[20728]: cluster 2026-03-09T15:51:15.065545+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:16.515 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:16 vm01 bash[20728]: audit 2026-03-09T15:51:16.132540+0000 mon.c (mon.2) 12 : audit [INF] from='client.? 192.168.123.101:0/1684982897' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "59646c31-d8a8-4171-8402-970963810d37"}]: dispatch
2026-03-09T15:51:16.516 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:16 vm01 bash[20728]: audit 2026-03-09T15:51:16.132906+0000 mon.a (mon.0) 438 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "59646c31-d8a8-4171-8402-970963810d37"}]: dispatch
2026-03-09T15:51:16.516 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:16 vm01 bash[20728]: audit 2026-03-09T15:51:16.136438+0000 mon.a (mon.0) 439 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "59646c31-d8a8-4171-8402-970963810d37"}]': finished
2026-03-09T15:51:16.516 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:16 vm01 bash[20728]: cluster 2026-03-09T15:51:16.141396+0000 mon.a (mon.0) 440 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-09T15:51:16.516 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:16 vm01 bash[20728]: audit 2026-03-09T15:51:16.141574+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T15:51:16.516 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:16 vm01 bash[28152]: cluster 2026-03-09T15:51:15.065545+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:16.516 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:16 vm01 bash[28152]: audit 2026-03-09T15:51:16.132540+0000 mon.c (mon.2) 12 : audit [INF] from='client.? 192.168.123.101:0/1684982897' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "59646c31-d8a8-4171-8402-970963810d37"}]: dispatch
2026-03-09T15:51:16.516 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:16 vm01 bash[28152]: audit 2026-03-09T15:51:16.132906+0000 mon.a (mon.0) 438 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "59646c31-d8a8-4171-8402-970963810d37"}]: dispatch
2026-03-09T15:51:16.516 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:16 vm01 bash[28152]: audit 2026-03-09T15:51:16.136438+0000 mon.a (mon.0) 439 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "59646c31-d8a8-4171-8402-970963810d37"}]': finished
2026-03-09T15:51:16.516 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:16 vm01 bash[28152]: cluster 2026-03-09T15:51:16.141396+0000 mon.a (mon.0) 440 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-09T15:51:16.516 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:16 vm01 bash[28152]: audit 2026-03-09T15:51:16.141574+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T15:51:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:16 vm09 bash[22983]: cluster 2026-03-09T15:51:15.065545+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:16 vm09 bash[22983]: audit 2026-03-09T15:51:16.132540+0000 mon.c (mon.2) 12 : audit [INF] from='client.? 192.168.123.101:0/1684982897' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "59646c31-d8a8-4171-8402-970963810d37"}]: dispatch
2026-03-09T15:51:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:16 vm09 bash[22983]: audit 2026-03-09T15:51:16.132906+0000 mon.a (mon.0) 438 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "59646c31-d8a8-4171-8402-970963810d37"}]: dispatch
2026-03-09T15:51:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:16 vm09 bash[22983]: audit 2026-03-09T15:51:16.136438+0000 mon.a (mon.0) 439 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "59646c31-d8a8-4171-8402-970963810d37"}]': finished
2026-03-09T15:51:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:16 vm09 bash[22983]: cluster 2026-03-09T15:51:16.141396+0000 mon.a (mon.0) 440 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-09T15:51:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:16 vm09 bash[22983]: audit 2026-03-09T15:51:16.141574+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T15:51:17.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:17 vm09 bash[22983]: audit 2026-03-09T15:51:16.757525+0000 mon.a (mon.0) 442 : audit [DBG] from='client.? 192.168.123.101:0/2768566181' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T15:51:17.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:17 vm01 bash[28152]: audit 2026-03-09T15:51:16.757525+0000 mon.a (mon.0) 442 : audit [DBG] from='client.? 192.168.123.101:0/2768566181' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T15:51:17.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:17 vm01 bash[20728]: audit 2026-03-09T15:51:16.757525+0000 mon.a (mon.0) 442 : audit [DBG] from='client.? 192.168.123.101:0/2768566181' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T15:51:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:18 vm09 bash[22983]: cluster 2026-03-09T15:51:17.065811+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:18.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:18 vm01 bash[28152]: cluster 2026-03-09T15:51:17.065811+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:18.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:18 vm01 bash[20728]: cluster 2026-03-09T15:51:17.065811+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:20.328 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:20 vm01 bash[20728]: cluster 2026-03-09T15:51:19.066060+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:20.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:20 vm09 bash[22983]: cluster 2026-03-09T15:51:19.066060+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:20.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:20 vm01 bash[28152]: cluster 2026-03-09T15:51:19.066060+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:22 vm09 bash[22983]: cluster 2026-03-09T15:51:21.066372+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:22.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:22 vm01 bash[28152]: cluster 2026-03-09T15:51:21.066372+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:22.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:22 vm01 bash[20728]: cluster 2026-03-09T15:51:21.066372+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:24 vm09 bash[22983]: cluster 2026-03-09T15:51:23.066638+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:24.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:24 vm01 bash[28152]: cluster 2026-03-09T15:51:23.066638+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:24 vm01 bash[20728]: cluster 2026-03-09T15:51:23.066638+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:25.531 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:25 vm01 bash[28152]: audit 2026-03-09T15:51:25.274072+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T15:51:25.531 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:25 vm01 bash[28152]: audit 2026-03-09T15:51:25.274786+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:51:25.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:25 vm01 bash[20728]: audit 2026-03-09T15:51:25.274072+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T15:51:25.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:25 vm01 bash[20728]: audit 2026-03-09T15:51:25.274786+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:51:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:25 vm09 bash[22983]: audit 2026-03-09T15:51:25.274072+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T15:51:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:25 vm09 bash[22983]: audit 2026-03-09T15:51:25.274786+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:51:26.147 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.147 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.147 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.147 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.147 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.147 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:26 vm01 bash[20728]: cluster 2026-03-09T15:51:25.066906+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:26.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:26 vm01 bash[20728]: cephadm 2026-03-09T15:51:25.275347+0000 mgr.y (mgr.14150) 134 : cephadm [INF] Deploying daemon osd.3 on vm01
2026-03-09T15:51:26.432 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.432 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.432 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.432 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.432 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:26 vm01 bash[28152]: cluster 2026-03-09T15:51:25.066906+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:26.432 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:26 vm01 bash[28152]: cephadm 2026-03-09T15:51:25.275347+0000 mgr.y (mgr.14150) 134 : cephadm [INF] Deploying daemon osd.3 on vm01
2026-03-09T15:51:26.432 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:51:26 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T15:51:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:26 vm09 bash[22983]: cluster 2026-03-09T15:51:25.066906+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T15:51:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:26 vm09 bash[22983]: cephadm 2026-03-09T15:51:25.275347+0000 mgr.y (mgr.14150) 134 : cephadm [INF] Deploying daemon osd.3 on vm01
2026-03-09T15:51:27.636 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:27 vm01 bash[28152]: audit 2026-03-09T15:51:26.385799+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:51:27.637 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:27 vm01 bash[28152]: audit 2026-03-09T15:51:26.385799+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:51:27.637 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:27 vm01 bash[28152]: audit 2026-03-09T15:51:26.390079+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:27.637 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:27 vm01 bash[28152]: audit 2026-03-09T15:51:26.394846+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:27.637 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:27 vm01 bash[20728]: audit 2026-03-09T15:51:26.385799+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:51:27.637 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:27 vm01 bash[20728]: audit 2026-03-09T15:51:26.390079+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y'
2026-03-09T15:51:27.637 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:27 vm01 bash[20728]: audit
2026-03-09T15:51:26.394846+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:27.637 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:27 vm01 bash[20728]: audit 2026-03-09T15:51:26.394846+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:27.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:27 vm09 bash[22983]: audit 2026-03-09T15:51:26.385799+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:51:27.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:27 vm09 bash[22983]: audit 2026-03-09T15:51:26.385799+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:51:27.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:27 vm09 bash[22983]: audit 2026-03-09T15:51:26.390079+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:27.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:27 vm09 bash[22983]: audit 2026-03-09T15:51:26.390079+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:27.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:27 vm09 bash[22983]: audit 2026-03-09T15:51:26.394846+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:27.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:27 vm09 bash[22983]: audit 2026-03-09T15:51:26.394846+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:28.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:28 vm01 bash[28152]: cluster 2026-03-09T15:51:27.067241+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:28.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:28 vm01 bash[28152]: cluster 2026-03-09T15:51:27.067241+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:28.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:28 vm01 bash[20728]: cluster 2026-03-09T15:51:27.067241+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:28.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:28 vm01 bash[20728]: cluster 2026-03-09T15:51:27.067241+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:28 vm09 bash[22983]: cluster 2026-03-09T15:51:27.067241+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:28 vm09 bash[22983]: cluster 2026-03-09T15:51:27.067241+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:30.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:30 vm01 bash[28152]: cluster 2026-03-09T15:51:29.067462+0000 mgr.y (mgr.14150) 136 : 
cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:30.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:30 vm01 bash[28152]: cluster 2026-03-09T15:51:29.067462+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:30.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:30 vm01 bash[28152]: audit 2026-03-09T15:51:30.270658+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T15:51:30.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:30 vm01 bash[28152]: audit 2026-03-09T15:51:30.270658+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T15:51:30.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:30 vm01 bash[20728]: cluster 2026-03-09T15:51:29.067462+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:30.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:30 vm01 bash[20728]: cluster 2026-03-09T15:51:29.067462+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:30.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:30 vm01 bash[20728]: audit 2026-03-09T15:51:30.270658+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T15:51:30.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:30 vm01 bash[20728]: audit 2026-03-09T15:51:30.270658+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T15:51:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:30 vm09 bash[22983]: cluster 2026-03-09T15:51:29.067462+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:30 vm09 bash[22983]: cluster 2026-03-09T15:51:29.067462+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:30 vm09 bash[22983]: audit 2026-03-09T15:51:30.270658+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T15:51:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:30 vm09 bash[22983]: audit 2026-03-09T15:51:30.270658+0000 mon.a (mon.0) 448 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T15:51:31.883 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: audit 2026-03-09T15:51:30.414978+0000 mon.a (mon.0) 449 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T15:51:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: audit 2026-03-09T15:51:30.414978+0000 mon.a (mon.0) 449 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T15:51:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: cluster 2026-03-09T15:51:30.419950+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T15:51:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: cluster 2026-03-09T15:51:30.419950+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T15:51:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: audit 2026-03-09T15:51:30.421617+0000 mon.a (mon.0) 451 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:51:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: audit 2026-03-09T15:51:30.421617+0000 mon.a (mon.0) 451 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:51:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: audit 2026-03-09T15:51:30.421750+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: audit 2026-03-09T15:51:30.421750+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: audit 2026-03-09T15:51:31.418072+0000 mon.a (mon.0) 453 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:51:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: audit 2026-03-09T15:51:31.418072+0000 mon.a (mon.0) 453 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:51:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: cluster 2026-03-09T15:51:31.423740+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-09T15:51:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:31 vm09 bash[22983]: cluster 2026-03-09T15:51:31.423740+0000 mon.a (mon.0) 454 : 
cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 bash[28152]: audit 2026-03-09T15:51:30.414978+0000 mon.a (mon.0) 449 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 bash[28152]: audit 2026-03-09T15:51:30.414978+0000 mon.a (mon.0) 449 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 bash[28152]: cluster 2026-03-09T15:51:30.419950+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 bash[28152]: cluster 2026-03-09T15:51:30.419950+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 bash[28152]: audit 2026-03-09T15:51:30.421617+0000 mon.a (mon.0) 451 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 bash[28152]: audit 2026-03-09T15:51:30.421617+0000 mon.a (mon.0) 451 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 bash[28152]: audit 2026-03-09T15:51:30.421750+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 bash[28152]: audit 2026-03-09T15:51:30.421750+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 bash[28152]: audit 2026-03-09T15:51:31.418072+0000 mon.a (mon.0) 453 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 bash[28152]: audit 2026-03-09T15:51:31.418072+0000 mon.a (mon.0) 453 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 bash[28152]: cluster 2026-03-09T15:51:31.423740+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:31 vm01 
bash[28152]: cluster 2026-03-09T15:51:31.423740+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: audit 2026-03-09T15:51:30.414978+0000 mon.a (mon.0) 449 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: audit 2026-03-09T15:51:30.414978+0000 mon.a (mon.0) 449 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: cluster 2026-03-09T15:51:30.419950+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T15:51:31.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: cluster 2026-03-09T15:51:30.419950+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T15:51:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: audit 2026-03-09T15:51:30.421617+0000 mon.a (mon.0) 451 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:51:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: audit 2026-03-09T15:51:30.421617+0000 mon.a (mon.0) 451 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-09T15:51:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: audit 2026-03-09T15:51:30.421750+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: audit 2026-03-09T15:51:30.421750+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: audit 2026-03-09T15:51:31.418072+0000 mon.a (mon.0) 453 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:51:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: audit 2026-03-09T15:51:31.418072+0000 mon.a (mon.0) 453 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-09T15:51:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: cluster 2026-03-09T15:51:31.423740+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 
2026-03-09T15:51:31.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:31 vm01 bash[20728]: cluster 2026-03-09T15:51:31.423740+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:32 vm01 bash[20728]: cluster 2026-03-09T15:51:31.067681+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:32 vm01 bash[20728]: cluster 2026-03-09T15:51:31.067681+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:32 vm01 bash[20728]: audit 2026-03-09T15:51:31.424280+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:32 vm01 bash[20728]: audit 2026-03-09T15:51:31.424280+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:32 vm01 bash[20728]: audit 2026-03-09T15:51:31.428862+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:32 vm01 bash[20728]: audit 2026-03-09T15:51:31.428862+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:32 vm01 bash[20728]: audit 2026-03-09T15:51:32.340968+0000 mon.a (mon.0) 457 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:32 vm01 bash[20728]: audit 2026-03-09T15:51:32.340968+0000 mon.a (mon.0) 457 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:32 vm01 bash[20728]: audit 2026-03-09T15:51:32.432968+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:32 vm01 bash[20728]: audit 2026-03-09T15:51:32.432968+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:32 vm01 bash[28152]: cluster 2026-03-09T15:51:31.067681+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:32.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:32 vm01 bash[28152]: cluster 2026-03-09T15:51:31.067681+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:32.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:32 vm01 bash[28152]: audit 
2026-03-09T15:51:31.424280+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:32 vm01 bash[28152]: audit 2026-03-09T15:51:31.424280+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:32 vm01 bash[28152]: audit 2026-03-09T15:51:31.428862+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:32 vm01 bash[28152]: audit 2026-03-09T15:51:31.428862+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:32 vm01 bash[28152]: audit 2026-03-09T15:51:32.340968+0000 mon.a (mon.0) 457 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' 2026-03-09T15:51:32.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:32 vm01 bash[28152]: audit 2026-03-09T15:51:32.340968+0000 mon.a (mon.0) 457 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' 2026-03-09T15:51:32.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:32 vm01 bash[28152]: audit 2026-03-09T15:51:32.432968+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:32 vm01 bash[28152]: audit 2026-03-09T15:51:32.432968+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:32 vm09 bash[22983]: cluster 2026-03-09T15:51:31.067681+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:32 vm09 bash[22983]: cluster 2026-03-09T15:51:31.067681+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:32 vm09 bash[22983]: audit 2026-03-09T15:51:31.424280+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:32 vm09 bash[22983]: audit 2026-03-09T15:51:31.424280+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:32 vm09 bash[22983]: audit 2026-03-09T15:51:31.428862+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:32 vm09 bash[22983]: 
audit 2026-03-09T15:51:31.428862+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:32 vm09 bash[22983]: audit 2026-03-09T15:51:32.340968+0000 mon.a (mon.0) 457 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' 2026-03-09T15:51:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:32 vm09 bash[22983]: audit 2026-03-09T15:51:32.340968+0000 mon.a (mon.0) 457 : audit [INF] from='osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283]' entity='osd.3' 2026-03-09T15:51:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:32 vm09 bash[22983]: audit 2026-03-09T15:51:32.432968+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:32 vm09 bash[22983]: audit 2026-03-09T15:51:32.432968+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:33.561 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.550+0000 7ff6967fc640 1 -- 192.168.123.101:0/4240815618 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7ff6a01a4b90 con 0x7ff670077640 2026-03-09T15:51:33.561 INFO:teuthology.orchestra.run.vm01.stdout:Created osd(s) 3 on host 'vm01' 2026-03-09T15:51:33.567 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff670077640 msgr2=0x7ff670079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:51:33.567 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 7ff6a7a1a640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff670077640 0x7ff670079b00 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7ff6a019d910 tx=0x7ff688007450 comp rx=0 tx=0).stop 2026-03-09T15:51:33.567 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/4240815618 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff6a0102b10 msgr2=0x7ff6a019c460 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:51:33.567 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 7ff6a7a1a640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff6a0102b10 0x7ff6a019c460 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7ff690009950 tx=0x7ff690002f60 comp rx=0 tx=0).stop 2026-03-09T15:51:33.567 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/4240815618 shutdown_connections 2026-03-09T15:51:33.567 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 7ff6a7a1a640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff6a01046d0 0x7ff6a01a39b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:51:33.567 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 
7ff6a7a1a640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff670077640 0x7ff670079b00 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:51:33.567 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 7ff6a7a1a640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff6a0103d10 0x7ff6a019c9a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:51:33.568 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 7ff6a7a1a640 1 --2- 192.168.123.101:0/4240815618 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff6a0102b10 0x7ff6a019c460 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:51:33.568 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/4240815618 >> 192.168.123.101:0/4240815618 conn(0x7ff6a00fe2c0 msgr2=0x7ff6a00ffef0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:51:33.568 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/4240815618 shutdown_connections 2026-03-09T15:51:33.568 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:51:33.554+0000 7ff6a7a1a640 1 -- 192.168.123.101:0/4240815618 wait complete. 2026-03-09T15:51:33.628 DEBUG:teuthology.orchestra.run.vm01:osd.3> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.3.service 2026-03-09T15:51:33.629 INFO:tasks.cephadm:Deploying osd.4 on vm09 with /dev/vde... 2026-03-09T15:51:33.629 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- lvm zap /dev/vde 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: cluster 2026-03-09T15:51:31.273077+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: cluster 2026-03-09T15:51:31.273077+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: cluster 2026-03-09T15:51:31.273144+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: cluster 2026-03-09T15:51:31.273144+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:32.607248+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:32.607248+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:32.613721+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 
2026-03-09T15:51:32.613721+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:32.615175+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:32.615175+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:32.615703+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:32.615703+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:32.620240+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:32.620240+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: cluster 2026-03-09T15:51:33.067955+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: cluster 2026-03-09T15:51:33.067955+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: cluster 2026-03-09T15:51:33.346565+0000 mon.a (mon.0) 464 : cluster [INF] osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] boot 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: cluster 2026-03-09T15:51:33.346565+0000 mon.a (mon.0) 464 : cluster [INF] osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] boot 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: cluster 2026-03-09T15:51:33.346733+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: cluster 2026-03-09T15:51:33.346733+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:33.346824+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 
2026-03-09T15:51:33.346824+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:33.542247+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:33.542247+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:33.547283+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:33.547283+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:33.553366+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:33 vm09 bash[22983]: audit 2026-03-09T15:51:33.553366+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: cluster 2026-03-09T15:51:31.273077+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: cluster 2026-03-09T15:51:31.273077+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: cluster 2026-03-09T15:51:31.273144+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: cluster 2026-03-09T15:51:31.273144+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:32.607248+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:32.607248+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:32.613721+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:32.613721+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:32.615175+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:32.615175+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:32.615703+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:32.615703+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:32.620240+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:32.620240+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: cluster 2026-03-09T15:51:33.067955+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: cluster 2026-03-09T15:51:33.067955+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: cluster 2026-03-09T15:51:33.346565+0000 mon.a (mon.0) 464 : cluster [INF] osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] boot 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: cluster 2026-03-09T15:51:33.346565+0000 mon.a (mon.0) 464 : cluster [INF] osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] boot 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: cluster 2026-03-09T15:51:33.346733+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: cluster 2026-03-09T15:51:33.346733+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:33.346824+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:33.346824+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:33.542247+0000 mon.a (mon.0) 467 : audit [DBG] 
from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:33.542247+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:33.547283+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:33.547283+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:33.553366+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:33 vm01 bash[28152]: audit 2026-03-09T15:51:33.553366+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: cluster 2026-03-09T15:51:31.273077+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: cluster 2026-03-09T15:51:31.273077+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: cluster 2026-03-09T15:51:31.273144+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: cluster 2026-03-09T15:51:31.273144+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:32.607248+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:32.607248+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:32.613721+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:32.613721+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:32.615175+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:32.615175+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:32.615703+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:32.615703+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:32.620240+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:32.620240+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: cluster 2026-03-09T15:51:33.067955+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: cluster 2026-03-09T15:51:33.067955+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: cluster 2026-03-09T15:51:33.346565+0000 mon.a (mon.0) 464 : cluster [INF] osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] boot 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: cluster 2026-03-09T15:51:33.346565+0000 mon.a (mon.0) 464 : cluster [INF] osd.3 [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] boot 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: cluster 2026-03-09T15:51:33.346733+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: cluster 2026-03-09T15:51:33.346733+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:33.346824+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:33.346824+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:33.542247+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:33.542247+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:33.547283+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:33.547283+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:33.553366+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:33.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:33 vm01 bash[20728]: audit 2026-03-09T15:51:33.553366+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:35.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:35 vm01 bash[28152]: cluster 2026-03-09T15:51:34.639034+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-09T15:51:35.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:35 vm01 bash[28152]: cluster 2026-03-09T15:51:34.639034+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-09T15:51:35.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:35 vm01 bash[28152]: cluster 2026-03-09T15:51:35.068260+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:35.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:35 vm01 bash[28152]: cluster 2026-03-09T15:51:35.068260+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:35.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:35 vm01 bash[20728]: cluster 2026-03-09T15:51:34.639034+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-09T15:51:35.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:35 vm01 bash[20728]: cluster 2026-03-09T15:51:34.639034+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-09T15:51:35.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:35 vm01 bash[20728]: cluster 2026-03-09T15:51:35.068260+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:35.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:35 vm01 bash[20728]: cluster 2026-03-09T15:51:35.068260+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:36.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:35 vm09 bash[22983]: cluster 2026-03-09T15:51:34.639034+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-09T15:51:36.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:35 vm09 bash[22983]: cluster 2026-03-09T15:51:34.639034+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-09T15:51:36.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:35 vm09 bash[22983]: cluster 2026-03-09T15:51:35.068260+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:36.133 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:35 vm09 bash[22983]: cluster 2026-03-09T15:51:35.068260+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:38.248 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:51:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:38 vm09 bash[22983]: cluster 2026-03-09T15:51:37.068612+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:38 vm09 bash[22983]: cluster 2026-03-09T15:51:37.068612+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:38.405 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:38 vm01 bash[20728]: cluster 2026-03-09T15:51:37.068612+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:38.405 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:38 vm01 bash[20728]: cluster 2026-03-09T15:51:37.068612+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:38.406 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:38 vm01 bash[28152]: cluster 2026-03-09T15:51:37.068612+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:38.406 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:38 vm01 bash[28152]: cluster 2026-03-09T15:51:37.068612+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:39.123 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:51:39.151 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch daemon add osd vm09:/dev/vde 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: cluster 2026-03-09T15:51:39.068928+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: cluster 2026-03-09T15:51:39.068928+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: cephadm 2026-03-09T15:51:39.327541+0000 mgr.y (mgr.14150) 142 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: cephadm 2026-03-09T15:51:39.327541+0000 mgr.y (mgr.14150) 142 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.334069+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.633 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.334069+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.341049+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.341049+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.343700+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.343700+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.344435+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.344435+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.344898+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.344898+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.348741+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:40 vm09 bash[22983]: audit 2026-03-09T15:51:39.348741+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: cluster 2026-03-09T15:51:39.068928+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:40.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: cluster 2026-03-09T15:51:39.068928+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:40.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: cephadm 
2026-03-09T15:51:39.327541+0000 mgr.y (mgr.14150) 142 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: cephadm 2026-03-09T15:51:39.327541+0000 mgr.y (mgr.14150) 142 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.334069+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.334069+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.341049+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.341049+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.343700+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.343700+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.344435+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.344435+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.344898+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.344898+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.348741+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:40 vm01 bash[28152]: audit 2026-03-09T15:51:39.348741+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
15:51:40 vm01 bash[20728]: cluster 2026-03-09T15:51:39.068928+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: cluster 2026-03-09T15:51:39.068928+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: cephadm 2026-03-09T15:51:39.327541+0000 mgr.y (mgr.14150) 142 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: cephadm 2026-03-09T15:51:39.327541+0000 mgr.y (mgr.14150) 142 : cephadm [INF] Detected new or changed devices on vm01 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.334069+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.334069+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.341049+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.341049+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.343700+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.343700+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.344435+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.344435+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.344898+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.344898+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.348741+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:40.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:40 vm01 bash[20728]: audit 2026-03-09T15:51:39.348741+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:51:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:42 vm09 bash[22983]: cluster 2026-03-09T15:51:41.069328+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:42 vm09 bash[22983]: cluster 2026-03-09T15:51:41.069328+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:42.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:42 vm01 bash[28152]: cluster 2026-03-09T15:51:41.069328+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:42.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:42 vm01 bash[28152]: cluster 2026-03-09T15:51:41.069328+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:42.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:42 vm01 bash[20728]: cluster 2026-03-09T15:51:41.069328+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:42.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:42 vm01 bash[20728]: cluster 2026-03-09T15:51:41.069328+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:43.778 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:51:43.949 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 -- 192.168.123.109:0/4190073538 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0ef4078cf0 msgr2=0x7f0ef4079150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:51:43.949 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 --2- 192.168.123.109:0/4190073538 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0ef4078cf0 0x7f0ef4079150 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f0ee4009a80 tx=0x7f0ee402f290 comp rx=0 tx=0).stop 2026-03-09T15:51:43.949 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 -- 192.168.123.109:0/4190073538 shutdown_connections 2026-03-09T15:51:43.949 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 --2- 192.168.123.109:0/4190073538 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0ef4079690 0x7f0ef4079f30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:51:43.949 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 --2- 192.168.123.109:0/4190073538 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0ef4078cf0 0x7f0ef4079150 
unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:51:43.949 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 --2- 192.168.123.109:0/4190073538 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f0ef4077aa0 0x7f0ef4077ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:51:43.949 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 -- 192.168.123.109:0/4190073538 >> 192.168.123.109:0/4190073538 conn(0x7f0ef4100510 msgr2=0x7f0ef4102950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:51:43.949 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 -- 192.168.123.109:0/4190073538 shutdown_connections 2026-03-09T15:51:43.949 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 -- 192.168.123.109:0/4190073538 wait complete. 2026-03-09T15:51:43.949 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 Processor -- start 2026-03-09T15:51:43.950 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 -- start start 2026-03-09T15:51:43.950 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0ef4077aa0 0x7f0ef41a0920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:51:43.950 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0ef3fff640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0ef4077aa0 0x7f0ef41a0920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:51:43.950 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0ef3fff640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0ef4077aa0 0x7f0ef41a0920 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.109:56774/0 (socket says 192.168.123.109:56774) 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.943+0000 7f0efa672640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f0ef4078cf0 0x7f0ef41a0e60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0efa672640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0ef4079690 0x7f0ef41a7ee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0efa672640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f0ef41141c0 con 0x7f0ef4079690 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0efa672640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f0ef4114040 con 0x7f0ef4077aa0 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0efa672640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f0ef4114340 con 0x7f0ef4078cf0 
2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef3fff640 1 -- 192.168.123.109:0/1427482183 learned_addr learned my addr 192.168.123.109:0/1427482183 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef3fff640 1 -- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f0ef4078cf0 msgr2=0x7f0ef41a0e60 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef8be8640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0ef4079690 0x7f0ef41a7ee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef37fe640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f0ef4078cf0 0x7f0ef41a0e60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef3fff640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f0ef4078cf0 0x7f0ef41a0e60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef3fff640 1 -- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0ef4079690 msgr2=0x7f0ef41a7ee0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef3fff640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0ef4079690 0x7f0ef41a7ee0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef3fff640 1 -- 192.168.123.109:0/1427482183 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0ef41a85e0 con 0x7f0ef4077aa0 2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef37fe640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f0ef4078cf0 0x7f0ef41a0e60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:51:43.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef3fff640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0ef4077aa0 0x7f0ef41a0920 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f0ee000d950 tx=0x7f0ee000de20 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:51:43.952 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef17fa640 1 -- 192.168.123.109:0/1427482183 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0ee0014070 con 0x7f0ef4077aa0 2026-03-09T15:51:43.952 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef17fa640 1 -- 192.168.123.109:0/1427482183 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f0ee00044e0 con 0x7f0ef4077aa0 2026-03-09T15:51:43.952 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef17fa640 1 -- 192.168.123.109:0/1427482183 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0ee0004e30 con 0x7f0ef4077aa0 2026-03-09T15:51:43.952 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0efa672640 1 -- 192.168.123.109:0/1427482183 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0ef41a88d0 con 0x7f0ef4077aa0 2026-03-09T15:51:43.953 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0efa672640 1 -- 192.168.123.109:0/1427482183 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0ef41a8dc0 con 0x7f0ef4077aa0 2026-03-09T15:51:43.954 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0ef17fa640 1 -- 192.168.123.109:0/1427482183 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f0ee0020020 con 0x7f0ef4077aa0 2026-03-09T15:51:43.954 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.947+0000 7f0efa672640 1 -- 192.168.123.109:0/1427482183 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0ec0005180 con 0x7f0ef4077aa0 2026-03-09T15:51:43.957 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.951+0000 7f0ef17fa640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f0ed4077640 0x7f0ed4079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:51:43.957 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.951+0000 7f0ef17fa640 1 -- 192.168.123.109:0/1427482183 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(27..27 src has 1..27) ==== 3439+0+0 (secure 0 0 0) 0x7f0ee0098ac0 con 0x7f0ef4077aa0 2026-03-09T15:51:43.958 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.951+0000 7f0ef37fe640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f0ed4077640 0x7f0ed4079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:51:43.958 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.951+0000 7f0ef37fe640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f0ed4077640 0x7f0ed4079b00 
secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f0ef41a1e40 tx=0x7f0ee40023d0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:51:43.958 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:43.951+0000 7f0ef17fa640 1 -- 192.168.123.109:0/1427482183 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0ee0062b10 con 0x7f0ef4077aa0 2026-03-09T15:51:44.069 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:51:44.063+0000 7f0efa672640 1 -- 192.168.123.109:0/1427482183 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7f0ec0002bf0 con 0x7f0ed4077640 2026-03-09T15:51:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:44 vm09 bash[22983]: cluster 2026-03-09T15:51:43.069627+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:44 vm09 bash[22983]: cluster 2026-03-09T15:51:43.069627+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:44 vm09 bash[22983]: audit 2026-03-09T15:51:44.070441+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:51:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:44 vm09 bash[22983]: audit 2026-03-09T15:51:44.070441+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:51:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:44 vm09 bash[22983]: audit 2026-03-09T15:51:44.071876+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:51:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:44 vm09 bash[22983]: audit 2026-03-09T15:51:44.071876+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:51:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:44 vm09 bash[22983]: audit 2026-03-09T15:51:44.072354+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:44 vm09 bash[22983]: audit 2026-03-09T15:51:44.072354+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:44.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:44 vm01 bash[28152]: cluster 2026-03-09T15:51:43.069627+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:44.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:44 vm01 bash[28152]: cluster 
2026-03-09T15:51:43.069627+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:44.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:44 vm01 bash[28152]: audit 2026-03-09T15:51:44.070441+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:51:44.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:44 vm01 bash[28152]: audit 2026-03-09T15:51:44.070441+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:51:44.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:44 vm01 bash[28152]: audit 2026-03-09T15:51:44.071876+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:51:44.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:44 vm01 bash[28152]: audit 2026-03-09T15:51:44.071876+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:51:44.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:44 vm01 bash[28152]: audit 2026-03-09T15:51:44.072354+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:44.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:44 vm01 bash[28152]: audit 2026-03-09T15:51:44.072354+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:44.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:44 vm01 bash[20728]: cluster 2026-03-09T15:51:43.069627+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:44.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:44 vm01 bash[20728]: cluster 2026-03-09T15:51:43.069627+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:44.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:44 vm01 bash[20728]: audit 2026-03-09T15:51:44.070441+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:51:44.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:44 vm01 bash[20728]: audit 2026-03-09T15:51:44.070441+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:51:44.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:44 vm01 bash[20728]: audit 2026-03-09T15:51:44.071876+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:51:44.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:44 vm01 bash[20728]: audit 2026-03-09T15:51:44.071876+0000 mon.a (mon.0) 478 : audit [INF] 
from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:51:44.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:44 vm01 bash[20728]: audit 2026-03-09T15:51:44.072354+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:44.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:44 vm01 bash[20728]: audit 2026-03-09T15:51:44.072354+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:45 vm09 bash[22983]: audit 2026-03-09T15:51:44.068195+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24181 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:51:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:45 vm09 bash[22983]: audit 2026-03-09T15:51:44.068195+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24181 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:51:45.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:45 vm01 bash[28152]: audit 2026-03-09T15:51:44.068195+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24181 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:51:45.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:45 vm01 bash[28152]: audit 2026-03-09T15:51:44.068195+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24181 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:51:45.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:45 vm01 bash[20728]: audit 2026-03-09T15:51:44.068195+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24181 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:51:45.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:45 vm01 bash[20728]: audit 2026-03-09T15:51:44.068195+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24181 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:51:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:46 vm09 bash[22983]: cluster 2026-03-09T15:51:45.070003+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:46 vm09 bash[22983]: cluster 2026-03-09T15:51:45.070003+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:46.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:46 vm01 bash[28152]: cluster 2026-03-09T15:51:45.070003+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:46.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:46 vm01 bash[28152]: cluster 2026-03-09T15:51:45.070003+0000 mgr.y 
(mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:46.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:46 vm01 bash[20728]: cluster 2026-03-09T15:51:45.070003+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:46.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:46 vm01 bash[20728]: cluster 2026-03-09T15:51:45.070003+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:48 vm09 bash[22983]: cluster 2026-03-09T15:51:47.070300+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:48 vm09 bash[22983]: cluster 2026-03-09T15:51:47.070300+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:48.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:48 vm01 bash[28152]: cluster 2026-03-09T15:51:47.070300+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:48.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:48 vm01 bash[28152]: cluster 2026-03-09T15:51:47.070300+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:48.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:48 vm01 bash[20728]: cluster 2026-03-09T15:51:47.070300+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:48.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:48 vm01 bash[20728]: cluster 2026-03-09T15:51:47.070300+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: cluster 2026-03-09T15:51:49.070598+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: cluster 2026-03-09T15:51:49.070598+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: audit 2026-03-09T15:51:49.496999+0000 mon.a (mon.0) 480 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: audit 2026-03-09T15:51:49.496999+0000 mon.a (mon.0) 480 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: audit 2026-03-09T15:51:49.499066+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 
192.168.123.109:0/32145001' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: audit 2026-03-09T15:51:49.499066+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.109:0/32145001' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: audit 2026-03-09T15:51:49.499960+0000 mon.a (mon.0) 481 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]': finished 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: audit 2026-03-09T15:51:49.499960+0000 mon.a (mon.0) 481 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]': finished 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: cluster 2026-03-09T15:51:49.503687+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: cluster 2026-03-09T15:51:49.503687+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: audit 2026-03-09T15:51:49.504167+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: audit 2026-03-09T15:51:49.504167+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: audit 2026-03-09T15:51:50.143027+0000 mon.c (mon.2) 13 : audit [DBG] from='client.? 192.168.123.109:0/1984357110' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:51:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:50 vm09 bash[22983]: audit 2026-03-09T15:51:50.143027+0000 mon.c (mon.2) 13 : audit [DBG] from='client.? 192.168.123.109:0/1984357110' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: cluster 2026-03-09T15:51:49.070598+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: cluster 2026-03-09T15:51:49.070598+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: audit 2026-03-09T15:51:49.496999+0000 mon.a (mon.0) 480 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: audit 2026-03-09T15:51:49.496999+0000 mon.a (mon.0) 480 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: audit 2026-03-09T15:51:49.499066+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.109:0/32145001' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: audit 2026-03-09T15:51:49.499066+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.109:0/32145001' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: audit 2026-03-09T15:51:49.499960+0000 mon.a (mon.0) 481 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]': finished 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: audit 2026-03-09T15:51:49.499960+0000 mon.a (mon.0) 481 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]': finished 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: cluster 2026-03-09T15:51:49.503687+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: cluster 2026-03-09T15:51:49.503687+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: audit 2026-03-09T15:51:49.504167+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: audit 2026-03-09T15:51:49.504167+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: audit 2026-03-09T15:51:50.143027+0000 mon.c (mon.2) 13 : audit [DBG] from='client.? 192.168.123.109:0/1984357110' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:50 vm01 bash[28152]: audit 2026-03-09T15:51:50.143027+0000 mon.c (mon.2) 13 : audit [DBG] from='client.? 
192.168.123.109:0/1984357110' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: cluster 2026-03-09T15:51:49.070598+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: cluster 2026-03-09T15:51:49.070598+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: audit 2026-03-09T15:51:49.496999+0000 mon.a (mon.0) 480 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: audit 2026-03-09T15:51:49.496999+0000 mon.a (mon.0) 480 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: audit 2026-03-09T15:51:49.499066+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.109:0/32145001' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: audit 2026-03-09T15:51:49.499066+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.109:0/32145001' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]: dispatch 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: audit 2026-03-09T15:51:49.499960+0000 mon.a (mon.0) 481 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]': finished 2026-03-09T15:51:50.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: audit 2026-03-09T15:51:49.499960+0000 mon.a (mon.0) 481 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "642a6d0d-91ea-4433-b755-50a0d7442acf"}]': finished 2026-03-09T15:51:50.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: cluster 2026-03-09T15:51:49.503687+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T15:51:50.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: cluster 2026-03-09T15:51:49.503687+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T15:51:50.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: audit 2026-03-09T15:51:49.504167+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:51:50.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: audit 2026-03-09T15:51:49.504167+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:51:50.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: audit 2026-03-09T15:51:50.143027+0000 mon.c (mon.2) 13 : audit [DBG] from='client.? 192.168.123.109:0/1984357110' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:51:50.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:50 vm01 bash[20728]: audit 2026-03-09T15:51:50.143027+0000 mon.c (mon.2) 13 : audit [DBG] from='client.? 192.168.123.109:0/1984357110' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:51:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:52 vm01 bash[28152]: cluster 2026-03-09T15:51:51.070933+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:52.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:52 vm01 bash[28152]: cluster 2026-03-09T15:51:51.070933+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:52 vm01 bash[20728]: cluster 2026-03-09T15:51:51.070933+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:52.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:52 vm01 bash[20728]: cluster 2026-03-09T15:51:51.070933+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:52 vm09 bash[22983]: cluster 2026-03-09T15:51:51.070933+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:52 vm09 bash[22983]: cluster 2026-03-09T15:51:51.070933+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:54.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:54 vm01 bash[28152]: cluster 2026-03-09T15:51:53.071208+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:54.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:54 vm01 
bash[28152]: cluster 2026-03-09T15:51:53.071208+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:54.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:54 vm01 bash[20728]: cluster 2026-03-09T15:51:53.071208+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:54.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:54 vm01 bash[20728]: cluster 2026-03-09T15:51:53.071208+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:54.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:54 vm09 bash[22983]: cluster 2026-03-09T15:51:53.071208+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:54.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:54 vm09 bash[22983]: cluster 2026-03-09T15:51:53.071208+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:56.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:56 vm01 bash[28152]: cluster 2026-03-09T15:51:55.071467+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:56.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:56 vm01 bash[28152]: cluster 2026-03-09T15:51:55.071467+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:56.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:56 vm01 bash[20728]: cluster 2026-03-09T15:51:55.071467+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:56.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:56 vm01 bash[20728]: cluster 2026-03-09T15:51:55.071467+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:56.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:56 vm09 bash[22983]: cluster 2026-03-09T15:51:55.071467+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:56.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:56 vm09 bash[22983]: cluster 2026-03-09T15:51:55.071467+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:58 vm01 bash[28152]: cluster 2026-03-09T15:51:57.071752+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:58.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:58 vm01 bash[28152]: cluster 2026-03-09T15:51:57.071752+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:58 vm01 bash[20728]: cluster 2026-03-09T15:51:57.071752+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB 
avail 2026-03-09T15:51:58.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:58 vm01 bash[20728]: cluster 2026-03-09T15:51:57.071752+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:58.736 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:58 vm09 bash[22983]: cluster 2026-03-09T15:51:57.071752+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:58.736 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:58 vm09 bash[22983]: cluster 2026-03-09T15:51:57.071752+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:51:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:59 vm01 bash[28152]: audit 2026-03-09T15:51:59.123023+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T15:51:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:59 vm01 bash[28152]: audit 2026-03-09T15:51:59.123023+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T15:51:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:59 vm01 bash[28152]: audit 2026-03-09T15:51:59.123598+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:59.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:51:59 vm01 bash[28152]: audit 2026-03-09T15:51:59.123598+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:59.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:59 vm01 bash[20728]: audit 2026-03-09T15:51:59.123023+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T15:51:59.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:59 vm01 bash[20728]: audit 2026-03-09T15:51:59.123023+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T15:51:59.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:59 vm01 bash[20728]: audit 2026-03-09T15:51:59.123598+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:59.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:51:59 vm01 bash[20728]: audit 2026-03-09T15:51:59.123598+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:59.718 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:59 vm09 bash[22983]: audit 2026-03-09T15:51:59.123023+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T15:51:59.718 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:59 vm09 bash[22983]: audit 2026-03-09T15:51:59.123023+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T15:51:59.718 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:59 vm09 bash[22983]: audit 2026-03-09T15:51:59.123598+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:51:59.718 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:51:59 vm09 bash[22983]: audit 2026-03-09T15:51:59.123598+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:00.381 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:00 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:52:00.381 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:00 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:52:00.381 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:52:00 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:52:00.381 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:52:00 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:52:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:00 vm09 bash[22983]: cluster 2026-03-09T15:51:59.072034+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:00 vm09 bash[22983]: cluster 2026-03-09T15:51:59.072034+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:00 vm09 bash[22983]: cephadm 2026-03-09T15:51:59.124160+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm09 2026-03-09T15:52:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:00 vm09 bash[22983]: cephadm 2026-03-09T15:51:59.124160+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm09 2026-03-09T15:52:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:00 vm01 bash[28152]: cluster 2026-03-09T15:51:59.072034+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:00 vm01 bash[28152]: cluster 2026-03-09T15:51:59.072034+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:00 vm01 bash[28152]: cephadm 2026-03-09T15:51:59.124160+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm09 2026-03-09T15:52:00.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:00 vm01 bash[28152]: cephadm 2026-03-09T15:51:59.124160+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm09 2026-03-09T15:52:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:00 vm01 bash[20728]: cluster 2026-03-09T15:51:59.072034+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:00.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:00 vm01 bash[20728]: cluster 2026-03-09T15:51:59.072034+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:00.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:00 vm01 bash[20728]: cephadm 2026-03-09T15:51:59.124160+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm09 2026-03-09T15:52:00.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:00 vm01 bash[20728]: cephadm 2026-03-09T15:51:59.124160+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm09 2026-03-09T15:52:01.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:01 vm01 bash[28152]: audit 2026-03-09T15:52:00.422989+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:01.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:01 vm01 bash[28152]: audit 2026-03-09T15:52:00.422989+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:01.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:01 vm01 bash[28152]: audit 2026-03-09T15:52:00.428081+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:01.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:01 vm01 bash[28152]: audit 2026-03-09T15:52:00.428081+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:01.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:01 vm01 bash[28152]: audit 2026-03-09T15:52:00.435097+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:01.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:01 vm01 bash[28152]: audit 2026-03-09T15:52:00.435097+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:01.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:01 vm01 bash[20728]: audit 2026-03-09T15:52:00.422989+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:01.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:01 vm01 bash[20728]: audit 2026-03-09T15:52:00.422989+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:01.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:01 vm01 bash[20728]: audit 2026-03-09T15:52:00.428081+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:01.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:01 vm01 bash[20728]: audit 2026-03-09T15:52:00.428081+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:01.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:01 vm01 bash[20728]: audit 2026-03-09T15:52:00.435097+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:01.684 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:01 vm01 bash[20728]: audit 2026-03-09T15:52:00.435097+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:01.687 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:01 vm09 bash[22983]: audit 2026-03-09T15:52:00.422989+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:01.687 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:01 vm09 bash[22983]: audit 2026-03-09T15:52:00.422989+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:01.687 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:01 vm09 bash[22983]: audit 2026-03-09T15:52:00.428081+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:01.687 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:01 vm09 bash[22983]: audit 2026-03-09T15:52:00.428081+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:01.687 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:01 vm09 bash[22983]: audit 2026-03-09T15:52:00.435097+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:01.687 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:01 vm09 
bash[22983]: audit 2026-03-09T15:52:00.435097+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:02.814 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:02 vm09 bash[22983]: cluster 2026-03-09T15:52:01.072522+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:02.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:02 vm09 bash[22983]: cluster 2026-03-09T15:52:01.072522+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:02.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:02 vm01 bash[28152]: cluster 2026-03-09T15:52:01.072522+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:02.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:02 vm01 bash[28152]: cluster 2026-03-09T15:52:01.072522+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:02.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:02 vm01 bash[20728]: cluster 2026-03-09T15:52:01.072522+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:02.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:02 vm01 bash[20728]: cluster 2026-03-09T15:52:01.072522+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:04 vm09 bash[22983]: cluster 2026-03-09T15:52:03.072865+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:04 vm09 bash[22983]: cluster 2026-03-09T15:52:03.072865+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:04 vm09 bash[22983]: audit 2026-03-09T15:52:04.061841+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T15:52:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:04 vm09 bash[22983]: audit 2026-03-09T15:52:04.061841+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T15:52:04.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:04 vm01 bash[28152]: cluster 2026-03-09T15:52:03.072865+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:04.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:04 vm01 bash[28152]: cluster 2026-03-09T15:52:03.072865+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:04.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:04 vm01 bash[28152]: audit 
2026-03-09T15:52:04.061841+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T15:52:04.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:04 vm01 bash[28152]: audit 2026-03-09T15:52:04.061841+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T15:52:04.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:04 vm01 bash[20728]: cluster 2026-03-09T15:52:03.072865+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:04.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:04 vm01 bash[20728]: cluster 2026-03-09T15:52:03.072865+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:04.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:04 vm01 bash[20728]: audit 2026-03-09T15:52:04.061841+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T15:52:04.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:04 vm01 bash[20728]: audit 2026-03-09T15:52:04.061841+0000 mon.a (mon.0) 489 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T15:52:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:05 vm09 bash[22983]: audit 2026-03-09T15:52:04.505813+0000 mon.a (mon.0) 490 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T15:52:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:05 vm09 bash[22983]: audit 2026-03-09T15:52:04.505813+0000 mon.a (mon.0) 490 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T15:52:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:05 vm09 bash[22983]: cluster 2026-03-09T15:52:04.509868+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T15:52:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:05 vm09 bash[22983]: cluster 2026-03-09T15:52:04.509868+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T15:52:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:05 vm09 bash[22983]: audit 2026-03-09T15:52:04.510104+0000 mon.a (mon.0) 492 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:05 vm09 bash[22983]: audit 2026-03-09T15:52:04.510104+0000 mon.a (mon.0) 492 : audit [INF] from='osd.4 
[v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:05 vm09 bash[22983]: audit 2026-03-09T15:52:04.510223+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:05 vm09 bash[22983]: audit 2026-03-09T15:52:04.510223+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:05.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:05 vm01 bash[28152]: audit 2026-03-09T15:52:04.505813+0000 mon.a (mon.0) 490 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T15:52:05.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:05 vm01 bash[28152]: audit 2026-03-09T15:52:04.505813+0000 mon.a (mon.0) 490 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T15:52:05.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:05 vm01 bash[28152]: cluster 2026-03-09T15:52:04.509868+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T15:52:05.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:05 vm01 bash[28152]: cluster 2026-03-09T15:52:04.509868+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T15:52:05.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:05 vm01 bash[28152]: audit 2026-03-09T15:52:04.510104+0000 mon.a (mon.0) 492 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:05.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:05 vm01 bash[28152]: audit 2026-03-09T15:52:04.510104+0000 mon.a (mon.0) 492 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:05.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:05 vm01 bash[28152]: audit 2026-03-09T15:52:04.510223+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:05.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:05 vm01 bash[28152]: audit 2026-03-09T15:52:04.510223+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:05.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:05 vm01 bash[20728]: audit 2026-03-09T15:52:04.505813+0000 mon.a (mon.0) 490 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["4"]}]': finished 2026-03-09T15:52:05.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:05 vm01 bash[20728]: audit 2026-03-09T15:52:04.505813+0000 mon.a (mon.0) 490 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T15:52:05.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:05 vm01 bash[20728]: cluster 2026-03-09T15:52:04.509868+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T15:52:05.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:05 vm01 bash[20728]: cluster 2026-03-09T15:52:04.509868+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T15:52:05.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:05 vm01 bash[20728]: audit 2026-03-09T15:52:04.510104+0000 mon.a (mon.0) 492 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:05.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:05 vm01 bash[20728]: audit 2026-03-09T15:52:04.510104+0000 mon.a (mon.0) 492 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:05.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:05 vm01 bash[20728]: audit 2026-03-09T15:52:04.510223+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:05.935 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:05 vm01 bash[20728]: audit 2026-03-09T15:52:04.510223+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.653 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: cluster 2026-03-09T15:52:05.073179+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: cluster 2026-03-09T15:52:05.073179+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: audit 2026-03-09T15:52:05.509694+0000 mon.a (mon.0) 494 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: audit 2026-03-09T15:52:05.509694+0000 mon.a (mon.0) 494 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: cluster 2026-03-09T15:52:05.513816+0000 mon.a 
(mon.0) 495 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: cluster 2026-03-09T15:52:05.513816+0000 mon.a (mon.0) 495 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: audit 2026-03-09T15:52:05.516264+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: audit 2026-03-09T15:52:05.516264+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: audit 2026-03-09T15:52:05.519003+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: audit 2026-03-09T15:52:05.519003+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: audit 2026-03-09T15:52:06.486133+0000 mon.a (mon.0) 498 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: audit 2026-03-09T15:52:06.486133+0000 mon.a (mon.0) 498 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: audit 2026-03-09T15:52:06.519601+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.654 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:06 vm09 bash[22983]: audit 2026-03-09T15:52:06.519601+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: cluster 2026-03-09T15:52:05.073179+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:06.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: cluster 2026-03-09T15:52:05.073179+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:06.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: audit 2026-03-09T15:52:05.509694+0000 mon.a (mon.0) 494 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:06.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: audit 2026-03-09T15:52:05.509694+0000 mon.a (mon.0) 494 : audit [INF] 
from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:06.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: cluster 2026-03-09T15:52:05.513816+0000 mon.a (mon.0) 495 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T15:52:06.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: cluster 2026-03-09T15:52:05.513816+0000 mon.a (mon.0) 495 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: audit 2026-03-09T15:52:05.516264+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: audit 2026-03-09T15:52:05.516264+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: audit 2026-03-09T15:52:05.519003+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: audit 2026-03-09T15:52:05.519003+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: audit 2026-03-09T15:52:06.486133+0000 mon.a (mon.0) 498 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: audit 2026-03-09T15:52:06.486133+0000 mon.a (mon.0) 498 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: audit 2026-03-09T15:52:06.519601+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:06 vm01 bash[28152]: audit 2026-03-09T15:52:06.519601+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: cluster 2026-03-09T15:52:05.073179+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: cluster 2026-03-09T15:52:05.073179+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: audit 2026-03-09T15:52:05.509694+0000 mon.a (mon.0) 494 : audit [INF] from='osd.4 
[v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: audit 2026-03-09T15:52:05.509694+0000 mon.a (mon.0) 494 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: cluster 2026-03-09T15:52:05.513816+0000 mon.a (mon.0) 495 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: cluster 2026-03-09T15:52:05.513816+0000 mon.a (mon.0) 495 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: audit 2026-03-09T15:52:05.516264+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: audit 2026-03-09T15:52:05.516264+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: audit 2026-03-09T15:52:05.519003+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: audit 2026-03-09T15:52:05.519003+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: audit 2026-03-09T15:52:06.486133+0000 mon.a (mon.0) 498 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: audit 2026-03-09T15:52:06.486133+0000 mon.a (mon.0) 498 : audit [INF] from='osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856]' entity='osd.4' 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: audit 2026-03-09T15:52:06.519601+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:06.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:06 vm01 bash[20728]: audit 2026-03-09T15:52:06.519601+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:05.018792+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:05.018792+0000 osd.4 
(osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:05.018837+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:05.018837+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:06.537629+0000 mon.a (mon.0) 500 : cluster [INF] osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] boot 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:06.537629+0000 mon.a (mon.0) 500 : cluster [INF] osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] boot 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:06.537687+0000 mon.a (mon.0) 501 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:06.537687+0000 mon.a (mon.0) 501 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:06.538784+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:06.538784+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:06.648374+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:06.648374+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:06.653491+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:06.653491+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:07.073454+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v133: 1 pgs: 1 peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:07.073454+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v133: 1 pgs: 1 peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:07.089044+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:07.089044+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:07.090866+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:07.090866+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:07.096500+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: audit 2026-03-09T15:52:07.096500+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:07.536909+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T15:52:07.686 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:07 vm09 bash[22983]: cluster 2026-03-09T15:52:07.536909+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T15:52:07.750 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.743+0000 7f0ef17fa640 1 -- 192.168.123.109:0/1427482183 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f0ec0002bf0 con 0x7f0ed4077640 2026-03-09T15:52:07.750 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 4 on host 'vm09' 2026-03-09T15:52:07.754 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 -- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f0ed4077640 msgr2=0x7f0ed4079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:52:07.754 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f0ed4077640 0x7f0ed4079b00 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f0ef41a1e40 tx=0x7f0ee40023d0 comp rx=0 tx=0).stop 2026-03-09T15:52:07.754 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 -- 192.168.123.109:0/1427482183 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0ef4077aa0 msgr2=0x7f0ef41a0920 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:52:07.754 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0ef4077aa0 0x7f0ef41a0920 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f0ee000d950 tx=0x7f0ee000de20 comp rx=0 tx=0).stop 2026-03-09T15:52:07.755 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 -- 192.168.123.109:0/1427482183 shutdown_connections 2026-03-09T15:52:07.755 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f0ef4079690 0x7f0ef41a7ee0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:07.755 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f0ed4077640 0x7f0ed4079b00 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:07.755 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f0ef4078cf0 0x7f0ef41a0e60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:07.755 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 --2- 192.168.123.109:0/1427482183 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0ef4077aa0 0x7f0ef41a0920 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:07.755 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 -- 192.168.123.109:0/1427482183 >> 192.168.123.109:0/1427482183 conn(0x7f0ef4100510 msgr2=0x7f0ef4102090 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:52:07.755 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 -- 192.168.123.109:0/1427482183 shutdown_connections 2026-03-09T15:52:07.755 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:07.747+0000 7f0efa672640 1 -- 192.168.123.109:0/1427482183 wait complete. 
2026-03-09T15:52:07.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:05.018792+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:52:07.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:05.018792+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:52:07.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:05.018837+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:07.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:05.018837+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:06.537629+0000 mon.a (mon.0) 500 : cluster [INF] osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] boot 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:06.537629+0000 mon.a (mon.0) 500 : cluster [INF] osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] boot 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:06.537687+0000 mon.a (mon.0) 501 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:06.537687+0000 mon.a (mon.0) 501 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:06.538784+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:06.538784+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:06.648374+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:06.648374+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:06.653491+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:06.653491+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:07.073454+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v133: 1 pgs: 1 peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:07.073454+0000 mgr.y 
(mgr.14150) 158 : cluster [DBG] pgmap v133: 1 pgs: 1 peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:07.089044+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:07.089044+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:07.090866+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:07.090866+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:07.096500+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: audit 2026-03-09T15:52:07.096500+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:07.536909+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:07 vm01 bash[28152]: cluster 2026-03-09T15:52:07.536909+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:05.018792+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:05.018792+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:05.018837+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:05.018837+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:06.537629+0000 mon.a (mon.0) 500 : cluster [INF] osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] boot 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:06.537629+0000 mon.a (mon.0) 500 : cluster [INF] osd.4 [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] boot 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:06.537687+0000 mon.a 
(mon.0) 501 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:06.537687+0000 mon.a (mon.0) 501 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:06.538784+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:06.538784+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:06.648374+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:06.648374+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:06.653491+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:06.653491+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:07.073454+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v133: 1 pgs: 1 peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:07.073454+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v133: 1 pgs: 1 peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:07.089044+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:07.089044+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:07.090866+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:07.090866+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:07.096500+0000 mon.a (mon.0) 507 : 
audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: audit 2026-03-09T15:52:07.096500+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:07.536909+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T15:52:07.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:07 vm01 bash[20728]: cluster 2026-03-09T15:52:07.536909+0000 mon.a (mon.0) 508 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T15:52:07.935 DEBUG:teuthology.orchestra.run.vm09:osd.4> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.4.service 2026-03-09T15:52:07.936 INFO:tasks.cephadm:Deploying osd.5 on vm09 with /dev/vdd... 2026-03-09T15:52:07.936 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- lvm zap /dev/vdd 2026-03-09T15:52:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:08 vm01 bash[28152]: audit 2026-03-09T15:52:07.729745+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:08 vm01 bash[28152]: audit 2026-03-09T15:52:07.729745+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:08 vm01 bash[28152]: audit 2026-03-09T15:52:07.738330+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:08 vm01 bash[28152]: audit 2026-03-09T15:52:07.738330+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:08 vm01 bash[28152]: audit 2026-03-09T15:52:07.745735+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:08 vm01 bash[28152]: audit 2026-03-09T15:52:07.745735+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:08 vm01 bash[28152]: cluster 2026-03-09T15:52:08.539180+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T15:52:08.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:08 vm01 bash[28152]: cluster 2026-03-09T15:52:08.539180+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T15:52:08.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:08 vm01 bash[20728]: audit 2026-03-09T15:52:07.729745+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:08.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:08 vm01 bash[20728]: audit 2026-03-09T15:52:07.729745+0000 mon.a (mon.0) 509 
: audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:08.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:08 vm01 bash[20728]: audit 2026-03-09T15:52:07.738330+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:08.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:08 vm01 bash[20728]: audit 2026-03-09T15:52:07.738330+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:08.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:08 vm01 bash[20728]: audit 2026-03-09T15:52:07.745735+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:08.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:08 vm01 bash[20728]: audit 2026-03-09T15:52:07.745735+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:08.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:08 vm01 bash[20728]: cluster 2026-03-09T15:52:08.539180+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T15:52:08.934 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:08 vm01 bash[20728]: cluster 2026-03-09T15:52:08.539180+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T15:52:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:08 vm09 bash[22983]: audit 2026-03-09T15:52:07.729745+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:08 vm09 bash[22983]: audit 2026-03-09T15:52:07.729745+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:08 vm09 bash[22983]: audit 2026-03-09T15:52:07.738330+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:08 vm09 bash[22983]: audit 2026-03-09T15:52:07.738330+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:08 vm09 bash[22983]: audit 2026-03-09T15:52:07.745735+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:08 vm09 bash[22983]: audit 2026-03-09T15:52:07.745735+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:08 vm09 bash[22983]: cluster 2026-03-09T15:52:08.539180+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T15:52:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:08 vm09 bash[22983]: cluster 2026-03-09T15:52:08.539180+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-09T15:52:09.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:09 vm01 bash[28152]: cluster 2026-03-09T15:52:09.073709+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 peering; 449 KiB data, 
134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:09.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:09 vm01 bash[28152]: cluster 2026-03-09T15:52:09.073709+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:09.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:09 vm01 bash[20728]: cluster 2026-03-09T15:52:09.073709+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:09.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:09 vm01 bash[20728]: cluster 2026-03-09T15:52:09.073709+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:09 vm09 bash[22983]: cluster 2026-03-09T15:52:09.073709+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:09 vm09 bash[22983]: cluster 2026-03-09T15:52:09.073709+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:12.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:12 vm01 bash[28152]: cluster 2026-03-09T15:52:11.074035+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v137: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:12.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:12 vm01 bash[28152]: cluster 2026-03-09T15:52:11.074035+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v137: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:12.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:12 vm01 bash[20728]: cluster 2026-03-09T15:52:11.074035+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v137: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:12.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:12 vm01 bash[20728]: cluster 2026-03-09T15:52:11.074035+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v137: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:12.541 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:52:12.568 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:12 vm09 bash[22983]: cluster 2026-03-09T15:52:11.074035+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v137: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:12.568 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:12 vm09 bash[22983]: cluster 2026-03-09T15:52:11.074035+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v137: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:13.438 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:52:13.448 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch daemon add osd vm09:/dev/vdd 2026-03-09T15:52:14.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:14 vm01 bash[28152]: cluster 2026-03-09T15:52:13.074330+0000 mgr.y (mgr.14150) 161 : cluster 
[DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-09T15:52:14.433 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:14 vm01 bash[28152]: cluster 2026-03-09T15:52:13.074330+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-09T15:52:14.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:14 vm01 bash[20728]: cluster 2026-03-09T15:52:13.074330+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-09T15:52:14.433 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:14 vm01 bash[20728]: cluster 2026-03-09T15:52:13.074330+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-09T15:52:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:14 vm09 bash[22983]: cluster 2026-03-09T15:52:13.074330+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-09T15:52:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:14 vm09 bash[22983]: cluster 2026-03-09T15:52:13.074330+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: cephadm 2026-03-09T15:52:14.242008+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: cephadm 2026-03-09T15:52:14.242008+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.250875+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.250875+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.259671+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.259671+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.262058+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.262058+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": 
"osd_memory_target"}]: dispatch 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: cephadm 2026-03-09T15:52:14.262822+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Adjusting osd_memory_target on vm09 to 455.7M 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: cephadm 2026-03-09T15:52:14.262822+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Adjusting osd_memory_target on vm09 to 455.7M 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: cephadm 2026-03-09T15:52:14.263775+0000 mgr.y (mgr.14150) 164 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: cephadm 2026-03-09T15:52:14.263775+0000 mgr.y (mgr.14150) 164 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.264301+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.264301+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.265005+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.265005+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.271821+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:15 vm09 bash[22983]: audit 2026-03-09T15:52:14.271821+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: cephadm 2026-03-09T15:52:14.242008+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: cephadm 2026-03-09T15:52:14.242008+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.250875+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.250875+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.259671+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.259671+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.262058+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.262058+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: cephadm 2026-03-09T15:52:14.262822+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Adjusting osd_memory_target on vm09 to 455.7M 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: cephadm 2026-03-09T15:52:14.262822+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Adjusting osd_memory_target on vm09 to 455.7M 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: cephadm 2026-03-09T15:52:14.263775+0000 mgr.y (mgr.14150) 164 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: cephadm 2026-03-09T15:52:14.263775+0000 mgr.y (mgr.14150) 164 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.264301+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.264301+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.265005+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.265005+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.271821+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.683 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:15 vm01 bash[20728]: audit 2026-03-09T15:52:14.271821+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: cephadm 2026-03-09T15:52:14.242008+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: cephadm 2026-03-09T15:52:14.242008+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.250875+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.250875+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.259671+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.259671+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.262058+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.262058+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: cephadm 2026-03-09T15:52:14.262822+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Adjusting osd_memory_target on vm09 to 455.7M 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: cephadm 2026-03-09T15:52:14.262822+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Adjusting osd_memory_target on vm09 to 455.7M 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: cephadm 2026-03-09T15:52:14.263775+0000 mgr.y (mgr.14150) 164 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: cephadm 2026-03-09T15:52:14.263775+0000 mgr.y (mgr.14150) 164 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.264301+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.264301+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.265005+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.265005+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.271821+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:15.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:15 vm01 bash[28152]: audit 2026-03-09T15:52:14.271821+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:16 vm09 bash[22983]: cluster 2026-03-09T15:52:15.074603+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T15:52:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:16 vm09 bash[22983]: cluster 2026-03-09T15:52:15.074603+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T15:52:16.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:16 vm01 bash[28152]: cluster 2026-03-09T15:52:15.074603+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T15:52:16.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:16 vm01 bash[28152]: cluster 2026-03-09T15:52:15.074603+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T15:52:16.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:16 vm01 bash[20728]: cluster 2026-03-09T15:52:15.074603+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T15:52:16.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:16 vm01 bash[20728]: cluster 2026-03-09T15:52:15.074603+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T15:52:18.080 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:52:18.264 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 -- 192.168.123.109:0/305055670 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2934077aa0 msgr2=0x7f2934077ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:52:18.264 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 --2- 192.168.123.109:0/305055670 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2934077aa0 0x7f2934077ea0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f291c009a30 tx=0x7f291c02f240 comp rx=0 tx=0).stop 2026-03-09T15:52:18.264 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 -- 192.168.123.109:0/305055670 shutdown_connections 2026-03-09T15:52:18.264 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 --2- 192.168.123.109:0/305055670 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2934079690 0x7f2934079f30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:18.264 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 --2- 192.168.123.109:0/305055670 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2934078cf0 0x7f2934079150 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:18.264 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 --2- 192.168.123.109:0/305055670 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2934077aa0 0x7f2934077ea0 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:18.264 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 -- 192.168.123.109:0/305055670 >> 192.168.123.109:0/305055670 conn(0x7f2934100520 msgr2=0x7f2934102940 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 -- 192.168.123.109:0/305055670 shutdown_connections 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 -- 192.168.123.109:0/305055670 wait complete. 
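The repeated cephadm [WRN] entries earlier in this window ("Unable to set osd_memory_target on vm09 to 477921689: ... below minimum 939524096") are the mgr's memory autotuning computing roughly 455.7 MB per OSD on these small VMs, which is under the 939524096-byte (896 MiB) floor for osd_memory_target. A minimal sketch of how one might inspect or switch off that autotuning on undersized hosts, assuming the stock Ceph CLI; the option names below are standard Ceph config options, not commands taken from this run:

    # inspect the current target and whether cephadm autotuning is enabled
    ceph config get osd osd_memory_target
    ceph config get osd osd_memory_target_autotune
    # on hosts too small for the autotuned value, autotuning can be disabled
    ceph config set osd osd_memory_target_autotune false
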
2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 Processor -- start 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 -- start start 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2934077aa0 0x7f29341a0950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2934078cf0 0x7f29341a0e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2934079690 0x7f29341a7f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f29341141b0 con 0x7f2934077aa0 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f2934114030 con 0x7f2934079690 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f293b176640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f2934114330 con 0x7f2934078cf0 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f2938eeb640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2934077aa0 0x7f29341a0950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f2938eeb640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2934077aa0 0x7f29341a0950 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.109:44082/0 (socket says 192.168.123.109:44082) 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f2938eeb640 1 -- 192.168.123.109:0/4284020605 learned_addr learned my addr 192.168.123.109:0/4284020605 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:52:18.265 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f292bfff640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2934078cf0 0x7f29341a0e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f29396ec640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2934079690 0x7f29341a7f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f2938eeb640 1 -- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2934078cf0 msgr2=0x7f29341a0e90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f2938eeb640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2934078cf0 0x7f29341a0e90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f2938eeb640 1 -- 192.168.123.109:0/4284020605 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2934079690 msgr2=0x7f29341a7f10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f2938eeb640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2934079690 0x7f29341a7f10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f2938eeb640 1 -- 192.168.123.109:0/4284020605 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f29341a8610 con 0x7f2934077aa0 2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f29396ec640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2934079690 0x7f29341a7f10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
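The stderr messenger trace above belongs to the cephadm shell that teuthology launched to run "ceph orch daemon add osd vm09:/dev/vdd" after zapping the device. A minimal sketch of that two-step flow, condensed from the invocations visible in this log (the real commands also pass --image, --fsid, -c and -k, elided here):

    # wipe any previous LVM/partition state on the target device
    sudo cephadm ceph-volume -- lvm zap /dev/vdd
    # then ask the orchestrator to create an OSD on it
    sudo cephadm shell -- ceph orch daemon add osd vm09:/dev/vdd
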
2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f2938eeb640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2934077aa0 0x7f29341a0950 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f291c002a50 tx=0x7f291c02fcd0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.259+0000 7f2929ffb640 1 -- 192.168.123.109:0/4284020605 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f291c004500 con 0x7f2934077aa0 2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.263+0000 7f2929ffb640 1 -- 192.168.123.109:0/4284020605 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f291c005470 con 0x7f2934077aa0 2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.263+0000 7f293b176640 1 -- 192.168.123.109:0/4284020605 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f29341a88a0 con 0x7f2934077aa0 2026-03-09T15:52:18.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.263+0000 7f293b176640 1 -- 192.168.123.109:0/4284020605 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f29341a8d30 con 0x7f2934077aa0 2026-03-09T15:52:18.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.263+0000 7f2929ffb640 1 -- 192.168.123.109:0/4284020605 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f291c038470 con 0x7f2934077aa0 2026-03-09T15:52:18.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.263+0000 7f293b176640 1 -- 192.168.123.109:0/4284020605 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f28fc005180 con 0x7f2934077aa0 2026-03-09T15:52:18.268 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.263+0000 7f2929ffb640 1 -- 192.168.123.109:0/4284020605 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f291c005000 con 0x7f2934077aa0 2026-03-09T15:52:18.268 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.263+0000 7f2929ffb640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f290c077580 0x7f290c079a40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:52:18.268 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.263+0000 7f2929ffb640 1 -- 192.168.123.109:0/4284020605 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(33..33 src has 1..33) ==== 3971+0+0 (secure 0 0 0) 0x7f291c0bd940 con 0x7f2934077aa0 2026-03-09T15:52:18.269 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.263+0000 7f292bfff640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f290c077580 0x7f290c079a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:52:18.269 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.263+0000 7f292bfff640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f290c077580 0x7f290c079a40 
secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f29341a1e70 tx=0x7f292400a5c0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:52:18.271 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.267+0000 7f2929ffb640 1 -- 192.168.123.109:0/4284020605 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f291c087390 con 0x7f2934077aa0 2026-03-09T15:52:18.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:18.371+0000 7f293b176640 1 -- 192.168.123.109:0/4284020605 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7f28fc002bf0 con 0x7f290c077580 2026-03-09T15:52:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:18 vm09 bash[22983]: cluster 2026-03-09T15:52:17.074849+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T15:52:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:18 vm09 bash[22983]: cluster 2026-03-09T15:52:17.074849+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T15:52:18.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:18 vm01 bash[28152]: cluster 2026-03-09T15:52:17.074849+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T15:52:18.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:18 vm01 bash[28152]: cluster 2026-03-09T15:52:17.074849+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T15:52:18.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:18 vm01 bash[20728]: cluster 2026-03-09T15:52:17.074849+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T15:52:18.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:18 vm01 bash[20728]: cluster 2026-03-09T15:52:17.074849+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T15:52:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:19 vm09 bash[22983]: audit 2026-03-09T15:52:18.375404+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:19 vm09 bash[22983]: audit 2026-03-09T15:52:18.375404+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:19 vm09 bash[22983]: audit 2026-03-09T15:52:18.376881+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], 
"format": "json"}]: dispatch 2026-03-09T15:52:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:19 vm09 bash[22983]: audit 2026-03-09T15:52:18.376881+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:52:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:19 vm09 bash[22983]: audit 2026-03-09T15:52:18.378191+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:19 vm09 bash[22983]: audit 2026-03-09T15:52:18.378191+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:19 vm09 bash[22983]: audit 2026-03-09T15:52:18.378550+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:19 vm09 bash[22983]: audit 2026-03-09T15:52:18.378550+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:19.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:19 vm01 bash[28152]: audit 2026-03-09T15:52:18.375404+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:19 vm01 bash[28152]: audit 2026-03-09T15:52:18.375404+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:19 vm01 bash[28152]: audit 2026-03-09T15:52:18.376881+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:19 vm01 bash[28152]: audit 2026-03-09T15:52:18.376881+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:19 vm01 bash[28152]: audit 2026-03-09T15:52:18.378191+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:19 vm01 bash[28152]: audit 2026-03-09T15:52:18.378191+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:19 vm01 bash[28152]: audit 2026-03-09T15:52:18.378550+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:19 vm01 bash[28152]: audit 2026-03-09T15:52:18.378550+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:19 vm01 bash[20728]: audit 2026-03-09T15:52:18.375404+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:19 vm01 bash[20728]: audit 2026-03-09T15:52:18.375404+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:19 vm01 bash[20728]: audit 2026-03-09T15:52:18.376881+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:19 vm01 bash[20728]: audit 2026-03-09T15:52:18.376881+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:19 vm01 bash[20728]: audit 2026-03-09T15:52:18.378191+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:19 vm01 bash[20728]: audit 2026-03-09T15:52:18.378191+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:19 vm01 bash[20728]: audit 2026-03-09T15:52:18.378550+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:19.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:19 vm01 bash[20728]: audit 2026-03-09T15:52:18.378550+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:20.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:20 vm09 bash[22983]: cluster 2026-03-09T15:52:19.075068+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-09T15:52:20.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:20 vm09 bash[22983]: cluster 2026-03-09T15:52:19.075068+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-09T15:52:20.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:20 vm01 bash[28152]: cluster 2026-03-09T15:52:19.075068+0000 mgr.y 
(mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-09T15:52:20.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:20 vm01 bash[28152]: cluster 2026-03-09T15:52:19.075068+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-09T15:52:20.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:20 vm01 bash[20728]: cluster 2026-03-09T15:52:19.075068+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-09T15:52:20.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:20 vm01 bash[20728]: cluster 2026-03-09T15:52:19.075068+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-09T15:52:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:22 vm09 bash[22983]: cluster 2026-03-09T15:52:21.075305+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:22 vm09 bash[22983]: cluster 2026-03-09T15:52:21.075305+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:22.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:22 vm01 bash[28152]: cluster 2026-03-09T15:52:21.075305+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:22.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:22 vm01 bash[28152]: cluster 2026-03-09T15:52:21.075305+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:22.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:22 vm01 bash[20728]: cluster 2026-03-09T15:52:21.075305+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:22.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:22 vm01 bash[20728]: cluster 2026-03-09T15:52:21.075305+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: cluster 2026-03-09T15:52:23.076353+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: cluster 2026-03-09T15:52:23.076353+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: audit 2026-03-09T15:52:23.947374+0000 mon.a 
(mon.0) 522 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: audit 2026-03-09T15:52:23.947374+0000 mon.a (mon.0) 522 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: audit 2026-03-09T15:52:23.948415+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 192.168.123.109:0/2438802976' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: audit 2026-03-09T15:52:23.948415+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 192.168.123.109:0/2438802976' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: audit 2026-03-09T15:52:23.958992+0000 mon.a (mon.0) 523 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]': finished 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: audit 2026-03-09T15:52:23.958992+0000 mon.a (mon.0) 523 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]': finished 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: cluster 2026-03-09T15:52:23.964192+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: cluster 2026-03-09T15:52:23.964192+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: audit 2026-03-09T15:52:23.964297+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:24 vm01 bash[28152]: audit 2026-03-09T15:52:23.964297+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: cluster 2026-03-09T15:52:23.076353+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: cluster 2026-03-09T15:52:23.076353+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:24.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: audit 2026-03-09T15:52:23.947374+0000 mon.a (mon.0) 522 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: audit 2026-03-09T15:52:23.947374+0000 mon.a (mon.0) 522 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: audit 2026-03-09T15:52:23.948415+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 192.168.123.109:0/2438802976' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: audit 2026-03-09T15:52:23.948415+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 192.168.123.109:0/2438802976' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: audit 2026-03-09T15:52:23.958992+0000 mon.a (mon.0) 523 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]': finished 2026-03-09T15:52:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: audit 2026-03-09T15:52:23.958992+0000 mon.a (mon.0) 523 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]': finished 2026-03-09T15:52:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: cluster 2026-03-09T15:52:23.964192+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T15:52:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: cluster 2026-03-09T15:52:23.964192+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T15:52:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: audit 2026-03-09T15:52:23.964297+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:24.683 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:24 vm01 bash[20728]: audit 2026-03-09T15:52:23.964297+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: cluster 2026-03-09T15:52:23.076353+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: cluster 2026-03-09T15:52:23.076353+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: audit 2026-03-09T15:52:23.947374+0000 mon.a (mon.0) 522 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: audit 2026-03-09T15:52:23.947374+0000 mon.a (mon.0) 522 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: audit 2026-03-09T15:52:23.948415+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 192.168.123.109:0/2438802976' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: audit 2026-03-09T15:52:23.948415+0000 mon.b (mon.1) 12 : audit [INF] from='client.? 192.168.123.109:0/2438802976' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]: dispatch 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: audit 2026-03-09T15:52:23.958992+0000 mon.a (mon.0) 523 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]': finished 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: audit 2026-03-09T15:52:23.958992+0000 mon.a (mon.0) 523 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b983b1a8-523b-4ebf-b245-ff2849d684be"}]': finished 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: cluster 2026-03-09T15:52:23.964192+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: cluster 2026-03-09T15:52:23.964192+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: audit 2026-03-09T15:52:23.964297+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:24 vm09 bash[22983]: audit 2026-03-09T15:52:23.964297+0000 mon.a (mon.0) 525 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:25.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:25 vm01 bash[28152]: audit 2026-03-09T15:52:24.606807+0000 mon.a (mon.0) 526 : audit [DBG] from='client.? 192.168.123.109:0/3299302582' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:52:25.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:25 vm01 bash[28152]: audit 2026-03-09T15:52:24.606807+0000 mon.a (mon.0) 526 : audit [DBG] from='client.? 192.168.123.109:0/3299302582' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:52:25.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:25 vm01 bash[20728]: audit 2026-03-09T15:52:24.606807+0000 mon.a (mon.0) 526 : audit [DBG] from='client.? 
192.168.123.109:0/3299302582' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:52:25.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:25 vm01 bash[20728]: audit 2026-03-09T15:52:24.606807+0000 mon.a (mon.0) 526 : audit [DBG] from='client.? 192.168.123.109:0/3299302582' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:52:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:25 vm09 bash[22983]: audit 2026-03-09T15:52:24.606807+0000 mon.a (mon.0) 526 : audit [DBG] from='client.? 192.168.123.109:0/3299302582' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:52:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:25 vm09 bash[22983]: audit 2026-03-09T15:52:24.606807+0000 mon.a (mon.0) 526 : audit [DBG] from='client.? 192.168.123.109:0/3299302582' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:52:26.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:26 vm01 bash[20728]: cluster 2026-03-09T15:52:25.076640+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:26.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:26 vm01 bash[20728]: cluster 2026-03-09T15:52:25.076640+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:26.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:26 vm01 bash[28152]: cluster 2026-03-09T15:52:25.076640+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:26.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:26 vm01 bash[28152]: cluster 2026-03-09T15:52:25.076640+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:26 vm09 bash[22983]: cluster 2026-03-09T15:52:25.076640+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:26 vm09 bash[22983]: cluster 2026-03-09T15:52:25.076640+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:28.681 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:28 vm01 bash[28152]: cluster 2026-03-09T15:52:27.076901+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:28.681 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:28 vm01 bash[28152]: cluster 2026-03-09T15:52:27.076901+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:28.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:28 vm01 bash[20728]: cluster 2026-03-09T15:52:27.076901+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:28.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:28 vm01 bash[20728]: cluster 2026-03-09T15:52:27.076901+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB 
/ 100 GiB avail 2026-03-09T15:52:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:28 vm09 bash[22983]: cluster 2026-03-09T15:52:27.076901+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:28.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:28 vm09 bash[22983]: cluster 2026-03-09T15:52:27.076901+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:30.681 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:30 vm01 bash[28152]: cluster 2026-03-09T15:52:29.077143+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:30.682 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:30 vm01 bash[28152]: cluster 2026-03-09T15:52:29.077143+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:30.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:30 vm01 bash[20728]: cluster 2026-03-09T15:52:29.077143+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:30.682 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:30 vm01 bash[20728]: cluster 2026-03-09T15:52:29.077143+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:30 vm09 bash[22983]: cluster 2026-03-09T15:52:29.077143+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:30 vm09 bash[22983]: cluster 2026-03-09T15:52:29.077143+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:32.680 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:32 vm09 bash[22983]: cluster 2026-03-09T15:52:31.077480+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:32.680 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:32 vm09 bash[22983]: cluster 2026-03-09T15:52:31.077480+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:32.681 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:32 vm01 bash[28152]: cluster 2026-03-09T15:52:31.077480+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:32.681 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:32 vm01 bash[28152]: cluster 2026-03-09T15:52:31.077480+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:32.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:32 vm01 bash[20728]: cluster 2026-03-09T15:52:31.077480+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:32.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:32 vm01 bash[20728]: cluster 
2026-03-09T15:52:31.077480+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:33 vm09 bash[22983]: audit 2026-03-09T15:52:33.114862+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T15:52:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:33 vm09 bash[22983]: audit 2026-03-09T15:52:33.114862+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T15:52:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:33 vm09 bash[22983]: audit 2026-03-09T15:52:33.115401+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:33 vm09 bash[22983]: audit 2026-03-09T15:52:33.115401+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:33.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:33 vm01 bash[28152]: audit 2026-03-09T15:52:33.114862+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T15:52:33.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:33 vm01 bash[28152]: audit 2026-03-09T15:52:33.114862+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T15:52:33.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:33 vm01 bash[28152]: audit 2026-03-09T15:52:33.115401+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:33.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:33 vm01 bash[28152]: audit 2026-03-09T15:52:33.115401+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:33.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:33 vm01 bash[20728]: audit 2026-03-09T15:52:33.114862+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T15:52:33.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:33 vm01 bash[20728]: audit 2026-03-09T15:52:33.114862+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T15:52:33.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:33 vm01 bash[20728]: audit 2026-03-09T15:52:33.115401+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:33.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:33 vm01 bash[20728]: audit 2026-03-09T15:52:33.115401+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T15:52:34.208 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:33 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:52:34.208 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:34 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:52:34.208 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:52:33 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:52:34.208 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:52:34 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:52:34.208 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:52:33 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:52:34.208 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:52:34 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:52:34.465 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:34 vm09 bash[22983]: cluster 2026-03-09T15:52:33.077874+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:34.466 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:34 vm09 bash[22983]: cluster 2026-03-09T15:52:33.077874+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:34.466 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:34 vm09 bash[22983]: cephadm 2026-03-09T15:52:33.115841+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm09 2026-03-09T15:52:34.466 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:34 vm09 bash[22983]: cephadm 2026-03-09T15:52:33.115841+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm09 2026-03-09T15:52:34.466 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:34 vm09 bash[22983]: audit 2026-03-09T15:52:34.228583+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:34.466 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:34 vm09 bash[22983]: audit 2026-03-09T15:52:34.228583+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:34.466 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:34 vm09 bash[22983]: audit 2026-03-09T15:52:34.235144+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:34.466 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:34 vm09 bash[22983]: audit 2026-03-09T15:52:34.235144+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:34.466 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:34 vm09 bash[22983]: audit 2026-03-09T15:52:34.241562+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:34.466 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:34 vm09 bash[22983]: audit 2026-03-09T15:52:34.241562+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:34.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:34 vm01 bash[28152]: cluster 2026-03-09T15:52:33.077874+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:34 vm01 bash[28152]: cluster 2026-03-09T15:52:33.077874+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:34 vm01 bash[28152]: cephadm 2026-03-09T15:52:33.115841+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm09 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:34 vm01 bash[28152]: cephadm 2026-03-09T15:52:33.115841+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm09 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:34 vm01 bash[28152]: audit 2026-03-09T15:52:34.228583+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:34 vm01 bash[28152]: audit 2026-03-09T15:52:34.228583+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:34 vm01 bash[28152]: audit 2026-03-09T15:52:34.235144+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:34 vm01 bash[28152]: audit 2026-03-09T15:52:34.235144+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:34 vm01 bash[28152]: audit 2026-03-09T15:52:34.241562+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:34 vm01 bash[28152]: audit 2026-03-09T15:52:34.241562+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:34 vm01 bash[20728]: cluster 2026-03-09T15:52:33.077874+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:34 vm01 bash[20728]: cluster 2026-03-09T15:52:33.077874+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:34 vm01 bash[20728]: cephadm 2026-03-09T15:52:33.115841+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm09 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:34 vm01 bash[20728]: cephadm 2026-03-09T15:52:33.115841+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm09 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:34 vm01 bash[20728]: audit 2026-03-09T15:52:34.228583+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:34 vm01 bash[20728]: audit 2026-03-09T15:52:34.228583+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:34 vm01 bash[20728]: audit 2026-03-09T15:52:34.235144+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:34 vm01 bash[20728]: audit 2026-03-09T15:52:34.235144+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:34.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:34 vm01 bash[20728]: audit 2026-03-09T15:52:34.241562+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:34.932 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:34 vm01 bash[20728]: audit 2026-03-09T15:52:34.241562+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:36 vm09 bash[22983]: cluster 2026-03-09T15:52:35.078168+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:36 vm09 bash[22983]: cluster 2026-03-09T15:52:35.078168+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:36.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:36 vm01 bash[28152]: cluster 2026-03-09T15:52:35.078168+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:36.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:36 vm01 bash[28152]: cluster 2026-03-09T15:52:35.078168+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:36.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:36 vm01 bash[20728]: cluster 2026-03-09T15:52:35.078168+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:36.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:36 vm01 bash[20728]: cluster 2026-03-09T15:52:35.078168+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:38.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:38 vm09 bash[22983]: cluster 2026-03-09T15:52:37.078448+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:38.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:38 vm09 bash[22983]: cluster 2026-03-09T15:52:37.078448+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:38.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:38 vm09 bash[22983]: audit 2026-03-09T15:52:38.114781+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:52:38.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:38 vm09 bash[22983]: audit 2026-03-09T15:52:38.114781+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:52:38.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:38 vm09 bash[22983]: audit 2026-03-09T15:52:38.115059+0000 mon.b (mon.1) 13 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:52:38.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:38 vm09 bash[22983]: audit 2026-03-09T15:52:38.115059+0000 mon.b (mon.1) 13 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["5"]}]: dispatch 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:38 vm01 bash[28152]: cluster 2026-03-09T15:52:37.078448+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:38 vm01 bash[28152]: cluster 2026-03-09T15:52:37.078448+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:38 vm01 bash[28152]: audit 2026-03-09T15:52:38.114781+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:38 vm01 bash[28152]: audit 2026-03-09T15:52:38.114781+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:38 vm01 bash[28152]: audit 2026-03-09T15:52:38.115059+0000 mon.b (mon.1) 13 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:38 vm01 bash[28152]: audit 2026-03-09T15:52:38.115059+0000 mon.b (mon.1) 13 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:38 vm01 bash[20728]: cluster 2026-03-09T15:52:37.078448+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:38 vm01 bash[20728]: cluster 2026-03-09T15:52:37.078448+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:38 vm01 bash[20728]: audit 2026-03-09T15:52:38.114781+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:38 vm01 bash[20728]: audit 2026-03-09T15:52:38.114781+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:38 vm01 bash[20728]: audit 2026-03-09T15:52:38.115059+0000 mon.b (mon.1) 13 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:52:38.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:38 vm01 bash[20728]: audit 2026-03-09T15:52:38.115059+0000 mon.b (mon.1) 13 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:38.492068+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:38.492068+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:38.496012+0000 mon.b (mon.1) 14 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:38.496012+0000 mon.b (mon.1) 14 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: cluster 2026-03-09T15:52:38.497754+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: cluster 2026-03-09T15:52:38.497754+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:38.509372+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:38.509372+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:38.509588+0000 mon.a (mon.0) 536 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:38.509588+0000 mon.a (mon.0) 536 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:39.495012+0000 mon.a (mon.0) 537 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:39.495012+0000 mon.a (mon.0) 537 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush 
create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: cluster 2026-03-09T15:52:39.501710+0000 mon.a (mon.0) 538 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: cluster 2026-03-09T15:52:39.501710+0000 mon.a (mon.0) 538 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:39.502472+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:39.502472+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:39.503079+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:39 vm09 bash[22983]: audit 2026-03-09T15:52:39.503079+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:38.492068+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:38.492068+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:38.496012+0000 mon.b (mon.1) 14 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:38.496012+0000 mon.b (mon.1) 14 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: cluster 2026-03-09T15:52:38.497754+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: cluster 2026-03-09T15:52:38.497754+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:38.509372+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:38.509372+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:38.509588+0000 mon.a (mon.0) 536 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:38.509588+0000 mon.a (mon.0) 536 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:39.495012+0000 mon.a (mon.0) 537 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:39.495012+0000 mon.a (mon.0) 537 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: cluster 2026-03-09T15:52:39.501710+0000 mon.a (mon.0) 538 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: cluster 2026-03-09T15:52:39.501710+0000 mon.a (mon.0) 538 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:39.502472+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:39.502472+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:39.503079+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:39 vm01 bash[28152]: audit 2026-03-09T15:52:39.503079+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:38.492068+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 
2026-03-09T15:52:38.492068+0000 mon.a (mon.0) 533 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:38.496012+0000 mon.b (mon.1) 14 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:38.496012+0000 mon.b (mon.1) 14 : audit [INF] from='osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: cluster 2026-03-09T15:52:38.497754+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: cluster 2026-03-09T15:52:38.497754+0000 mon.a (mon.0) 534 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:38.509372+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:38.509372+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:38.509588+0000 mon.a (mon.0) 536 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:38.509588+0000 mon.a (mon.0) 536 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:52:39.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:39.495012+0000 mon.a (mon.0) 537 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:39.495012+0000 mon.a (mon.0) 537 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:52:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: cluster 2026-03-09T15:52:39.501710+0000 mon.a (mon.0) 538 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T15:52:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: cluster 2026-03-09T15:52:39.501710+0000 mon.a (mon.0) 538 : cluster 
[DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-09T15:52:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:39.502472+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:39.502472+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:39.503079+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:39 vm01 bash[20728]: audit 2026-03-09T15:52:39.503079+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: cluster 2026-03-09T15:52:39.078663+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: cluster 2026-03-09T15:52:39.078663+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.353853+0000 mon.a (mon.0) 541 : audit [INF] from='osd.5 ' entity='osd.5' 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.353853+0000 mon.a (mon.0) 541 : audit [INF] from='osd.5 ' entity='osd.5' 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.485890+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.485890+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.491683+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.491683+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.492580+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.492580+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.493163+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.493163+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.498207+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.498207+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.504839+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.504839+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: cluster 2026-03-09T15:52:40.511062+0000 mon.a (mon.0) 548 : cluster [INF] osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] boot 2026-03-09T15:52:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: cluster 2026-03-09T15:52:40.511062+0000 mon.a (mon.0) 548 : cluster [INF] osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] boot 2026-03-09T15:52:40.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: cluster 2026-03-09T15:52:40.511095+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in 2026-03-09T15:52:40.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: cluster 2026-03-09T15:52:40.511095+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in 2026-03-09T15:52:40.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.511201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:40 vm09 bash[22983]: audit 2026-03-09T15:52:40.511201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: cluster 2026-03-09T15:52:39.078663+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:40.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: cluster 2026-03-09T15:52:39.078663+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 
134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:40.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.353853+0000 mon.a (mon.0) 541 : audit [INF] from='osd.5 ' entity='osd.5' 2026-03-09T15:52:40.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.353853+0000 mon.a (mon.0) 541 : audit [INF] from='osd.5 ' entity='osd.5' 2026-03-09T15:52:40.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.485890+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.485890+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.491683+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.491683+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.492580+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.492580+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.493163+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.493163+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.498207+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.498207+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.504839+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.504839+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.932 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: cluster 2026-03-09T15:52:40.511062+0000 mon.a (mon.0) 548 : cluster [INF] osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] boot 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: cluster 2026-03-09T15:52:40.511062+0000 mon.a (mon.0) 548 : cluster [INF] osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] boot 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: cluster 2026-03-09T15:52:40.511095+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: cluster 2026-03-09T15:52:40.511095+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.511201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:40 vm01 bash[28152]: audit 2026-03-09T15:52:40.511201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: cluster 2026-03-09T15:52:39.078663+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: cluster 2026-03-09T15:52:39.078663+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.353853+0000 mon.a (mon.0) 541 : audit [INF] from='osd.5 ' entity='osd.5' 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.353853+0000 mon.a (mon.0) 541 : audit [INF] from='osd.5 ' entity='osd.5' 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.485890+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.485890+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.491683+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.491683+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.492580+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.492580+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.493163+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.493163+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.498207+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.498207+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.504839+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.504839+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: cluster 2026-03-09T15:52:40.511062+0000 mon.a (mon.0) 548 : cluster [INF] osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] boot 2026-03-09T15:52:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: cluster 2026-03-09T15:52:40.511062+0000 mon.a (mon.0) 548 : cluster [INF] osd.5 [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] boot 2026-03-09T15:52:40.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: cluster 2026-03-09T15:52:40.511095+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in 2026-03-09T15:52:40.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: cluster 2026-03-09T15:52:40.511095+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in 2026-03-09T15:52:40.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.511201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:40.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:40 vm01 bash[20728]: audit 2026-03-09T15:52:40.511201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:52:41.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.531+0000 7f2929ffb640 1 -- 192.168.123.109:0/4284020605 <== mgr.14150 v2:192.168.123.101:6800/1421049061 
1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f28fc002bf0 con 0x7f290c077580 2026-03-09T15:52:41.539 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 5 on host 'vm09' 2026-03-09T15:52:41.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 -- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f290c077580 msgr2=0x7f290c079a40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:52:41.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f290c077580 0x7f290c079a40 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f29341a1e70 tx=0x7f292400a5c0 comp rx=0 tx=0).stop 2026-03-09T15:52:41.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 -- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2934077aa0 msgr2=0x7f29341a0950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:52:41.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2934077aa0 0x7f29341a0950 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f291c002a50 tx=0x7f291c02fcd0 comp rx=0 tx=0).stop 2026-03-09T15:52:41.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 -- 192.168.123.109:0/4284020605 shutdown_connections 2026-03-09T15:52:41.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2934079690 0x7f29341a7f10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:41.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f290c077580 0x7f290c079a40 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:41.540 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2934078cf0 0x7f29341a0e90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:41.540 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 --2- 192.168.123.109:0/4284020605 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2934077aa0 0x7f29341a0950 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:41.540 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 -- 192.168.123.109:0/4284020605 >> 192.168.123.109:0/4284020605 conn(0x7f2934100520 msgr2=0x7f2934101fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:52:41.540 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 -- 192.168.123.109:0/4284020605 shutdown_connections 2026-03-09T15:52:41.540 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:41.535+0000 7f293b176640 1 -- 192.168.123.109:0/4284020605 wait complete. 
2026-03-09T15:52:41.636 DEBUG:teuthology.orchestra.run.vm09:osd.5> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.5.service 2026-03-09T15:52:41.637 INFO:tasks.cephadm:Deploying osd.6 on vm09 with /dev/vdc... 2026-03-09T15:52:41.637 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- lvm zap /dev/vdc 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: cluster 2026-03-09T15:52:39.069746+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: cluster 2026-03-09T15:52:39.069746+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: cluster 2026-03-09T15:52:39.069790+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: cluster 2026-03-09T15:52:39.069790+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: cluster 2026-03-09T15:52:41.078945+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: cluster 2026-03-09T15:52:41.078945+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: audit 2026-03-09T15:52:41.512498+0000 mon.a (mon.0) 551 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: audit 2026-03-09T15:52:41.512498+0000 mon.a (mon.0) 551 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: cluster 2026-03-09T15:52:41.517597+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: cluster 2026-03-09T15:52:41.517597+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: audit 2026-03-09T15:52:41.526948+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: audit 2026-03-09T15:52:41.526948+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:41 vm09 bash[22983]: audit 2026-03-09T15:52:41.534552+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:41.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
15:52:41 vm09 bash[22983]: audit 2026-03-09T15:52:41.534552+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: cluster 2026-03-09T15:52:39.069746+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: cluster 2026-03-09T15:52:39.069746+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: cluster 2026-03-09T15:52:39.069790+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: cluster 2026-03-09T15:52:39.069790+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: cluster 2026-03-09T15:52:41.078945+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: cluster 2026-03-09T15:52:41.078945+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: audit 2026-03-09T15:52:41.512498+0000 mon.a (mon.0) 551 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: audit 2026-03-09T15:52:41.512498+0000 mon.a (mon.0) 551 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: cluster 2026-03-09T15:52:41.517597+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: cluster 2026-03-09T15:52:41.517597+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: audit 2026-03-09T15:52:41.526948+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: audit 2026-03-09T15:52:41.526948+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: audit 2026-03-09T15:52:41.534552+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:41.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:41 vm01 bash[28152]: audit 2026-03-09T15:52:41.534552+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: cluster 2026-03-09T15:52:39.069746+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 
2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: cluster 2026-03-09T15:52:39.069746+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: cluster 2026-03-09T15:52:39.069790+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: cluster 2026-03-09T15:52:39.069790+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: cluster 2026-03-09T15:52:41.078945+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: cluster 2026-03-09T15:52:41.078945+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: audit 2026-03-09T15:52:41.512498+0000 mon.a (mon.0) 551 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: audit 2026-03-09T15:52:41.512498+0000 mon.a (mon.0) 551 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: cluster 2026-03-09T15:52:41.517597+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: cluster 2026-03-09T15:52:41.517597+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: audit 2026-03-09T15:52:41.526948+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: audit 2026-03-09T15:52:41.526948+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: audit 2026-03-09T15:52:41.534552+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:41.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:41 vm01 bash[20728]: audit 2026-03-09T15:52:41.534552+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:43.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:43 vm09 bash[22983]: cluster 2026-03-09T15:52:42.534223+0000 mon.a (mon.0) 555 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T15:52:43.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:43 vm09 bash[22983]: cluster 2026-03-09T15:52:42.534223+0000 mon.a (mon.0) 555 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T15:52:43.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:43 vm01 bash[28152]: cluster 2026-03-09T15:52:42.534223+0000 mon.a (mon.0) 555 : 
cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T15:52:43.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:43 vm01 bash[28152]: cluster 2026-03-09T15:52:42.534223+0000 mon.a (mon.0) 555 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T15:52:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:43 vm01 bash[20728]: cluster 2026-03-09T15:52:42.534223+0000 mon.a (mon.0) 555 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T15:52:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:43 vm01 bash[20728]: cluster 2026-03-09T15:52:42.534223+0000 mon.a (mon.0) 555 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-09T15:52:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:44 vm09 bash[22983]: cluster 2026-03-09T15:52:43.079248+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:44 vm09 bash[22983]: cluster 2026-03-09T15:52:43.079248+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:44.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:44 vm01 bash[28152]: cluster 2026-03-09T15:52:43.079248+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:44.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:44 vm01 bash[28152]: cluster 2026-03-09T15:52:43.079248+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:44 vm01 bash[20728]: cluster 2026-03-09T15:52:43.079248+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:44 vm01 bash[20728]: cluster 2026-03-09T15:52:43.079248+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:46.306 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:52:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:46 vm09 bash[22983]: cluster 2026-03-09T15:52:45.079517+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:46 vm09 bash[22983]: cluster 2026-03-09T15:52:45.079517+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:46.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:46 vm01 bash[28152]: cluster 2026-03-09T15:52:45.079517+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:46.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:46 vm01 bash[28152]: cluster 2026-03-09T15:52:45.079517+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:46.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:46 vm01 bash[20728]: cluster 
2026-03-09T15:52:45.079517+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:46.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:46 vm01 bash[20728]: cluster 2026-03-09T15:52:45.079517+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:47.177 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:52:47.197 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch daemon add osd vm09:/dev/vdc 2026-03-09T15:52:47.881 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:47 vm09 bash[22983]: cluster 2026-03-09T15:52:47.079808+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:47.881 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:47 vm09 bash[22983]: cluster 2026-03-09T15:52:47.079808+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:47.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:47 vm01 bash[28152]: cluster 2026-03-09T15:52:47.079808+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:47.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:47 vm01 bash[28152]: cluster 2026-03-09T15:52:47.079808+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:47.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:47 vm01 bash[20728]: cluster 2026-03-09T15:52:47.079808+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:47.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:47 vm01 bash[20728]: cluster 2026-03-09T15:52:47.079808+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: cephadm 2026-03-09T15:52:47.934215+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: cephadm 2026-03-09T15:52:47.934215+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.942178+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.942178+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.949889+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 
vm09 bash[22983]: audit 2026-03-09T15:52:47.949889+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.951215+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.951215+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.951763+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.951763+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: cephadm 2026-03-09T15:52:47.952121+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Adjusting osd_memory_target on vm09 to 227.8M 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: cephadm 2026-03-09T15:52:47.952121+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Adjusting osd_memory_target on vm09 to 227.8M 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: cephadm 2026-03-09T15:52:47.952555+0000 mgr.y (mgr.14150) 186 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T15:52:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: cephadm 2026-03-09T15:52:47.952555+0000 mgr.y (mgr.14150) 186 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T15:52:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.952917+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.952917+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.953399+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.953399+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-09T15:52:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.960740+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:48 vm09 bash[22983]: audit 2026-03-09T15:52:47.960740+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: cephadm 2026-03-09T15:52:47.934215+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: cephadm 2026-03-09T15:52:47.934215+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.942178+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.942178+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.949889+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.949889+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.951215+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.951215+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.951763+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.951763+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: cephadm 2026-03-09T15:52:47.952121+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Adjusting osd_memory_target on vm09 to 227.8M 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: cephadm 2026-03-09T15:52:47.952121+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Adjusting osd_memory_target on vm09 to 227.8M 2026-03-09T15:52:49.431 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: cephadm 2026-03-09T15:52:47.952555+0000 mgr.y (mgr.14150) 186 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: cephadm 2026-03-09T15:52:47.952555+0000 mgr.y (mgr.14150) 186 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.952917+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.952917+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.953399+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.953399+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.960740+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:48 vm01 bash[28152]: audit 2026-03-09T15:52:47.960740+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: cephadm 2026-03-09T15:52:47.934215+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: cephadm 2026-03-09T15:52:47.934215+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.942178+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.942178+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.949889+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.949889+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.431 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.951215+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.951215+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.951763+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.951763+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: cephadm 2026-03-09T15:52:47.952121+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Adjusting osd_memory_target on vm09 to 227.8M 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: cephadm 2026-03-09T15:52:47.952121+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Adjusting osd_memory_target on vm09 to 227.8M 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: cephadm 2026-03-09T15:52:47.952555+0000 mgr.y (mgr.14150) 186 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: cephadm 2026-03-09T15:52:47.952555+0000 mgr.y (mgr.14150) 186 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.952917+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:49.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.952917+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:49.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.953399+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:49.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.953399+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:52:49.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.960740+0000 mon.a 
(mon.0) 562 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:49.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:48 vm01 bash[20728]: audit 2026-03-09T15:52:47.960740+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:52:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:49 vm09 bash[22983]: cluster 2026-03-09T15:52:49.080079+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:49 vm09 bash[22983]: cluster 2026-03-09T15:52:49.080079+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:49 vm01 bash[28152]: cluster 2026-03-09T15:52:49.080079+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:49 vm01 bash[28152]: cluster 2026-03-09T15:52:49.080079+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:49 vm01 bash[20728]: cluster 2026-03-09T15:52:49.080079+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:49 vm01 bash[20728]: cluster 2026-03-09T15:52:49.080079+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:51.857 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:52:52.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.023+0000 7fd1fe791640 1 -- 192.168.123.109:0/4052162759 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd1f8100d90 msgr2=0x7fd1f8101190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:52:52.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.023+0000 7fd1fe791640 1 --2- 192.168.123.109:0/4052162759 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd1f8100d90 0x7fd1f8101190 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fd1e8009a60 tx=0x7fd1e802f210 comp rx=0 tx=0).stop 2026-03-09T15:52:52.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 -- 192.168.123.109:0/4052162759 shutdown_connections 2026-03-09T15:52:52.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 --2- 192.168.123.109:0/4052162759 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fd1f81047b0 0x7fd1f810b040 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:52.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 --2- 192.168.123.109:0/4052162759 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd1f8103df0 0x7fd1f8104270 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:52.031 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 --2- 192.168.123.109:0/4052162759 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd1f8100d90 0x7fd1f8101190 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:52.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 -- 192.168.123.109:0/4052162759 >> 192.168.123.109:0/4052162759 conn(0x7fd1f80fc880 msgr2=0x7fd1f80fece0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:52:52.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 -- 192.168.123.109:0/4052162759 shutdown_connections 2026-03-09T15:52:52.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 -- 192.168.123.109:0/4052162759 wait complete. 2026-03-09T15:52:52.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 Processor -- start 2026-03-09T15:52:52.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 -- start start 2026-03-09T15:52:52.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd1f8100d90 0x7fd1f81036d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:52:52.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fd1f8103df0 0x7fd1f8101d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:52:52.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd1f81047b0 0x7fd1f8102260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:52:52.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fd1f810dae0 con 0x7fd1f81047b0 2026-03-09T15:52:52.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7fd1f810d960 con 0x7fd1f8100d90 2026-03-09T15:52:52.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7fd1f810dc60 con 0x7fd1f8103df0 2026-03-09T15:52:52.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1f7fff640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd1f8100d90 0x7fd1f81036d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:52:52.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1f7fff640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd1f8100d90 0x7fd1f81036d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.109:57016/0 (socket says 192.168.123.109:57016) 2026-03-09T15:52:52.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1f7fff640 1 -- 
192.168.123.109:0/120403398 learned_addr learned my addr 192.168.123.109:0/120403398 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:52:52.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fcd07640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd1f81047b0 0x7fd1f8102260 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:52:52.033 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1f7fff640 1 -- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fd1f8103df0 msgr2=0x7fd1f8101d20 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:52:52.033 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1f7fff640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fd1f8103df0 0x7fd1f8101d20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:52.033 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1f7fff640 1 -- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd1f81047b0 msgr2=0x7fd1f8102260 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:52:52.033 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1f7fff640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd1f81047b0 0x7fd1f8102260 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:52:52.033 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1f7fff640 1 -- 192.168.123.109:0/120403398 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd1f8102af0 con 0x7fd1f8100d90 2026-03-09T15:52:52.033 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1f7fff640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd1f8100d90 0x7fd1f81036d0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7fd1e8002410 tx=0x7fd1e80028b0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:52:52.033 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1f57fa640 1 -- 192.168.123.109:0/120403398 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd1e8046070 con 0x7fd1f8100d90 2026-03-09T15:52:52.034 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1f57fa640 1 -- 192.168.123.109:0/120403398 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fd1e8004430 con 0x7fd1f8100d90 2026-03-09T15:52:52.034 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 -- 192.168.123.109:0/120403398 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd1f81a44b0 con 0x7fd1f8100d90 2026-03-09T15:52:52.034 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.027+0000 7fd1fe791640 1 -- 192.168.123.109:0/120403398 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd1f81a4a70 con 0x7fd1f8100d90 2026-03-09T15:52:52.039 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.031+0000 7fd1f57fa640 1 -- 192.168.123.109:0/120403398 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd1e80049c0 con 0x7fd1f8100d90 2026-03-09T15:52:52.039 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.031+0000 7fd1fe791640 1 -- 192.168.123.109:0/120403398 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd1bc005180 con 0x7fd1f8100d90 2026-03-09T15:52:52.039 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.031+0000 7fd1f57fa640 1 -- 192.168.123.109:0/120403398 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7fd1e80053e0 con 0x7fd1f8100d90 2026-03-09T15:52:52.039 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.031+0000 7fd1f57fa640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fd1cc077610 0x7fd1cc079ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:52:52.039 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.031+0000 7fd1f57fa640 1 -- 192.168.123.109:0/120403398 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(39..39 src has 1..39) ==== 4403+0+0 (secure 0 0 0) 0x7fd1e80bdfc0 con 0x7fd1f8100d90 2026-03-09T15:52:52.039 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.035+0000 7fd1f57fa640 1 -- 192.168.123.109:0/120403398 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd1e808b490 con 0x7fd1f8100d90 2026-03-09T15:52:52.043 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.035+0000 7fd1f77fe640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fd1cc077610 0x7fd1cc079ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:52:52.043 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.035+0000 7fd1f77fe640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fd1cc077610 0x7fd1cc079ad0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fd1f8103450 tx=0x7fd1e4006d20 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:52:52.155 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:52:52.151+0000 7fd1fe791640 1 -- 192.168.123.109:0/120403398 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7fd1bc002bf0 con 0x7fd1cc077610 2026-03-09T15:52:52.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:52 vm01 bash[28152]: cluster 2026-03-09T15:52:51.080398+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:52.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:52 vm01 bash[28152]: cluster 2026-03-09T15:52:51.080398+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:52.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:52 vm01 bash[20728]: cluster 
2026-03-09T15:52:51.080398+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:52.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:52 vm01 bash[20728]: cluster 2026-03-09T15:52:51.080398+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:52 vm09 bash[22983]: cluster 2026-03-09T15:52:51.080398+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:52 vm09 bash[22983]: cluster 2026-03-09T15:52:51.080398+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:53.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:53 vm01 bash[28152]: audit 2026-03-09T15:52:52.156618+0000 mgr.y (mgr.14150) 189 : audit [DBG] from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:53.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:53 vm01 bash[28152]: audit 2026-03-09T15:52:52.156618+0000 mgr.y (mgr.14150) 189 : audit [DBG] from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:53.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:53 vm01 bash[28152]: audit 2026-03-09T15:52:52.158455+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:53 vm01 bash[28152]: audit 2026-03-09T15:52:52.158455+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:53 vm01 bash[28152]: audit 2026-03-09T15:52:52.160131+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:53 vm01 bash[28152]: audit 2026-03-09T15:52:52.160131+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:53 vm01 bash[28152]: audit 2026-03-09T15:52:52.160608+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:53 vm01 bash[28152]: audit 2026-03-09T15:52:52.160608+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:53 vm01 bash[20728]: audit 2026-03-09T15:52:52.156618+0000 mgr.y (mgr.14150) 189 : audit [DBG] 
from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:53 vm01 bash[20728]: audit 2026-03-09T15:52:52.156618+0000 mgr.y (mgr.14150) 189 : audit [DBG] from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:53 vm01 bash[20728]: audit 2026-03-09T15:52:52.158455+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:53 vm01 bash[20728]: audit 2026-03-09T15:52:52.158455+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:53 vm01 bash[20728]: audit 2026-03-09T15:52:52.160131+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:53 vm01 bash[20728]: audit 2026-03-09T15:52:52.160131+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:53 vm01 bash[20728]: audit 2026-03-09T15:52:52.160608+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:53.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:53 vm01 bash[20728]: audit 2026-03-09T15:52:52.160608+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:53 vm09 bash[22983]: audit 2026-03-09T15:52:52.156618+0000 mgr.y (mgr.14150) 189 : audit [DBG] from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:53 vm09 bash[22983]: audit 2026-03-09T15:52:52.156618+0000 mgr.y (mgr.14150) 189 : audit [DBG] from='client.24226 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:52:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:53 vm09 bash[22983]: audit 2026-03-09T15:52:52.158455+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:52:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:53 vm09 bash[22983]: audit 2026-03-09T15:52:52.158455+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:52:53.633 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:53 vm09 bash[22983]: audit 2026-03-09T15:52:52.160131+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:53 vm09 bash[22983]: audit 2026-03-09T15:52:52.160131+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:52:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:53 vm09 bash[22983]: audit 2026-03-09T15:52:52.160608+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:53 vm09 bash[22983]: audit 2026-03-09T15:52:52.160608+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:52:54.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:54 vm01 bash[28152]: cluster 2026-03-09T15:52:53.080734+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:54.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:54 vm01 bash[28152]: cluster 2026-03-09T15:52:53.080734+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:54.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:54 vm01 bash[20728]: cluster 2026-03-09T15:52:53.080734+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:54.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:54 vm01 bash[20728]: cluster 2026-03-09T15:52:53.080734+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:54 vm09 bash[22983]: cluster 2026-03-09T15:52:53.080734+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:54 vm09 bash[22983]: cluster 2026-03-09T15:52:53.080734+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:56.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:56 vm01 bash[28152]: cluster 2026-03-09T15:52:55.081082+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:56.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:56 vm01 bash[28152]: cluster 2026-03-09T15:52:55.081082+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:56.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:56 vm01 bash[20728]: cluster 2026-03-09T15:52:55.081082+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:56.430 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:56 vm01 bash[20728]: cluster 2026-03-09T15:52:55.081082+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:56 vm09 bash[22983]: cluster 2026-03-09T15:52:55.081082+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:56 vm09 bash[22983]: cluster 2026-03-09T15:52:55.081082+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:58.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:58 vm01 bash[28152]: cluster 2026-03-09T15:52:57.081451+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:58.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:58 vm01 bash[28152]: cluster 2026-03-09T15:52:57.081451+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:58.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:58 vm01 bash[28152]: audit 2026-03-09T15:52:57.651906+0000 mon.a (mon.0) 566 : audit [INF] from='client.? 192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]: dispatch 2026-03-09T15:52:58.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:58 vm01 bash[28152]: audit 2026-03-09T15:52:57.651906+0000 mon.a (mon.0) 566 : audit [INF] from='client.? 192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]: dispatch 2026-03-09T15:52:58.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:58 vm01 bash[28152]: audit 2026-03-09T15:52:57.656235+0000 mon.a (mon.0) 567 : audit [INF] from='client.? 192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]': finished 2026-03-09T15:52:58.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:58 vm01 bash[28152]: audit 2026-03-09T15:52:57.656235+0000 mon.a (mon.0) 567 : audit [INF] from='client.? 
192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]': finished 2026-03-09T15:52:58.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:58 vm01 bash[28152]: cluster 2026-03-09T15:52:57.659811+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T15:52:58.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:58 vm01 bash[28152]: cluster 2026-03-09T15:52:57.659811+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T15:52:58.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:58 vm01 bash[28152]: audit 2026-03-09T15:52:57.662233+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:52:58.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:58 vm01 bash[28152]: audit 2026-03-09T15:52:57.662233+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:52:58.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:58 vm01 bash[20728]: cluster 2026-03-09T15:52:57.081451+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:58.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:58 vm01 bash[20728]: cluster 2026-03-09T15:52:57.081451+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:58.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:58 vm01 bash[20728]: audit 2026-03-09T15:52:57.651906+0000 mon.a (mon.0) 566 : audit [INF] from='client.? 192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]: dispatch 2026-03-09T15:52:58.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:58 vm01 bash[20728]: audit 2026-03-09T15:52:57.651906+0000 mon.a (mon.0) 566 : audit [INF] from='client.? 192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]: dispatch 2026-03-09T15:52:58.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:58 vm01 bash[20728]: audit 2026-03-09T15:52:57.656235+0000 mon.a (mon.0) 567 : audit [INF] from='client.? 192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]': finished 2026-03-09T15:52:58.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:58 vm01 bash[20728]: audit 2026-03-09T15:52:57.656235+0000 mon.a (mon.0) 567 : audit [INF] from='client.? 
192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]': finished 2026-03-09T15:52:58.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:58 vm01 bash[20728]: cluster 2026-03-09T15:52:57.659811+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T15:52:58.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:58 vm01 bash[20728]: cluster 2026-03-09T15:52:57.659811+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T15:52:58.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:58 vm01 bash[20728]: audit 2026-03-09T15:52:57.662233+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:52:58.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:58 vm01 bash[20728]: audit 2026-03-09T15:52:57.662233+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:52:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:58 vm09 bash[22983]: cluster 2026-03-09T15:52:57.081451+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:58 vm09 bash[22983]: cluster 2026-03-09T15:52:57.081451+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:52:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:58 vm09 bash[22983]: audit 2026-03-09T15:52:57.651906+0000 mon.a (mon.0) 566 : audit [INF] from='client.? 192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]: dispatch 2026-03-09T15:52:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:58 vm09 bash[22983]: audit 2026-03-09T15:52:57.651906+0000 mon.a (mon.0) 566 : audit [INF] from='client.? 192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]: dispatch 2026-03-09T15:52:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:58 vm09 bash[22983]: audit 2026-03-09T15:52:57.656235+0000 mon.a (mon.0) 567 : audit [INF] from='client.? 192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]': finished 2026-03-09T15:52:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:58 vm09 bash[22983]: audit 2026-03-09T15:52:57.656235+0000 mon.a (mon.0) 567 : audit [INF] from='client.? 
192.168.123.109:0/1750272174' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a674ed00-04ea-4bd3-ab96-9d977052e290"}]': finished 2026-03-09T15:52:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:58 vm09 bash[22983]: cluster 2026-03-09T15:52:57.659811+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T15:52:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:58 vm09 bash[22983]: cluster 2026-03-09T15:52:57.659811+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T15:52:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:58 vm09 bash[22983]: audit 2026-03-09T15:52:57.662233+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:52:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:58 vm09 bash[22983]: audit 2026-03-09T15:52:57.662233+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:52:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:59 vm09 bash[22983]: audit 2026-03-09T15:52:58.334656+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.109:0/1766390723' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:52:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:52:59 vm09 bash[22983]: audit 2026-03-09T15:52:58.334656+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.109:0/1766390723' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:52:59.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:59 vm01 bash[28152]: audit 2026-03-09T15:52:58.334656+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.109:0/1766390723' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:52:59.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:52:59 vm01 bash[28152]: audit 2026-03-09T15:52:58.334656+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.109:0/1766390723' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:52:59.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:59 vm01 bash[20728]: audit 2026-03-09T15:52:58.334656+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.109:0/1766390723' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:52:59.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:52:59 vm01 bash[20728]: audit 2026-03-09T15:52:58.334656+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 
192.168.123.109:0/1766390723' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:53:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:00 vm09 bash[22983]: cluster 2026-03-09T15:52:59.081736+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:00 vm09 bash[22983]: cluster 2026-03-09T15:52:59.081736+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:00.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:00 vm01 bash[28152]: cluster 2026-03-09T15:52:59.081736+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:00.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:00 vm01 bash[28152]: cluster 2026-03-09T15:52:59.081736+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:00.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:00 vm01 bash[20728]: cluster 2026-03-09T15:52:59.081736+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:00.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:00 vm01 bash[20728]: cluster 2026-03-09T15:52:59.081736+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:02 vm09 bash[22983]: cluster 2026-03-09T15:53:01.082080+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:02 vm09 bash[22983]: cluster 2026-03-09T15:53:01.082080+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:02.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:02 vm01 bash[28152]: cluster 2026-03-09T15:53:01.082080+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:02.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:02 vm01 bash[28152]: cluster 2026-03-09T15:53:01.082080+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:02.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:02 vm01 bash[20728]: cluster 2026-03-09T15:53:01.082080+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:02.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:02 vm01 bash[20728]: cluster 2026-03-09T15:53:01.082080+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:04 vm09 bash[22983]: cluster 2026-03-09T15:53:03.082442+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:04.633 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:04 vm09 bash[22983]: cluster 2026-03-09T15:53:03.082442+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:04.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:04 vm01 bash[28152]: cluster 2026-03-09T15:53:03.082442+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:04.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:04 vm01 bash[28152]: cluster 2026-03-09T15:53:03.082442+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:04.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:04 vm01 bash[20728]: cluster 2026-03-09T15:53:03.082442+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:04.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:04 vm01 bash[20728]: cluster 2026-03-09T15:53:03.082442+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:06 vm09 bash[22983]: cluster 2026-03-09T15:53:05.082782+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:06 vm09 bash[22983]: cluster 2026-03-09T15:53:05.082782+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:06.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:06 vm01 bash[28152]: cluster 2026-03-09T15:53:05.082782+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:06.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:06 vm01 bash[28152]: cluster 2026-03-09T15:53:05.082782+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:06.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:06 vm01 bash[20728]: cluster 2026-03-09T15:53:05.082782+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:06.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:06 vm01 bash[20728]: cluster 2026-03-09T15:53:05.082782+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:07.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:07 vm09 bash[22983]: audit 2026-03-09T15:53:06.741068+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T15:53:07.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:07 vm09 bash[22983]: audit 2026-03-09T15:53:06.741068+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T15:53:07.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:07 vm09 bash[22983]: audit 
2026-03-09T15:53:06.741721+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:07.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:07 vm09 bash[22983]: audit 2026-03-09T15:53:06.741721+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:07 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:07.633 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:53:07 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:07.633 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:53:07 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:07.633 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:53:07 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:53:07.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:07 vm01 bash[28152]: audit 2026-03-09T15:53:06.741068+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T15:53:07.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:07 vm01 bash[28152]: audit 2026-03-09T15:53:06.741068+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T15:53:07.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:07 vm01 bash[28152]: audit 2026-03-09T15:53:06.741721+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:07.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:07 vm01 bash[28152]: audit 2026-03-09T15:53:06.741721+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:07.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:07 vm01 bash[20728]: audit 2026-03-09T15:53:06.741068+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T15:53:07.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:07 vm01 bash[20728]: audit 2026-03-09T15:53:06.741068+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T15:53:07.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:07 vm01 bash[20728]: audit 2026-03-09T15:53:06.741721+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:07.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:07 vm01 bash[20728]: audit 2026-03-09T15:53:06.741721+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:07.912 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:07 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:07.913 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:53:07 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:07.913 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:53:07 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:07.913 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:53:07 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:08 vm09 bash[22983]: cephadm 2026-03-09T15:53:06.742295+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm09 2026-03-09T15:53:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:08 vm09 bash[22983]: cephadm 2026-03-09T15:53:06.742295+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm09 2026-03-09T15:53:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:08 vm09 bash[22983]: cluster 2026-03-09T15:53:07.083161+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:08 vm09 bash[22983]: cluster 2026-03-09T15:53:07.083161+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:08 vm09 bash[22983]: audit 2026-03-09T15:53:07.905817+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:08 vm09 bash[22983]: audit 2026-03-09T15:53:07.905817+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:08 vm09 bash[22983]: audit 2026-03-09T15:53:07.912718+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:08 vm09 bash[22983]: audit 2026-03-09T15:53:07.912718+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:08 vm09 bash[22983]: audit 2026-03-09T15:53:07.919804+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:08 vm09 bash[22983]: audit 2026-03-09T15:53:07.919804+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:08 vm01 bash[28152]: cephadm 2026-03-09T15:53:06.742295+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm09 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:08 vm01 bash[28152]: cephadm 2026-03-09T15:53:06.742295+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm09 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:08 vm01 bash[28152]: cluster 
2026-03-09T15:53:07.083161+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:08 vm01 bash[28152]: cluster 2026-03-09T15:53:07.083161+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:08 vm01 bash[28152]: audit 2026-03-09T15:53:07.905817+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:08 vm01 bash[28152]: audit 2026-03-09T15:53:07.905817+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:08 vm01 bash[28152]: audit 2026-03-09T15:53:07.912718+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:08 vm01 bash[28152]: audit 2026-03-09T15:53:07.912718+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:08 vm01 bash[28152]: audit 2026-03-09T15:53:07.919804+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:08 vm01 bash[28152]: audit 2026-03-09T15:53:07.919804+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:08 vm01 bash[20728]: cephadm 2026-03-09T15:53:06.742295+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm09 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:08 vm01 bash[20728]: cephadm 2026-03-09T15:53:06.742295+0000 mgr.y (mgr.14150) 197 : cephadm [INF] Deploying daemon osd.6 on vm09 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:08 vm01 bash[20728]: cluster 2026-03-09T15:53:07.083161+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:08 vm01 bash[20728]: cluster 2026-03-09T15:53:07.083161+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:08 vm01 bash[20728]: audit 2026-03-09T15:53:07.905817+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:08 vm01 bash[20728]: audit 2026-03-09T15:53:07.905817+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:08 vm01 bash[20728]: audit 2026-03-09T15:53:07.912718+0000 mon.a 
(mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:08 vm01 bash[20728]: audit 2026-03-09T15:53:07.912718+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:08 vm01 bash[20728]: audit 2026-03-09T15:53:07.919804+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:08.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:08 vm01 bash[20728]: audit 2026-03-09T15:53:07.919804+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:10 vm09 bash[22983]: cluster 2026-03-09T15:53:09.083515+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:10 vm09 bash[22983]: cluster 2026-03-09T15:53:09.083515+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:10 vm01 bash[28152]: cluster 2026-03-09T15:53:09.083515+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:10 vm01 bash[28152]: cluster 2026-03-09T15:53:09.083515+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:10 vm01 bash[20728]: cluster 2026-03-09T15:53:09.083515+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:10 vm01 bash[20728]: cluster 2026-03-09T15:53:09.083515+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:12 vm09 bash[22983]: cluster 2026-03-09T15:53:11.083829+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:12 vm09 bash[22983]: cluster 2026-03-09T15:53:11.083829+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:12 vm09 bash[22983]: audit 2026-03-09T15:53:11.611567+0000 mon.b (mon.1) 16 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T15:53:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:12 vm09 bash[22983]: audit 2026-03-09T15:53:11.611567+0000 mon.b (mon.1) 16 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["6"]}]: dispatch 2026-03-09T15:53:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:12 vm09 bash[22983]: audit 2026-03-09T15:53:11.612550+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T15:53:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:12 vm09 bash[22983]: audit 2026-03-09T15:53:11.612550+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:12 vm01 bash[28152]: cluster 2026-03-09T15:53:11.083829+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:12 vm01 bash[28152]: cluster 2026-03-09T15:53:11.083829+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:12 vm01 bash[28152]: audit 2026-03-09T15:53:11.611567+0000 mon.b (mon.1) 16 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:12 vm01 bash[28152]: audit 2026-03-09T15:53:11.611567+0000 mon.b (mon.1) 16 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:12 vm01 bash[28152]: audit 2026-03-09T15:53:11.612550+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:12 vm01 bash[28152]: audit 2026-03-09T15:53:11.612550+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:12 vm01 bash[20728]: cluster 2026-03-09T15:53:11.083829+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:12 vm01 bash[20728]: cluster 2026-03-09T15:53:11.083829+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:12 vm01 bash[20728]: audit 2026-03-09T15:53:11.611567+0000 mon.b (mon.1) 16 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:12 vm01 bash[20728]: audit 2026-03-09T15:53:11.611567+0000 mon.b (mon.1) 16 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:12 vm01 bash[20728]: audit 2026-03-09T15:53:11.612550+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T15:53:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:12 vm01 bash[20728]: audit 2026-03-09T15:53:11.612550+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T15:53:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:13 vm09 bash[22983]: audit 2026-03-09T15:53:12.255232+0000 mon.a (mon.0) 576 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T15:53:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:13 vm09 bash[22983]: audit 2026-03-09T15:53:12.255232+0000 mon.a (mon.0) 576 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T15:53:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:13 vm09 bash[22983]: audit 2026-03-09T15:53:12.258160+0000 mon.b (mon.1) 17 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:13 vm09 bash[22983]: audit 2026-03-09T15:53:12.258160+0000 mon.b (mon.1) 17 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:13 vm09 bash[22983]: cluster 2026-03-09T15:53:12.260214+0000 mon.a (mon.0) 577 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T15:53:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:13 vm09 bash[22983]: cluster 2026-03-09T15:53:12.260214+0000 mon.a (mon.0) 577 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T15:53:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:13 vm09 bash[22983]: audit 2026-03-09T15:53:12.260705+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:13 vm09 bash[22983]: audit 2026-03-09T15:53:12.260705+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:13 vm09 bash[22983]: audit 2026-03-09T15:53:12.260844+0000 mon.a (mon.0) 579 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:13 vm09 bash[22983]: audit 2026-03-09T15:53:12.260844+0000 mon.a (mon.0) 579 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: 
dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:13 vm01 bash[28152]: audit 2026-03-09T15:53:12.255232+0000 mon.a (mon.0) 576 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:13 vm01 bash[28152]: audit 2026-03-09T15:53:12.255232+0000 mon.a (mon.0) 576 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:13 vm01 bash[28152]: audit 2026-03-09T15:53:12.258160+0000 mon.b (mon.1) 17 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:13 vm01 bash[28152]: audit 2026-03-09T15:53:12.258160+0000 mon.b (mon.1) 17 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:13 vm01 bash[28152]: cluster 2026-03-09T15:53:12.260214+0000 mon.a (mon.0) 577 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:13 vm01 bash[28152]: cluster 2026-03-09T15:53:12.260214+0000 mon.a (mon.0) 577 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:13 vm01 bash[28152]: audit 2026-03-09T15:53:12.260705+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:13 vm01 bash[28152]: audit 2026-03-09T15:53:12.260705+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:13 vm01 bash[28152]: audit 2026-03-09T15:53:12.260844+0000 mon.a (mon.0) 579 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:13 vm01 bash[28152]: audit 2026-03-09T15:53:12.260844+0000 mon.a (mon.0) 579 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:13 vm01 bash[20728]: audit 2026-03-09T15:53:12.255232+0000 mon.a (mon.0) 576 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:13 vm01 bash[20728]: audit 2026-03-09T15:53:12.255232+0000 mon.a (mon.0) 576 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T15:53:13.680 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:13 vm01 bash[20728]: audit 2026-03-09T15:53:12.258160+0000 mon.b (mon.1) 17 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:13 vm01 bash[20728]: audit 2026-03-09T15:53:12.258160+0000 mon.b (mon.1) 17 : audit [INF] from='osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:13 vm01 bash[20728]: cluster 2026-03-09T15:53:12.260214+0000 mon.a (mon.0) 577 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:13 vm01 bash[20728]: cluster 2026-03-09T15:53:12.260214+0000 mon.a (mon.0) 577 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:13 vm01 bash[20728]: audit 2026-03-09T15:53:12.260705+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:13 vm01 bash[20728]: audit 2026-03-09T15:53:12.260705+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:13 vm01 bash[20728]: audit 2026-03-09T15:53:12.260844+0000 mon.a (mon.0) 579 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:13 vm01 bash[20728]: audit 2026-03-09T15:53:12.260844+0000 mon.a (mon.0) 579 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: cluster 2026-03-09T15:53:13.084141+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: cluster 2026-03-09T15:53:13.084141+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: audit 2026-03-09T15:53:13.265856+0000 mon.a (mon.0) 580 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: audit 2026-03-09T15:53:13.265856+0000 mon.a (mon.0) 580 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:14.633 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: cluster 2026-03-09T15:53:13.277115+0000 mon.a (mon.0) 581 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T15:53:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: cluster 2026-03-09T15:53:13.277115+0000 mon.a (mon.0) 581 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T15:53:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: audit 2026-03-09T15:53:13.277664+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: audit 2026-03-09T15:53:13.277664+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: audit 2026-03-09T15:53:14.169991+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: audit 2026-03-09T15:53:14.169991+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: audit 2026-03-09T15:53:14.176597+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:14 vm09 bash[22983]: audit 2026-03-09T15:53:14.176597+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: cluster 2026-03-09T15:53:13.084141+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: cluster 2026-03-09T15:53:13.084141+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: audit 2026-03-09T15:53:13.265856+0000 mon.a (mon.0) 580 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: audit 2026-03-09T15:53:13.265856+0000 mon.a (mon.0) 580 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: cluster 2026-03-09T15:53:13.277115+0000 mon.a (mon.0) 581 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: cluster 2026-03-09T15:53:13.277115+0000 mon.a (mon.0) 581 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: audit 2026-03-09T15:53:13.277664+0000 mon.a 
(mon.0) 582 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: audit 2026-03-09T15:53:13.277664+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: audit 2026-03-09T15:53:14.169991+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: audit 2026-03-09T15:53:14.169991+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: audit 2026-03-09T15:53:14.176597+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:14 vm01 bash[28152]: audit 2026-03-09T15:53:14.176597+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:14.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: cluster 2026-03-09T15:53:13.084141+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:14.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: cluster 2026-03-09T15:53:13.084141+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:14.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: audit 2026-03-09T15:53:13.265856+0000 mon.a (mon.0) 580 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:14.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: audit 2026-03-09T15:53:13.265856+0000 mon.a (mon.0) 580 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:14.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: cluster 2026-03-09T15:53:13.277115+0000 mon.a (mon.0) 581 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T15:53:14.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: cluster 2026-03-09T15:53:13.277115+0000 mon.a (mon.0) 581 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T15:53:14.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: audit 2026-03-09T15:53:13.277664+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:14.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: audit 2026-03-09T15:53:13.277664+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:14.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: audit 
2026-03-09T15:53:14.169991+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:14.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: audit 2026-03-09T15:53:14.169991+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:14.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: audit 2026-03-09T15:53:14.176597+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:14.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:14 vm01 bash[20728]: audit 2026-03-09T15:53:14.176597+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:15.345 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.339+0000 7fd1f57fa640 1 -- 192.168.123.109:0/120403398 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7fd1bc002bf0 con 0x7fd1cc077610 2026-03-09T15:53:15.345 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 6 on host 'vm09' 2026-03-09T15:53:15.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 -- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fd1cc077610 msgr2=0x7fd1cc079ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:15.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fd1cc077610 0x7fd1cc079ad0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fd1f8103450 tx=0x7fd1e4006d20 comp rx=0 tx=0).stop 2026-03-09T15:53:15.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 -- 192.168.123.109:0/120403398 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd1f8100d90 msgr2=0x7fd1f81036d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:15.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd1f8100d90 0x7fd1f81036d0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7fd1e8002410 tx=0x7fd1e80028b0 comp rx=0 tx=0).stop 2026-03-09T15:53:15.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 -- 192.168.123.109:0/120403398 shutdown_connections 2026-03-09T15:53:15.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fd1f81047b0 0x7fd1f8102260 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:15.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fd1cc077610 0x7fd1cc079ad0 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:15.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fd1f8103df0 0x7fd1f8101d20 
unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:15.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 --2- 192.168.123.109:0/120403398 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd1f8100d90 0x7fd1f81036d0 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:15.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 -- 192.168.123.109:0/120403398 >> 192.168.123.109:0/120403398 conn(0x7fd1f80fc880 msgr2=0x7fd1f80fe660 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:53:15.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 -- 192.168.123.109:0/120403398 shutdown_connections 2026-03-09T15:53:15.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:15.343+0000 7fd1fe791640 1 -- 192.168.123.109:0/120403398 wait complete. 2026-03-09T15:53:15.438 DEBUG:teuthology.orchestra.run.vm09:osd.6> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.6.service 2026-03-09T15:53:15.440 INFO:tasks.cephadm:Deploying osd.7 on vm09 with /dev/vdb... 2026-03-09T15:53:15.440 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- lvm zap /dev/vdb 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: cluster 2026-03-09T15:53:12.640954+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: cluster 2026-03-09T15:53:12.640954+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: cluster 2026-03-09T15:53:12.641045+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: cluster 2026-03-09T15:53:12.641045+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: cluster 2026-03-09T15:53:14.276887+0000 mon.a (mon.0) 585 : cluster [INF] osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] boot 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: cluster 2026-03-09T15:53:14.276887+0000 mon.a (mon.0) 585 : cluster [INF] osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] boot 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: cluster 2026-03-09T15:53:14.278380+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: cluster 2026-03-09T15:53:14.278380+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: audit 2026-03-09T15:53:14.280330+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: audit 
2026-03-09T15:53:14.280330+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: audit 2026-03-09T15:53:14.588307+0000 mon.a (mon.0) 588 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: audit 2026-03-09T15:53:14.588307+0000 mon.a (mon.0) 588 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: audit 2026-03-09T15:53:14.588954+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: audit 2026-03-09T15:53:14.588954+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: audit 2026-03-09T15:53:14.595858+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: audit 2026-03-09T15:53:14.595858+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: cluster 2026-03-09T15:53:15.281010+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T15:53:15.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:15 vm09 bash[22983]: cluster 2026-03-09T15:53:15.281010+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: cluster 2026-03-09T15:53:12.640954+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: cluster 2026-03-09T15:53:12.640954+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: cluster 2026-03-09T15:53:12.641045+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: cluster 2026-03-09T15:53:12.641045+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: cluster 2026-03-09T15:53:14.276887+0000 mon.a (mon.0) 585 : cluster [INF] osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] boot 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: cluster 2026-03-09T15:53:14.276887+0000 mon.a (mon.0) 585 : cluster [INF] osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] boot 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 
bash[28152]: cluster 2026-03-09T15:53:14.278380+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: cluster 2026-03-09T15:53:14.278380+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: audit 2026-03-09T15:53:14.280330+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: audit 2026-03-09T15:53:14.280330+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: audit 2026-03-09T15:53:14.588307+0000 mon.a (mon.0) 588 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: audit 2026-03-09T15:53:14.588307+0000 mon.a (mon.0) 588 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: audit 2026-03-09T15:53:14.588954+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: audit 2026-03-09T15:53:14.588954+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: audit 2026-03-09T15:53:14.595858+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: audit 2026-03-09T15:53:14.595858+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: cluster 2026-03-09T15:53:15.281010+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T15:53:15.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:15 vm01 bash[28152]: cluster 2026-03-09T15:53:15.281010+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: cluster 2026-03-09T15:53:12.640954+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: cluster 2026-03-09T15:53:12.640954+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: cluster 2026-03-09T15:53:12.641045+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
15:53:15 vm01 bash[20728]: cluster 2026-03-09T15:53:12.641045+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: cluster 2026-03-09T15:53:14.276887+0000 mon.a (mon.0) 585 : cluster [INF] osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] boot 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: cluster 2026-03-09T15:53:14.276887+0000 mon.a (mon.0) 585 : cluster [INF] osd.6 [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] boot 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: cluster 2026-03-09T15:53:14.278380+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: cluster 2026-03-09T15:53:14.278380+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: audit 2026-03-09T15:53:14.280330+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: audit 2026-03-09T15:53:14.280330+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: audit 2026-03-09T15:53:14.588307+0000 mon.a (mon.0) 588 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: audit 2026-03-09T15:53:14.588307+0000 mon.a (mon.0) 588 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: audit 2026-03-09T15:53:14.588954+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: audit 2026-03-09T15:53:14.588954+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: audit 2026-03-09T15:53:14.595858+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: audit 2026-03-09T15:53:14.595858+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: cluster 2026-03-09T15:53:15.281010+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T15:53:15.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:15 vm01 bash[20728]: cluster 2026-03-09T15:53:15.281010+0000 mon.a (mon.0) 
591 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T15:53:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:16 vm09 bash[22983]: cluster 2026-03-09T15:53:15.084477+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:16 vm09 bash[22983]: cluster 2026-03-09T15:53:15.084477+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:16 vm09 bash[22983]: audit 2026-03-09T15:53:15.328680+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:16 vm09 bash[22983]: audit 2026-03-09T15:53:15.328680+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:16 vm09 bash[22983]: audit 2026-03-09T15:53:15.336429+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:16 vm09 bash[22983]: audit 2026-03-09T15:53:15.336429+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:16 vm09 bash[22983]: audit 2026-03-09T15:53:15.343589+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:16 vm09 bash[22983]: audit 2026-03-09T15:53:15.343589+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:16 vm01 bash[28152]: cluster 2026-03-09T15:53:15.084477+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:16 vm01 bash[28152]: cluster 2026-03-09T15:53:15.084477+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:16 vm01 bash[28152]: audit 2026-03-09T15:53:15.328680+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:16 vm01 bash[28152]: audit 2026-03-09T15:53:15.328680+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:16 vm01 bash[28152]: audit 2026-03-09T15:53:15.336429+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:16 vm01 bash[28152]: audit 2026-03-09T15:53:15.336429+0000 mon.a (mon.0) 593 : audit [INF] 
from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:16 vm01 bash[28152]: audit 2026-03-09T15:53:15.343589+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:16 vm01 bash[28152]: audit 2026-03-09T15:53:15.343589+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:16 vm01 bash[20728]: cluster 2026-03-09T15:53:15.084477+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:16 vm01 bash[20728]: cluster 2026-03-09T15:53:15.084477+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:16 vm01 bash[20728]: audit 2026-03-09T15:53:15.328680+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:16 vm01 bash[20728]: audit 2026-03-09T15:53:15.328680+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:16 vm01 bash[20728]: audit 2026-03-09T15:53:15.336429+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:16 vm01 bash[20728]: audit 2026-03-09T15:53:15.336429+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:16 vm01 bash[20728]: audit 2026-03-09T15:53:15.343589+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:16.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:16 vm01 bash[20728]: audit 2026-03-09T15:53:15.343589+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:17 vm09 bash[22983]: cluster 2026-03-09T15:53:16.606949+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T15:53:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:17 vm09 bash[22983]: cluster 2026-03-09T15:53:16.606949+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T15:53:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:17 vm09 bash[22983]: cluster 2026-03-09T15:53:17.084784+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:17 vm09 bash[22983]: cluster 2026-03-09T15:53:17.084784+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:17 vm01 bash[28152]: cluster 
2026-03-09T15:53:16.606949+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T15:53:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:17 vm01 bash[28152]: cluster 2026-03-09T15:53:16.606949+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T15:53:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:17 vm01 bash[28152]: cluster 2026-03-09T15:53:17.084784+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:17 vm01 bash[28152]: cluster 2026-03-09T15:53:17.084784+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:17 vm01 bash[20728]: cluster 2026-03-09T15:53:16.606949+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T15:53:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:17 vm01 bash[20728]: cluster 2026-03-09T15:53:16.606949+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T15:53:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:17 vm01 bash[20728]: cluster 2026-03-09T15:53:17.084784+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:17 vm01 bash[20728]: cluster 2026-03-09T15:53:17.084784+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:20.138 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:53:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:20 vm01 bash[28152]: cluster 2026-03-09T15:53:19.085092+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:20 vm01 bash[28152]: cluster 2026-03-09T15:53:19.085092+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:20 vm01 bash[20728]: cluster 2026-03-09T15:53:19.085092+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:20 vm01 bash[20728]: cluster 2026-03-09T15:53:19.085092+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:20.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:20 vm09 bash[22983]: cluster 2026-03-09T15:53:19.085092+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:20.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:20 vm09 bash[22983]: cluster 2026-03-09T15:53:19.085092+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:21.962 INFO:teuthology.orchestra.run.vm09.stdout: 
2026-03-09T15:53:21.975 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch daemon add osd vm09:/dev/vdb 2026-03-09T15:53:22.220 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: cluster 2026-03-09T15:53:21.085421+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:22.220 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: cluster 2026-03-09T15:53:21.085421+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:22.220 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: cephadm 2026-03-09T15:53:21.209823+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:22.220 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: cephadm 2026-03-09T15:53:21.209823+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:22.220 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.216709+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.220 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.216709+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.220 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.223279+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.223279+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.224556+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.224556+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.225111+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.225111+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.225601+0000 mon.a 
(mon.0) 600 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.225601+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: cephadm 2026-03-09T15:53:21.225862+0000 mgr.y (mgr.14150) 207 : cephadm [INF] Adjusting osd_memory_target on vm09 to 151.9M 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: cephadm 2026-03-09T15:53:21.225862+0000 mgr.y (mgr.14150) 207 : cephadm [INF] Adjusting osd_memory_target on vm09 to 151.9M 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: cephadm 2026-03-09T15:53:21.226244+0000 mgr.y (mgr.14150) 208 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: cephadm 2026-03-09T15:53:21.226244+0000 mgr.y (mgr.14150) 208 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.226579+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.226579+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.226987+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.226987+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.231383+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.221 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:22 vm09 bash[22983]: audit 2026-03-09T15:53:21.231383+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: cluster 2026-03-09T15:53:21.085421+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: cluster 2026-03-09T15:53:21.085421+0000 mgr.y (mgr.14150) 205 : 
cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: cephadm 2026-03-09T15:53:21.209823+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: cephadm 2026-03-09T15:53:21.209823+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.216709+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.216709+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.223279+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.223279+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.224556+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.224556+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.225111+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.225111+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.225601+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.225601+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: cephadm 2026-03-09T15:53:21.225862+0000 mgr.y (mgr.14150) 207 : cephadm [INF] Adjusting osd_memory_target on vm09 to 151.9M 
2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: cephadm 2026-03-09T15:53:21.225862+0000 mgr.y (mgr.14150) 207 : cephadm [INF] Adjusting osd_memory_target on vm09 to 151.9M 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: cephadm 2026-03-09T15:53:21.226244+0000 mgr.y (mgr.14150) 208 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: cephadm 2026-03-09T15:53:21.226244+0000 mgr.y (mgr.14150) 208 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.226579+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.226579+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.226987+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.226987+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.231383+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:22 vm01 bash[28152]: audit 2026-03-09T15:53:21.231383+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: cluster 2026-03-09T15:53:21.085421+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: cluster 2026-03-09T15:53:21.085421+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: cephadm 2026-03-09T15:53:21.209823+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: cephadm 2026-03-09T15:53:21.209823+0000 mgr.y (mgr.14150) 206 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.216709+0000 mon.a (mon.0) 596 : audit [INF] 
from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.216709+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.223279+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.223279+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.224556+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.224556+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.225111+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.225111+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.225601+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.225601+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: cephadm 2026-03-09T15:53:21.225862+0000 mgr.y (mgr.14150) 207 : cephadm [INF] Adjusting osd_memory_target on vm09 to 151.9M 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: cephadm 2026-03-09T15:53:21.225862+0000 mgr.y (mgr.14150) 207 : cephadm [INF] Adjusting osd_memory_target on vm09 to 151.9M 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: cephadm 2026-03-09T15:53:21.226244+0000 mgr.y (mgr.14150) 208 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: cephadm 2026-03-09T15:53:21.226244+0000 mgr.y (mgr.14150) 208 : cephadm [WRN] Unable to set 
osd_memory_target on vm09 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.226579+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.226579+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.226987+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.226987+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.231383+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:22.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:22 vm01 bash[20728]: audit 2026-03-09T15:53:21.231383+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:24 vm09 bash[22983]: cluster 2026-03-09T15:53:23.085673+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:24 vm09 bash[22983]: cluster 2026-03-09T15:53:23.085673+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:24.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:24 vm01 bash[28152]: cluster 2026-03-09T15:53:23.085673+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:24.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:24 vm01 bash[28152]: cluster 2026-03-09T15:53:23.085673+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:24.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:24 vm01 bash[20728]: cluster 2026-03-09T15:53:23.085673+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:24.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:24 vm01 bash[20728]: cluster 2026-03-09T15:53:23.085673+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:26.603 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:53:26.621 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:26 vm09 bash[22983]: cluster 
2026-03-09T15:53:25.085973+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:26.621 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:26 vm09 bash[22983]: cluster 2026-03-09T15:53:25.085973+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:26.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:26 vm01 bash[28152]: cluster 2026-03-09T15:53:25.085973+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:26.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:26 vm01 bash[28152]: cluster 2026-03-09T15:53:25.085973+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:26.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:26 vm01 bash[20728]: cluster 2026-03-09T15:53:25.085973+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:26.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:26 vm01 bash[20728]: cluster 2026-03-09T15:53:25.085973+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:26.784 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 -- 192.168.123.109:0/3273122136 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b48105f70 msgr2=0x7f6b481063f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:26.784 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 --2- 192.168.123.109:0/3273122136 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b48105f70 0x7f6b481063f0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f6b38009a30 tx=0x7f6b3802f240 comp rx=0 tx=0).stop 2026-03-09T15:53:26.784 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 -- 192.168.123.109:0/3273122136 shutdown_connections 2026-03-09T15:53:26.784 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 --2- 192.168.123.109:0/3273122136 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6b48106930 0x7f6b4810d1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:26.784 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 --2- 192.168.123.109:0/3273122136 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b48105f70 0x7f6b481063f0 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:26.784 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 --2- 192.168.123.109:0/3273122136 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6b48104d70 0x7f6b48105170 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:26.784 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 -- 192.168.123.109:0/3273122136 >> 192.168.123.109:0/3273122136 conn(0x7f6b48100520 msgr2=0x7f6b48102940 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:53:26.784 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 -- 192.168.123.109:0/3273122136 shutdown_connections 2026-03-09T15:53:26.784 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 -- 192.168.123.109:0/3273122136 wait complete. 2026-03-09T15:53:26.785 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 Processor -- start 2026-03-09T15:53:26.785 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 -- start start 2026-03-09T15:53:26.785 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b48104d70 0x7f6b4819c570 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:26.785 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4cffb640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b48104d70 0x7f6b4819c570 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:53:26.785 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4cffb640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b48104d70 0x7f6b4819c570 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.109:50512/0 (socket says 192.168.123.109:50512) 2026-03-09T15:53:26.785 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6b48105f70 0x7f6b4819cab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:26.786 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6b48106930 0x7f6b481a3b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:26.786 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f6b4810fd80 con 0x7f6b48105f70 2026-03-09T15:53:26.786 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f6b4810fc00 con 0x7f6b48104d70 2026-03-09T15:53:26.786 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4f286640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f6b4810ff00 con 0x7f6b48106930 2026-03-09T15:53:26.786 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4cffb640 1 -- 192.168.123.109:0/3387657296 learned_addr learned my addr 192.168.123.109:0/3387657296 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:53:26.786 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b3ffff640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6b48105f70 0x7f6b4819cab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:53:26.786 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.779+0000 7f6b4d7fc640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6b48106930 0x7f6b481a3b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:53:26.786 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b4cffb640 1 -- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6b48106930 msgr2=0x7f6b481a3b30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:26.786 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b4cffb640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6b48106930 0x7f6b481a3b30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:26.786 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b4cffb640 1 -- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6b48105f70 msgr2=0x7f6b4819cab0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:26.787 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b4cffb640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6b48105f70 0x7f6b4819cab0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:26.787 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b4cffb640 1 -- 192.168.123.109:0/3387657296 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6b481a4230 con 0x7f6b48104d70 2026-03-09T15:53:26.787 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b4cffb640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b48104d70 0x7f6b4819c570 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f6b3000b7d0 tx=0x7f6b3000bca0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:53:26.788 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b3dffb640 1 -- 192.168.123.109:0/3387657296 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6b30004270 con 0x7f6b48104d70 2026-03-09T15:53:26.788 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b3dffb640 1 -- 192.168.123.109:0/3387657296 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f6b30010070 con 0x7f6b48104d70 2026-03-09T15:53:26.788 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b3dffb640 1 -- 192.168.123.109:0/3387657296 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6b3000c9a0 con 0x7f6b48104d70 2026-03-09T15:53:26.788 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b4f286640 1 -- 192.168.123.109:0/3387657296 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6b481a4520 con 0x7f6b48104d70 2026-03-09T15:53:26.788 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b4f286640 1 -- 192.168.123.109:0/3387657296 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- 
mon_subscribe({osdmap=0}) -- 0x7f6b48077780 con 0x7f6b48104d70 2026-03-09T15:53:26.789 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b3dffb640 1 -- 192.168.123.109:0/3387657296 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f6b3000cb40 con 0x7f6b48104d70 2026-03-09T15:53:26.789 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.783+0000 7f6b4f286640 1 -- 192.168.123.109:0/3387657296 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6b10005180 con 0x7f6b48104d70 2026-03-09T15:53:26.792 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.787+0000 7f6b3dffb640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f6b24077640 0x7f6b24079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:26.792 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.787+0000 7f6b3dffb640 1 -- 192.168.123.109:0/3387657296 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(45..45 src has 1..45) ==== 4835+0+0 (secure 0 0 0) 0x7f6b300995a0 con 0x7f6b48104d70 2026-03-09T15:53:26.792 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.787+0000 7f6b3ffff640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f6b24077640 0x7f6b24079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:53:26.792 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.787+0000 7f6b3dffb640 1 -- 192.168.123.109:0/3387657296 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6b300669d0 con 0x7f6b48104d70 2026-03-09T15:53:26.798 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.791+0000 7f6b3ffff640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f6b24077640 0x7f6b24079b00 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f6b38002410 tx=0x7f6b380029c0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:53:26.894 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:26.887+0000 7f6b4f286640 1 -- 192.168.123.109:0/3387657296 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7f6b10002bf0 con 0x7f6b24077640 2026-03-09T15:53:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:27 vm09 bash[22983]: audit 2026-03-09T15:53:26.897310+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:53:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:27 vm09 bash[22983]: audit 2026-03-09T15:53:26.897310+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:53:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:27 vm09 bash[22983]: audit 2026-03-09T15:53:26.898847+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:53:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:27 vm09 bash[22983]: audit 2026-03-09T15:53:26.898847+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:53:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:27 vm09 bash[22983]: audit 2026-03-09T15:53:26.899299+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:27 vm09 bash[22983]: audit 2026-03-09T15:53:26.899299+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:27 vm01 bash[28152]: audit 2026-03-09T15:53:26.897310+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:27 vm01 bash[28152]: audit 2026-03-09T15:53:26.897310+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:27 vm01 bash[28152]: audit 2026-03-09T15:53:26.898847+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:27 vm01 bash[28152]: audit 2026-03-09T15:53:26.898847+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:27 vm01 bash[28152]: audit 2026-03-09T15:53:26.899299+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:27 vm01 bash[28152]: audit 2026-03-09T15:53:26.899299+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:27 vm01 bash[20728]: audit 2026-03-09T15:53:26.897310+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:27 vm01 bash[20728]: audit 2026-03-09T15:53:26.897310+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:27 vm01 bash[20728]: audit 2026-03-09T15:53:26.898847+0000 mon.a (mon.0) 605 : 
audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:27 vm01 bash[20728]: audit 2026-03-09T15:53:26.898847+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:27 vm01 bash[20728]: audit 2026-03-09T15:53:26.899299+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:27.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:27 vm01 bash[20728]: audit 2026-03-09T15:53:26.899299+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:28 vm09 bash[22983]: audit 2026-03-09T15:53:26.895901+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:53:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:28 vm09 bash[22983]: audit 2026-03-09T15:53:26.895901+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:53:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:28 vm09 bash[22983]: cluster 2026-03-09T15:53:27.086354+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:28 vm09 bash[22983]: cluster 2026-03-09T15:53:27.086354+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:28.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:28 vm01 bash[28152]: audit 2026-03-09T15:53:26.895901+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:53:28.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:28 vm01 bash[28152]: audit 2026-03-09T15:53:26.895901+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:53:28.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:28 vm01 bash[28152]: cluster 2026-03-09T15:53:27.086354+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:28.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:28 vm01 bash[28152]: cluster 2026-03-09T15:53:27.086354+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:28 vm01 bash[20728]: audit 2026-03-09T15:53:26.895901+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24253 -' 
entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:53:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:28 vm01 bash[20728]: audit 2026-03-09T15:53:26.895901+0000 mgr.y (mgr.14150) 211 : audit [DBG] from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:53:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:28 vm01 bash[20728]: cluster 2026-03-09T15:53:27.086354+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:28 vm01 bash[20728]: cluster 2026-03-09T15:53:27.086354+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:30 vm09 bash[22983]: cluster 2026-03-09T15:53:29.086729+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:30 vm09 bash[22983]: cluster 2026-03-09T15:53:29.086729+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:30 vm01 bash[28152]: cluster 2026-03-09T15:53:29.086729+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:30 vm01 bash[28152]: cluster 2026-03-09T15:53:29.086729+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:30.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:30 vm01 bash[20728]: cluster 2026-03-09T15:53:29.086729+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:30.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:30 vm01 bash[20728]: cluster 2026-03-09T15:53:29.086729+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:32.524 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:32 vm09 bash[22983]: cluster 2026-03-09T15:53:31.087043+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:32.524 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:32 vm09 bash[22983]: cluster 2026-03-09T15:53:31.087043+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:32.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:32 vm01 bash[28152]: cluster 2026-03-09T15:53:31.087043+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:32.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:32 vm01 bash[28152]: cluster 2026-03-09T15:53:31.087043+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 
188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:32.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:32 vm01 bash[20728]: cluster 2026-03-09T15:53:31.087043+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:32.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:32 vm01 bash[20728]: cluster 2026-03-09T15:53:31.087043+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: audit 2026-03-09T15:53:32.370033+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.109:0/2232331217' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: audit 2026-03-09T15:53:32.370033+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.109:0/2232331217' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: audit 2026-03-09T15:53:32.371493+0000 mon.a (mon.0) 607 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: audit 2026-03-09T15:53:32.371493+0000 mon.a (mon.0) 607 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: audit 2026-03-09T15:53:32.375311+0000 mon.a (mon.0) 608 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]': finished 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: audit 2026-03-09T15:53:32.375311+0000 mon.a (mon.0) 608 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]': finished 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: cluster 2026-03-09T15:53:32.383007+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: cluster 2026-03-09T15:53:32.383007+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: audit 2026-03-09T15:53:32.383290+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: audit 2026-03-09T15:53:32.383290+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: audit 2026-03-09T15:53:33.053550+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.109:0/1535644065' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:53:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:33 vm09 bash[22983]: audit 2026-03-09T15:53:33.053550+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.109:0/1535644065' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: audit 2026-03-09T15:53:32.370033+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.109:0/2232331217' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: audit 2026-03-09T15:53:32.370033+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.109:0/2232331217' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: audit 2026-03-09T15:53:32.371493+0000 mon.a (mon.0) 607 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: audit 2026-03-09T15:53:32.371493+0000 mon.a (mon.0) 607 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: audit 2026-03-09T15:53:32.375311+0000 mon.a (mon.0) 608 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]': finished 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: audit 2026-03-09T15:53:32.375311+0000 mon.a (mon.0) 608 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]': finished 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: cluster 2026-03-09T15:53:32.383007+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: cluster 2026-03-09T15:53:32.383007+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: audit 2026-03-09T15:53:32.383290+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: audit 2026-03-09T15:53:32.383290+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: audit 2026-03-09T15:53:33.053550+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.109:0/1535644065' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:33 vm01 bash[28152]: audit 2026-03-09T15:53:33.053550+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.109:0/1535644065' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: audit 2026-03-09T15:53:32.370033+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.109:0/2232331217' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: audit 2026-03-09T15:53:32.370033+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.109:0/2232331217' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: audit 2026-03-09T15:53:32.371493+0000 mon.a (mon.0) 607 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: audit 2026-03-09T15:53:32.371493+0000 mon.a (mon.0) 607 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: audit 2026-03-09T15:53:32.375311+0000 mon.a (mon.0) 608 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]': finished 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: audit 2026-03-09T15:53:32.375311+0000 mon.a (mon.0) 608 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b"}]': finished 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: cluster 2026-03-09T15:53:32.383007+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: cluster 2026-03-09T15:53:32.383007+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: audit 2026-03-09T15:53:32.383290+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: audit 2026-03-09T15:53:32.383290+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:33.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: audit 2026-03-09T15:53:33.053550+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.109:0/1535644065' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:53:33.681 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:33 vm01 bash[20728]: audit 2026-03-09T15:53:33.053550+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.109:0/1535644065' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T15:53:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:34 vm09 bash[22983]: cluster 2026-03-09T15:53:33.087325+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:34 vm09 bash[22983]: cluster 2026-03-09T15:53:33.087325+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:34.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:34 vm01 bash[28152]: cluster 2026-03-09T15:53:33.087325+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:34.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:34 vm01 bash[28152]: cluster 2026-03-09T15:53:33.087325+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:34.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:34 vm01 bash[20728]: cluster 2026-03-09T15:53:33.087325+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:34.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:34 vm01 bash[20728]: cluster 2026-03-09T15:53:33.087325+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:36 vm09 bash[22983]: cluster 2026-03-09T15:53:35.087581+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
15:53:36 vm09 bash[22983]: cluster 2026-03-09T15:53:35.087581+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:36.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:36 vm01 bash[28152]: cluster 2026-03-09T15:53:35.087581+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:36.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:36 vm01 bash[28152]: cluster 2026-03-09T15:53:35.087581+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:36.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:36 vm01 bash[20728]: cluster 2026-03-09T15:53:35.087581+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:36.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:36 vm01 bash[20728]: cluster 2026-03-09T15:53:35.087581+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:38 vm01 bash[28152]: cluster 2026-03-09T15:53:37.087890+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:38 vm01 bash[28152]: cluster 2026-03-09T15:53:37.087890+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:38.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:38 vm01 bash[20728]: cluster 2026-03-09T15:53:37.087890+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:38 vm01 bash[20728]: cluster 2026-03-09T15:53:37.087890+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:38.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:38 vm09 bash[22983]: cluster 2026-03-09T15:53:37.087890+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:38.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:38 vm09 bash[22983]: cluster 2026-03-09T15:53:37.087890+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:40.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:40 vm01 bash[28152]: cluster 2026-03-09T15:53:39.088131+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:40.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:40 vm01 bash[28152]: cluster 2026-03-09T15:53:39.088131+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:40.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:40 vm01 bash[20728]: cluster 2026-03-09T15:53:39.088131+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB 
data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:40.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:40 vm01 bash[20728]: cluster 2026-03-09T15:53:39.088131+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:40 vm09 bash[22983]: cluster 2026-03-09T15:53:39.088131+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:40.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:40 vm09 bash[22983]: cluster 2026-03-09T15:53:39.088131+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:42 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:42 vm09 bash[22983]: cluster 2026-03-09T15:53:41.088557+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:42 vm09 bash[22983]: cluster 2026-03-09T15:53:41.088557+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:42 vm09 bash[22983]: audit 2026-03-09T15:53:41.545435+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:42 vm09 bash[22983]: audit 2026-03-09T15:53:41.545435+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:42 vm09 bash[22983]: audit 2026-03-09T15:53:41.546240+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:42 vm09 bash[22983]: audit 2026-03-09T15:53:41.546240+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:42 vm09 bash[22983]: cephadm 2026-03-09T15:53:41.546932+0000 mgr.y (mgr.14150) 220 : cephadm [INF] Deploying daemon osd.7 on vm09 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:42 vm09 bash[22983]: cephadm 2026-03-09T15:53:41.546932+0000 mgr.y (mgr.14150) 220 : cephadm [INF] Deploying daemon osd.7 on vm09 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:42 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to 
use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:53:42 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:42.633 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:53:42 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:42.634 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:53:42 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:42.634 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:53:42 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:42.634 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:53:42 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:42.634 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:53:42 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:42.634 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 15:53:42 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
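The repeated systemd messages above come from the cephadm-generated unit template /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service, whose line 23 sets KillMode=none; the warning appears to be emitted when systemd re-parses the template (likely around the deploy of osd.7) and is attributed to each instantiated unit, which is why every per-daemon journalctl follower (mgr.x, osd.4-6, mon.b) records the same text. It is a deprecation warning, not an error. A minimal sketch of how one could confirm where the setting comes from on the affected host; the ssh invocation and grep are illustrative and not part of the test run:

    # Show the offending line of the cephadm unit template on vm09
    # (path and line number taken from the systemd warning above).
    ssh vm09 grep -n 'KillMode' \
        /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service
    # Expected, per the warning: line 23 containing 'KillMode=none'.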
2026-03-09T15:53:42.634 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 15:53:42 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:53:42.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:42 vm01 bash[28152]: cluster 2026-03-09T15:53:41.088557+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:42.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:42 vm01 bash[28152]: cluster 2026-03-09T15:53:41.088557+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:42 vm01 bash[28152]: audit 2026-03-09T15:53:41.545435+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:42 vm01 bash[28152]: audit 2026-03-09T15:53:41.545435+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:42 vm01 bash[28152]: audit 2026-03-09T15:53:41.546240+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:42 vm01 bash[28152]: audit 2026-03-09T15:53:41.546240+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:42 vm01 bash[28152]: cephadm 2026-03-09T15:53:41.546932+0000 mgr.y (mgr.14150) 220 : cephadm [INF] Deploying daemon osd.7 on vm09 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:42 vm01 bash[28152]: cephadm 2026-03-09T15:53:41.546932+0000 mgr.y (mgr.14150) 220 : cephadm [INF] Deploying daemon osd.7 on vm09 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:42 vm01 bash[20728]: cluster 2026-03-09T15:53:41.088557+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:42 vm01 bash[20728]: cluster 2026-03-09T15:53:41.088557+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:42 vm01 bash[20728]: audit 2026-03-09T15:53:41.545435+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:42 vm01 bash[20728]: audit 2026-03-09T15:53:41.545435+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:42 vm01 bash[20728]: audit 2026-03-09T15:53:41.546240+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:42 vm01 bash[20728]: audit 2026-03-09T15:53:41.546240+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:42 vm01 bash[20728]: cephadm 2026-03-09T15:53:41.546932+0000 mgr.y (mgr.14150) 220 : cephadm [INF] Deploying daemon osd.7 on vm09 2026-03-09T15:53:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:42 vm01 bash[20728]: cephadm 2026-03-09T15:53:41.546932+0000 mgr.y (mgr.14150) 220 : cephadm [INF] Deploying daemon osd.7 on vm09 2026-03-09T15:53:43.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:43 vm09 bash[22983]: audit 2026-03-09T15:53:42.715912+0000 mon.a (mon.0) 613 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:43.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:43 vm09 bash[22983]: audit 2026-03-09T15:53:42.715912+0000 mon.a (mon.0) 613 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:43.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:43 vm09 bash[22983]: audit 2026-03-09T15:53:42.725497+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:43.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:43 vm09 bash[22983]: audit 2026-03-09T15:53:42.725497+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:43.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:43 vm09 bash[22983]: audit 2026-03-09T15:53:42.737025+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:43.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:43 vm09 bash[22983]: audit 2026-03-09T15:53:42.737025+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:43.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:43 vm01 bash[28152]: audit 2026-03-09T15:53:42.715912+0000 mon.a (mon.0) 613 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:43 vm01 bash[28152]: audit 2026-03-09T15:53:42.715912+0000 mon.a (mon.0) 613 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:43 vm01 bash[28152]: audit 2026-03-09T15:53:42.725497+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:43 vm01 bash[28152]: audit 2026-03-09T15:53:42.725497+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:43 vm01 bash[28152]: audit 2026-03-09T15:53:42.737025+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:43 vm01 bash[28152]: audit 2026-03-09T15:53:42.737025+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:43 vm01 bash[20728]: audit 2026-03-09T15:53:42.715912+0000 mon.a (mon.0) 613 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:43 vm01 bash[20728]: audit 2026-03-09T15:53:42.715912+0000 mon.a (mon.0) 613 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:43 vm01 bash[20728]: audit 2026-03-09T15:53:42.725497+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:43 vm01 bash[20728]: audit 2026-03-09T15:53:42.725497+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:43 vm01 bash[20728]: audit 2026-03-09T15:53:42.737025+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:43 vm01 bash[20728]: audit 2026-03-09T15:53:42.737025+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:44 vm09 bash[22983]: cluster 2026-03-09T15:53:43.088893+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:44 vm09 bash[22983]: cluster 2026-03-09T15:53:43.088893+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:44 vm01 bash[28152]: cluster 2026-03-09T15:53:43.088893+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:44.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:44 vm01 bash[28152]: cluster 2026-03-09T15:53:43.088893+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:44 vm01 bash[20728]: cluster 2026-03-09T15:53:43.088893+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:44 vm01 bash[20728]: cluster 2026-03-09T15:53:43.088893+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 
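The audit trail above is the cephadm OSD-creation path exercised by the test: a client sends "orch daemon add osd" with svc_arg vm09:/dev/vdb to the active mgr (mgr.y), the mgr queries the mons (osd tree for destroyed IDs, auth get client.bootstrap-osd, config generate-minimal-conf), a fresh OSD id is registered with "osd new" under a new uuid via client.bootstrap-osd, and cephadm then deploys daemon osd.7 on vm09. A minimal sketch of the equivalent manual invocation and how one could follow its progress; these are standard cephadm orchestrator CLI commands, not taken verbatim from this run:

    # Ask the orchestrator to create an OSD on vm09's /dev/vdb
    # (same arguments as the mgr_command dispatched above).
    ceph orch daemon add osd vm09:/dev/vdb
    # Watch the new daemon appear.
    ceph orch ps --daemon-type osd
    # List the devices the orchestrator considers usable on each host.
    ceph orch device ls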
2026-03-09T15:53:46.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:46 vm09 bash[22983]: cluster 2026-03-09T15:53:45.089335+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:46.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:46 vm09 bash[22983]: cluster 2026-03-09T15:53:45.089335+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:46.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:46 vm09 bash[22983]: audit 2026-03-09T15:53:46.425227+0000 mon.b (mon.1) 20 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:46.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:46 vm09 bash[22983]: audit 2026-03-09T15:53:46.425227+0000 mon.b (mon.1) 20 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:46.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:46 vm09 bash[22983]: audit 2026-03-09T15:53:46.426564+0000 mon.a (mon.0) 616 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:46.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:46 vm09 bash[22983]: audit 2026-03-09T15:53:46.426564+0000 mon.a (mon.0) 616 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:46.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:46 vm01 bash[28152]: cluster 2026-03-09T15:53:45.089335+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:46.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:46 vm01 bash[28152]: cluster 2026-03-09T15:53:45.089335+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:46.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:46 vm01 bash[28152]: audit 2026-03-09T15:53:46.425227+0000 mon.b (mon.1) 20 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:46.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:46 vm01 bash[28152]: audit 2026-03-09T15:53:46.425227+0000 mon.b (mon.1) 20 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:46.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:46 vm01 bash[28152]: audit 2026-03-09T15:53:46.426564+0000 mon.a (mon.0) 616 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:46.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:46 vm01 bash[28152]: audit 2026-03-09T15:53:46.426564+0000 mon.a (mon.0) 616 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", 
"class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:46.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:46 vm01 bash[20728]: cluster 2026-03-09T15:53:45.089335+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:46.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:46 vm01 bash[20728]: cluster 2026-03-09T15:53:45.089335+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:46.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:46 vm01 bash[20728]: audit 2026-03-09T15:53:46.425227+0000 mon.b (mon.1) 20 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:46.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:46 vm01 bash[20728]: audit 2026-03-09T15:53:46.425227+0000 mon.b (mon.1) 20 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:46.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:46 vm01 bash[20728]: audit 2026-03-09T15:53:46.426564+0000 mon.a (mon.0) 616 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:46.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:46 vm01 bash[20728]: audit 2026-03-09T15:53:46.426564+0000 mon.a (mon.0) 616 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T15:53:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:47 vm09 bash[22983]: audit 2026-03-09T15:53:46.504369+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T15:53:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:47 vm09 bash[22983]: audit 2026-03-09T15:53:46.504369+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T15:53:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:47 vm09 bash[22983]: audit 2026-03-09T15:53:46.507677+0000 mon.b (mon.1) 21 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:47 vm09 bash[22983]: audit 2026-03-09T15:53:46.507677+0000 mon.b (mon.1) 21 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:47 vm09 bash[22983]: cluster 2026-03-09T15:53:46.509109+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T15:53:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:47 vm09 bash[22983]: cluster 2026-03-09T15:53:46.509109+0000 mon.a (mon.0) 618 : cluster [DBG] 
osdmap e47: 8 total, 7 up, 8 in 2026-03-09T15:53:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:47 vm09 bash[22983]: audit 2026-03-09T15:53:46.509960+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:47 vm09 bash[22983]: audit 2026-03-09T15:53:46.509960+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:47.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:47 vm09 bash[22983]: audit 2026-03-09T15:53:46.510122+0000 mon.a (mon.0) 620 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:47.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:47 vm09 bash[22983]: audit 2026-03-09T15:53:46.510122+0000 mon.a (mon.0) 620 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:47 vm01 bash[28152]: audit 2026-03-09T15:53:46.504369+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:47 vm01 bash[28152]: audit 2026-03-09T15:53:46.504369+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:47 vm01 bash[28152]: audit 2026-03-09T15:53:46.507677+0000 mon.b (mon.1) 21 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:47 vm01 bash[28152]: audit 2026-03-09T15:53:46.507677+0000 mon.b (mon.1) 21 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:47 vm01 bash[28152]: cluster 2026-03-09T15:53:46.509109+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:47 vm01 bash[28152]: cluster 2026-03-09T15:53:46.509109+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:47 vm01 bash[28152]: audit 2026-03-09T15:53:46.509960+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:47 vm01 bash[28152]: audit 2026-03-09T15:53:46.509960+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:47.930 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:47 vm01 bash[28152]: audit 2026-03-09T15:53:46.510122+0000 mon.a (mon.0) 620 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:47 vm01 bash[28152]: audit 2026-03-09T15:53:46.510122+0000 mon.a (mon.0) 620 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:47 vm01 bash[20728]: audit 2026-03-09T15:53:46.504369+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:47 vm01 bash[20728]: audit 2026-03-09T15:53:46.504369+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:47 vm01 bash[20728]: audit 2026-03-09T15:53:46.507677+0000 mon.b (mon.1) 21 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:47 vm01 bash[20728]: audit 2026-03-09T15:53:46.507677+0000 mon.b (mon.1) 21 : audit [INF] from='osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:47 vm01 bash[20728]: cluster 2026-03-09T15:53:46.509109+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:47 vm01 bash[20728]: cluster 2026-03-09T15:53:46.509109+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:47 vm01 bash[20728]: audit 2026-03-09T15:53:46.509960+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:47 vm01 bash[20728]: audit 2026-03-09T15:53:46.509960+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:47 vm01 bash[20728]: audit 2026-03-09T15:53:46.510122+0000 mon.a (mon.0) 620 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:53:47.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:47 vm01 bash[20728]: audit 2026-03-09T15:53:46.510122+0000 mon.a (mon.0) 620 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 
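The audit entries above show the standard boot-time CRUSH registration for osd.7: the OSD first reports its device class ("osd crush set-device-class", class hdd) and then places itself in the hierarchy with "osd crush create-or-move" at weight 0.0195 under host=vm09 / root=default. CRUSH weights are conventionally the device capacity expressed in TiB, which lines up with the cluster capacity in the pgmap growing from 140 GiB to 160 GiB once this eighth ~20 GiB OSD comes in. A minimal illustrative sketch of that arithmetic (the helper name and the exact rounding are assumptions, not Ceph code):

```python
# Illustrative arithmetic only (not Ceph code): CRUSH weights are conventionally
# the device capacity in TiB, formatted to four decimal places, which is where the
# 0.0195 in the create-or-move audit entries above comes from for a ~20 GiB volume.
def crush_weight_tib(size_bytes: int) -> float:
    return round(size_bytes / 2**40, 4)

# 20 GiB -> 20/1024 TiB = 0.01953..., i.e. 0.0195 as seen in the audit log.
assert abs(crush_weight_tib(20 * 2**30) - 0.0195) < 1e-9
```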
2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: cluster 2026-03-09T15:53:47.089735+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: cluster 2026-03-09T15:53:47.089735+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:47.507507+0000 mon.a (mon.0) 621 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:47.507507+0000 mon.a (mon.0) 621 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: cluster 2026-03-09T15:53:47.513946+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: cluster 2026-03-09T15:53:47.513946+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:47.515988+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:47.515988+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:47.516595+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:47.516595+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: cluster 2026-03-09T15:53:48.118722+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: cluster 2026-03-09T15:53:48.118722+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:48.119414+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:48.119414+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:48.423874+0000 mon.a (mon.0) 627 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:48.423874+0000 mon.a (mon.0) 627 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:48.517184+0000 mon.a (mon.0) 628 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.879 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:48 vm09 bash[22983]: audit 2026-03-09T15:53:48.517184+0000 mon.a (mon.0) 628 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: cluster 2026-03-09T15:53:47.089735+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: cluster 2026-03-09T15:53:47.089735+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 2026-03-09T15:53:47.507507+0000 mon.a (mon.0) 621 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 2026-03-09T15:53:47.507507+0000 mon.a (mon.0) 621 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: cluster 2026-03-09T15:53:47.513946+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: cluster 2026-03-09T15:53:47.513946+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 2026-03-09T15:53:47.515988+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 2026-03-09T15:53:47.515988+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 2026-03-09T15:53:47.516595+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 
2026-03-09T15:53:47.516595+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: cluster 2026-03-09T15:53:48.118722+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: cluster 2026-03-09T15:53:48.118722+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 2026-03-09T15:53:48.119414+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 2026-03-09T15:53:48.119414+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 2026-03-09T15:53:48.423874+0000 mon.a (mon.0) 627 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 2026-03-09T15:53:48.423874+0000 mon.a (mon.0) 627 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 2026-03-09T15:53:48.517184+0000 mon.a (mon.0) 628 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:48 vm01 bash[28152]: audit 2026-03-09T15:53:48.517184+0000 mon.a (mon.0) 628 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: cluster 2026-03-09T15:53:47.089735+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: cluster 2026-03-09T15:53:47.089735+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:47.507507+0000 mon.a (mon.0) 621 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:47.507507+0000 mon.a (mon.0) 621 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: cluster 2026-03-09T15:53:47.513946+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: 
cluster 2026-03-09T15:53:47.513946+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:47.515988+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:47.515988+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:47.516595+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:47.516595+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: cluster 2026-03-09T15:53:48.118722+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: cluster 2026-03-09T15:53:48.118722+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:48.119414+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:48.119414+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:48.423874+0000 mon.a (mon.0) 627 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:48.423874+0000 mon.a (mon.0) 627 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:48.517184+0000 mon.a (mon.0) 628 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:48.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:48 vm01 bash[20728]: audit 2026-03-09T15:53:48.517184+0000 mon.a (mon.0) 628 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:50.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: cluster 2026-03-09T15:53:47.432484+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:50.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: cluster 2026-03-09T15:53:47.432484+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:50.133 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: cluster 2026-03-09T15:53:47.432545+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:50.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: cluster 2026-03-09T15:53:47.432545+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:50.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: cluster 2026-03-09T15:53:49.090080+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:50.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: cluster 2026-03-09T15:53:49.090080+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:50.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: cluster 2026-03-09T15:53:49.130404+0000 mon.a (mon.0) 629 : cluster [INF] osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] boot 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: cluster 2026-03-09T15:53:49.130404+0000 mon.a (mon.0) 629 : cluster [INF] osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] boot 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: cluster 2026-03-09T15:53:49.130536+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: cluster 2026-03-09T15:53:49.130536+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.130759+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.130759+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.185814+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.185814+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.193818+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.193818+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.195296+0000 mon.a (mon.0) 634 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
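Right after osd.7 boots (osdmap e50: 8 total, 8 up, 8 in), the mgr issues "config generate-minimal-conf" and "auth get client.admin" against the monitor; this is the pattern the cephadm mgr module typically uses to refresh the minimal /etc/ceph/ceph.conf and admin keyring it distributes to managed hosts. A hedged sketch of fetching that same minimal config with the standard CLI, wrapped in Python to keep one language across these examples (the helper name is illustrative, not part of cephadm):

```python
# Sketch: retrieve the minimal client config that the audit entries above show the
# mgr requesting ("config generate-minimal-conf"). Assumes a reachable cluster and
# an admin keyring in the default locations; the helper name is an assumption.
import subprocess

def minimal_ceph_conf() -> str:
    out = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout  # a stripped-down ceph.conf containing fsid and mon_host

if __name__ == "__main__":
    print(minimal_ceph_conf())
```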
2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.195296+0000 mon.a (mon.0) 634 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.195985+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.195985+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.202348+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:50 vm09 bash[22983]: audit 2026-03-09T15:53:49.202348+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.171 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b3dffb640 1 -- 192.168.123.109:0/3387657296 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f6b10002bf0 con 0x7f6b24077640 2026-03-09T15:53:50.173 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 7 on host 'vm09' 2026-03-09T15:53:50.173 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 -- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f6b24077640 msgr2=0x7f6b24079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:50.173 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f6b24077640 0x7f6b24079b00 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f6b38002410 tx=0x7f6b380029c0 comp rx=0 tx=0).stop 2026-03-09T15:53:50.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 -- 192.168.123.109:0/3387657296 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b48104d70 msgr2=0x7f6b4819c570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:50.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b48104d70 0x7f6b4819c570 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f6b3000b7d0 tx=0x7f6b3000bca0 comp rx=0 tx=0).stop 2026-03-09T15:53:50.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 -- 192.168.123.109:0/3387657296 shutdown_connections 2026-03-09T15:53:50.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f6b24077640 0x7f6b24079b00 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:50.174 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6b48106930 0x7f6b481a3b30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:50.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6b48105f70 0x7f6b4819cab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:50.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 --2- 192.168.123.109:0/3387657296 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b48104d70 0x7f6b4819c570 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:50.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 -- 192.168.123.109:0/3387657296 >> 192.168.123.109:0/3387657296 conn(0x7f6b48100520 msgr2=0x7f6b48101dd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:53:50.175 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 -- 192.168.123.109:0/3387657296 shutdown_connections 2026-03-09T15:53:50.175 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:53:50.167+0000 7f6b4f286640 1 -- 192.168.123.109:0/3387657296 wait complete. 2026-03-09T15:53:50.270 DEBUG:teuthology.orchestra.run.vm09:osd.7> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.7.service 2026-03-09T15:53:50.271 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 2026-03-09T15:53:50.271 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd stat -f json 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: cluster 2026-03-09T15:53:47.432484+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: cluster 2026-03-09T15:53:47.432484+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: cluster 2026-03-09T15:53:47.432545+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: cluster 2026-03-09T15:53:47.432545+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: cluster 2026-03-09T15:53:49.090080+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: cluster 2026-03-09T15:53:49.090080+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: cluster 2026-03-09T15:53:49.130404+0000 mon.a (mon.0) 629 : cluster [INF] osd.7 
[v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] boot 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: cluster 2026-03-09T15:53:49.130404+0000 mon.a (mon.0) 629 : cluster [INF] osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] boot 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: cluster 2026-03-09T15:53:49.130536+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: cluster 2026-03-09T15:53:49.130536+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.130759+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.130759+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.185814+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.185814+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.193818+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.193818+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.195296+0000 mon.a (mon.0) 634 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.195296+0000 mon.a (mon.0) 634 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.195985+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.195985+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.202348+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 
192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:50 vm01 bash[20728]: audit 2026-03-09T15:53:49.202348+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: cluster 2026-03-09T15:53:47.432484+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: cluster 2026-03-09T15:53:47.432484+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: cluster 2026-03-09T15:53:47.432545+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: cluster 2026-03-09T15:53:47.432545+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: cluster 2026-03-09T15:53:49.090080+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: cluster 2026-03-09T15:53:49.090080+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: cluster 2026-03-09T15:53:49.130404+0000 mon.a (mon.0) 629 : cluster [INF] osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] boot 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: cluster 2026-03-09T15:53:49.130404+0000 mon.a (mon.0) 629 : cluster [INF] osd.7 [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] boot 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: cluster 2026-03-09T15:53:49.130536+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: cluster 2026-03-09T15:53:49.130536+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.130759+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.130759+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.185814+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.185814+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.430 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.193818+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.193818+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.195296+0000 mon.a (mon.0) 634 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.195296+0000 mon.a (mon.0) 634 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.195985+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.195985+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.202348+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:50 vm01 bash[28152]: audit 2026-03-09T15:53:49.202348+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:51 vm01 bash[28152]: cluster 2026-03-09T15:53:50.124852+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:51 vm01 bash[28152]: cluster 2026-03-09T15:53:50.124852+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:51 vm01 bash[28152]: audit 2026-03-09T15:53:50.129802+0000 mon.a (mon.0) 638 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:51 vm01 bash[28152]: audit 2026-03-09T15:53:50.129802+0000 mon.a (mon.0) 638 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:51 vm01 bash[28152]: audit 2026-03-09T15:53:50.142222+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:51 vm01 bash[28152]: audit 2026-03-09T15:53:50.142222+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:51 
vm01 bash[28152]: audit 2026-03-09T15:53:50.170193+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:51 vm01 bash[28152]: audit 2026-03-09T15:53:50.170193+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:51 vm01 bash[20728]: cluster 2026-03-09T15:53:50.124852+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:51 vm01 bash[20728]: cluster 2026-03-09T15:53:50.124852+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:51 vm01 bash[20728]: audit 2026-03-09T15:53:50.129802+0000 mon.a (mon.0) 638 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:51 vm01 bash[20728]: audit 2026-03-09T15:53:50.129802+0000 mon.a (mon.0) 638 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:51 vm01 bash[20728]: audit 2026-03-09T15:53:50.142222+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:51 vm01 bash[20728]: audit 2026-03-09T15:53:50.142222+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:51 vm01 bash[20728]: audit 2026-03-09T15:53:50.170193+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:51 vm01 bash[20728]: audit 2026-03-09T15:53:50.170193+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:51 vm09 bash[22983]: cluster 2026-03-09T15:53:50.124852+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T15:53:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:51 vm09 bash[22983]: cluster 2026-03-09T15:53:50.124852+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T15:53:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:51 vm09 bash[22983]: audit 2026-03-09T15:53:50.129802+0000 mon.a (mon.0) 638 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:51 vm09 bash[22983]: audit 2026-03-09T15:53:50.129802+0000 mon.a (mon.0) 638 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:53:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:51 vm09 bash[22983]: audit 2026-03-09T15:53:50.142222+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:51 vm09 
bash[22983]: audit 2026-03-09T15:53:50.142222+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:51 vm09 bash[22983]: audit 2026-03-09T15:53:50.170193+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:51 vm09 bash[22983]: audit 2026-03-09T15:53:50.170193+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:52.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:52 vm01 bash[28152]: cluster 2026-03-09T15:53:51.090455+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:52.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:52 vm01 bash[28152]: cluster 2026-03-09T15:53:51.090455+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:52.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:52 vm01 bash[28152]: cluster 2026-03-09T15:53:51.179079+0000 mon.a (mon.0) 641 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T15:53:52.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:52 vm01 bash[28152]: cluster 2026-03-09T15:53:51.179079+0000 mon.a (mon.0) 641 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T15:53:52.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:52 vm01 bash[20728]: cluster 2026-03-09T15:53:51.090455+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:52.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:52 vm01 bash[20728]: cluster 2026-03-09T15:53:51.090455+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:52.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:52 vm01 bash[20728]: cluster 2026-03-09T15:53:51.179079+0000 mon.a (mon.0) 641 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T15:53:52.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:52 vm01 bash[20728]: cluster 2026-03-09T15:53:51.179079+0000 mon.a (mon.0) 641 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T15:53:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:52 vm09 bash[22983]: cluster 2026-03-09T15:53:51.090455+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:52 vm09 bash[22983]: cluster 2026-03-09T15:53:51.090455+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:52 vm09 bash[22983]: cluster 2026-03-09T15:53:51.179079+0000 mon.a (mon.0) 641 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T15:53:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:52 vm09 bash[22983]: cluster 2026-03-09T15:53:51.179079+0000 mon.a (mon.0) 641 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T15:53:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:54 vm09 bash[22983]: cluster 2026-03-09T15:53:53.090788+0000 mgr.y (mgr.14150) 226 : 
cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:54 vm09 bash[22983]: cluster 2026-03-09T15:53:53.090788+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:54.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:54 vm01 bash[28152]: cluster 2026-03-09T15:53:53.090788+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:54.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:54 vm01 bash[28152]: cluster 2026-03-09T15:53:53.090788+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:54.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:54 vm01 bash[20728]: cluster 2026-03-09T15:53:53.090788+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:54.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:54 vm01 bash[20728]: cluster 2026-03-09T15:53:53.090788+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:54.909 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:53:55.094 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 -- 192.168.123.101:0/4000008657 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3388075a40 msgr2=0x7f3388075ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:55.094 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 --2- 192.168.123.101:0/4000008657 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3388075a40 0x7f3388075ea0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f3370009960 tx=0x7f337002f160 comp rx=0 tx=0).stop 2026-03-09T15:53:55.094 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 -- 192.168.123.101:0/4000008657 shutdown_connections 2026-03-09T15:53:55.094 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 --2- 192.168.123.101:0/4000008657 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f33881064c0 0x7f33881113d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:55.094 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 --2- 192.168.123.101:0/4000008657 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3388075a40 0x7f3388075ea0 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:55.094 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 --2- 192.168.123.101:0/4000008657 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f33880770a0 0x7f3388075500 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:55.094 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 -- 192.168.123.101:0/4000008657 >> 192.168.123.101:0/4000008657 conn(0x7f33880fe290 msgr2=0x7f33881006b0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T15:53:55.094 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 -- 192.168.123.101:0/4000008657 shutdown_connections 2026-03-09T15:53:55.094 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 -- 192.168.123.101:0/4000008657 wait complete. 2026-03-09T15:53:55.094 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 Processor -- start 2026-03-09T15:53:55.095 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 -- start start 2026-03-09T15:53:55.095 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3388075a40 0x7f33881a08e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:55.095 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f33880770a0 0x7f33881a0e20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:55.095 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338caba640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3388075a40 0x7f33881a08e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:53:55.095 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338caba640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3388075a40 0x7f33881a08e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:32830/0 (socket says 192.168.123.101:32830) 2026-03-09T15:53:55.095 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f33881064c0 0x7f33881a7ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:55.095 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f3388114200 con 0x7f3388075a40 2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f3388114080 con 0x7f33880770a0 2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338ed45640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f3388114380 con 0x7f33881064c0 2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338caba640 1 -- 192.168.123.101:0/1597187984 learned_addr learned my addr 192.168.123.101:0/1597187984 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338caba640 1 -- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f33881064c0 msgr2=0x7f33881a7ea0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338caba640 1 
--2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f33881064c0 0x7f33881a7ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.090+0000 7f338caba640 1 -- 192.168.123.101:0/1597187984 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f33880770a0 msgr2=0x7f33881a0e20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f337ffff640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f33880770a0 0x7f33881a0e20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f338caba640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f33880770a0 0x7f33881a0e20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f338caba640 1 -- 192.168.123.101:0/1597187984 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f33881a85a0 con 0x7f3388075a40 2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f337ffff640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f33880770a0 0x7f33881a0e20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
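The orchestration side of this step is visible just above: after "Created osd(s) 7 on host 'vm09'", the cephadm task logs "Waiting for 8 OSDs to come up..." and runs `ceph osd stat -f json` through `cephadm shell` on vm01. A minimal sketch of such a wait loop, assuming the flat JSON layout recent Ceph releases emit for `osd stat` (this is illustrative, not the actual tasks.cephadm implementation):

```python
# Sketch of a "wait for N OSDs up" poll, mirroring the command shape in the log:
# run 'ceph osd stat -f json' inside a cephadm shell so the container has the
# cluster conf and admin keyring. Function name and timeout are assumptions.
import json
import subprocess
import time

def wait_for_osds_up(expected: int, fsid: str, image: str, timeout: float = 600.0) -> None:
    cmd = [
        "sudo", "cephadm", "--image", image, "shell",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", fsid, "--",
        "ceph", "osd", "stat", "-f", "json",
    ]
    deadline = time.time() + timeout
    stat = {}
    while time.time() < deadline:
        out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
        stat = json.loads(out)
        # 'num_up_osds' is the field name recent Ceph releases emit for 'osd stat';
        # adjust if your version nests it under an 'osdmap' key.
        if int(stat.get("num_up_osds", 0)) >= expected:
            return
        time.sleep(5)
    raise TimeoutError(f"only {stat.get('num_up_osds', 0)} of {expected} OSDs came up")
```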
2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f338caba640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3388075a40 0x7f33881a08e0 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f337800ed60 tx=0x7f337800c6a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:53:55.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f337dffb640 1 -- 192.168.123.101:0/1597187984 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f33780040d0 con 0x7f3388075a40 2026-03-09T15:53:55.097 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f337dffb640 1 -- 192.168.123.101:0/1597187984 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f33780026e0 con 0x7f3388075a40 2026-03-09T15:53:55.097 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f337dffb640 1 -- 192.168.123.101:0/1597187984 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3378010670 con 0x7f3388075a40 2026-03-09T15:53:55.097 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f338ed45640 1 -- 192.168.123.101:0/1597187984 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f33881a8890 con 0x7f3388075a40 2026-03-09T15:53:55.097 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f338ed45640 1 -- 192.168.123.101:0/1597187984 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f33881a8d50 con 0x7f3388075a40 2026-03-09T15:53:55.099 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f338ed45640 1 -- 192.168.123.101:0/1597187984 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3350005180 con 0x7f3388075a40 2026-03-09T15:53:55.103 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.094+0000 7f337dffb640 1 -- 192.168.123.101:0/1597187984 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f3378004270 con 0x7f3388075a40 2026-03-09T15:53:55.103 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.098+0000 7f337dffb640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f335c077640 0x7f335c079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:55.103 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.098+0000 7f337dffb640 1 -- 192.168.123.101:0/1597187984 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(52..52 src has 1..52) ==== 5267+0+0 (secure 0 0 0) 0x7f3378099760 con 0x7f3388075a40 2026-03-09T15:53:55.103 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.098+0000 7f337ffff640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f335c077640 0x7f335c079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:53:55.104 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.102+0000 7f337dffb640 1 -- 192.168.123.101:0/1597187984 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 
72+0+195034 (secure 0 0 0) 0x7f3378062c20 con 0x7f3388075a40 2026-03-09T15:53:55.104 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.102+0000 7f337ffff640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f335c077640 0x7f335c079b00 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f3370002410 tx=0x7f3370002830 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:53:55.209 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.206+0000 7f338ed45640 1 -- 192.168.123.101:0/1597187984 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) -- 0x7f3350005470 con 0x7f3388075a40 2026-03-09T15:53:55.210 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.206+0000 7f337dffb640 1 -- 192.168.123.101:0/1597187984 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v52) ==== 74+0+130 (secure 0 0 0) 0x7f33780668d0 con 0x7f3388075a40 2026-03-09T15:53:55.210 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:53:55.212 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 -- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f335c077640 msgr2=0x7f335c079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:55.212 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f335c077640 0x7f335c079b00 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f3370002410 tx=0x7f3370002830 comp rx=0 tx=0).stop 2026-03-09T15:53:55.212 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 -- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3388075a40 msgr2=0x7f33881a08e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:55.212 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3388075a40 0x7f33881a08e0 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f337800ed60 tx=0x7f337800c6a0 comp rx=0 tx=0).stop 2026-03-09T15:53:55.212 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 -- 192.168.123.101:0/1597187984 shutdown_connections 2026-03-09T15:53:55.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f335c077640 0x7f335c079b00 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:55.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f33881064c0 0x7f33881a7ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:55.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f33880770a0 0x7f33881a0e20 unknown :-1 
s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:55.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 --2- 192.168.123.101:0/1597187984 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3388075a40 0x7f33881a08e0 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:55.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 -- 192.168.123.101:0/1597187984 >> 192.168.123.101:0/1597187984 conn(0x7f33880fe290 msgr2=0x7f3388102460 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:53:55.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 -- 192.168.123.101:0/1597187984 shutdown_connections 2026-03-09T15:53:55.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:55.210+0000 7f338ed45640 1 -- 192.168.123.101:0/1597187984 wait complete. 2026-03-09T15:53:55.264 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":52,"num_osds":8,"num_up_osds":8,"osd_up_since":1773071629,"num_in_osds":8,"osd_in_since":1773071612,"num_remapped_pgs":0} 2026-03-09T15:53:55.265 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd dump --format=json 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: cluster 2026-03-09T15:53:55.091069+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: cluster 2026-03-09T15:53:55.091069+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.211167+0000 mon.a (mon.0) 642 : audit [DBG] from='client.? 192.168.123.101:0/1597187984' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.211167+0000 mon.a (mon.0) 642 : audit [DBG] from='client.? 
192.168.123.101:0/1597187984' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.843431+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.843431+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.848987+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.848987+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.851381+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.851381+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.852194+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.852194+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.852917+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.852917+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.853525+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.853525+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.633 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.854895+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.854895+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.855461+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.855461+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.859197+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:56 vm09 bash[22983]: audit 2026-03-09T15:53:55.859197+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: cluster 2026-03-09T15:53:55.091069+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: cluster 2026-03-09T15:53:55.091069+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.211167+0000 mon.a (mon.0) 642 : audit [DBG] from='client.? 192.168.123.101:0/1597187984' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.211167+0000 mon.a (mon.0) 642 : audit [DBG] from='client.? 
192.168.123.101:0/1597187984' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.843431+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.843431+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.848987+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.848987+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.851381+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.851381+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.852194+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.852194+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.852917+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.852917+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.853525+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.853525+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.680 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.854895+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.854895+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.855461+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.855461+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.859197+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:56 vm01 bash[28152]: audit 2026-03-09T15:53:55.859197+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: cluster 2026-03-09T15:53:55.091069+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: cluster 2026-03-09T15:53:55.091069+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.211167+0000 mon.a (mon.0) 642 : audit [DBG] from='client.? 192.168.123.101:0/1597187984' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.211167+0000 mon.a (mon.0) 642 : audit [DBG] from='client.? 
192.168.123.101:0/1597187984' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.843431+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.843431+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.848987+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.848987+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.851381+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.851381+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.852194+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.852194+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.852917+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.852917+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.853525+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.853525+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:53:56.680 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.854895+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.854895+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.855461+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.855461+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.859197+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:56.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:56 vm01 bash[20728]: audit 2026-03-09T15:53:55.859197+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:53:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:57 vm09 bash[22983]: cephadm 2026-03-09T15:53:55.837321+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:57 vm09 bash[22983]: cephadm 2026-03-09T15:53:55.837321+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:57 vm09 bash[22983]: cephadm 2026-03-09T15:53:55.853972+0000 mgr.y (mgr.14150) 229 : cephadm [INF] Adjusting osd_memory_target on vm09 to 113.9M 2026-03-09T15:53:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:57 vm09 bash[22983]: cephadm 2026-03-09T15:53:55.853972+0000 mgr.y (mgr.14150) 229 : cephadm [INF] Adjusting osd_memory_target on vm09 to 113.9M 2026-03-09T15:53:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:57 vm09 bash[22983]: cephadm 2026-03-09T15:53:55.854543+0000 mgr.y (mgr.14150) 230 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T15:53:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:57 vm09 bash[22983]: cephadm 2026-03-09T15:53:55.854543+0000 mgr.y (mgr.14150) 230 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:57 vm01 bash[28152]: cephadm 2026-03-09T15:53:55.837321+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:57 vm01 bash[28152]: cephadm 2026-03-09T15:53:55.837321+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
15:53:57 vm01 bash[28152]: cephadm 2026-03-09T15:53:55.853972+0000 mgr.y (mgr.14150) 229 : cephadm [INF] Adjusting osd_memory_target on vm09 to 113.9M 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:57 vm01 bash[28152]: cephadm 2026-03-09T15:53:55.853972+0000 mgr.y (mgr.14150) 229 : cephadm [INF] Adjusting osd_memory_target on vm09 to 113.9M 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:57 vm01 bash[28152]: cephadm 2026-03-09T15:53:55.854543+0000 mgr.y (mgr.14150) 230 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:57 vm01 bash[28152]: cephadm 2026-03-09T15:53:55.854543+0000 mgr.y (mgr.14150) 230 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:57 vm01 bash[20728]: cephadm 2026-03-09T15:53:55.837321+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:57 vm01 bash[20728]: cephadm 2026-03-09T15:53:55.837321+0000 mgr.y (mgr.14150) 228 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:57 vm01 bash[20728]: cephadm 2026-03-09T15:53:55.853972+0000 mgr.y (mgr.14150) 229 : cephadm [INF] Adjusting osd_memory_target on vm09 to 113.9M 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:57 vm01 bash[20728]: cephadm 2026-03-09T15:53:55.853972+0000 mgr.y (mgr.14150) 229 : cephadm [INF] Adjusting osd_memory_target on vm09 to 113.9M 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:57 vm01 bash[20728]: cephadm 2026-03-09T15:53:55.854543+0000 mgr.y (mgr.14150) 230 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T15:53:57.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:57 vm01 bash[20728]: cephadm 2026-03-09T15:53:55.854543+0000 mgr.y (mgr.14150) 230 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T15:53:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:58 vm09 bash[22983]: cluster 2026-03-09T15:53:57.091337+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:53:58 vm09 bash[22983]: cluster 2026-03-09T15:53:57.091337+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:58.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:58 vm01 bash[28152]: cluster 2026-03-09T15:53:57.091337+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:58.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:53:58 vm01 bash[28152]: cluster 2026-03-09T15:53:57.091337+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:58.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:58 
vm01 bash[20728]: cluster 2026-03-09T15:53:57.091337+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:58.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:53:58 vm01 bash[20728]: cluster 2026-03-09T15:53:57.091337+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:53:58.935 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:53:59.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 -- 192.168.123.101:0/1488049272 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff60c10a070 msgr2=0x7ff60c111bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:59.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 --2- 192.168.123.101:0/1488049272 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff60c10a070 0x7ff60c111bf0 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7ff60800b0a0 tx=0x7ff60802f470 comp rx=0 tx=0).stop 2026-03-09T15:53:59.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 -- 192.168.123.101:0/1488049272 shutdown_connections 2026-03-09T15:53:59.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 --2- 192.168.123.101:0/1488049272 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff60c10a070 0x7ff60c111bf0 unknown :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:59.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 --2- 192.168.123.101:0/1488049272 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff60c1058f0 0x7ff60c109940 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:59.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 --2- 192.168.123.101:0/1488049272 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60c104f40 0x7ff60c105320 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:59.096 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 -- 192.168.123.101:0/1488049272 >> 192.168.123.101:0/1488049272 conn(0x7ff60c1009e0 msgr2=0x7ff60c102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:53:59.097 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 -- 192.168.123.101:0/1488049272 shutdown_connections 2026-03-09T15:53:59.097 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 -- 192.168.123.101:0/1488049272 wait complete. 
2026-03-09T15:53:59.097 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 Processor -- start 2026-03-09T15:53:59.097 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 -- start start 2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60c104f40 0x7ff60c1a2610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff60c1058f0 0x7ff60c1a2b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff60c10a070 0x7ff60c19c770 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7ff60c114370 con 0x7ff60c1058f0 2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7ff60c1141f0 con 0x7ff60c104f40 2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7ff60c1144f0 con 0x7ff60c10a070 2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff610b9d640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60c104f40 0x7ff60c1a2610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff610b9d640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60c104f40 0x7ff60c1a2610 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:44588/0 (socket says 192.168.123.101:44588) 2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff610b9d640 1 -- 192.168.123.101:0/119381654 learned_addr learned my addr 192.168.123.101:0/119381654 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff603fff640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff60c1058f0 0x7ff60c1a2b50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff61139e640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff60c10a070 0x7ff60c19c770 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T15:53:59.098 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff610b9d640 1 -- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff60c10a070 msgr2=0x7ff60c19c770 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:59.099 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff610b9d640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff60c10a070 0x7ff60c19c770 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:59.099 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff610b9d640 1 -- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff60c1058f0 msgr2=0x7ff60c1a2b50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:59.099 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff610b9d640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff60c1058f0 0x7ff60c1a2b50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:59.099 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff610b9d640 1 -- 192.168.123.101:0/119381654 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff60c19cfa0 con 0x7ff60c104f40 2026-03-09T15:53:59.099 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff603fff640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff60c1058f0 0x7ff60c1a2b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T15:53:59.099 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff61139e640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff60c10a070 0x7ff60c19c770 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T15:53:59.099 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff610b9d640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60c104f40 0x7ff60c1a2610 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7ff5f400ea10 tx=0x7ff5f400eee0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:53:59.099 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff601ffb640 1 -- 192.168.123.101:0/119381654 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff5f400ce50 con 0x7ff60c104f40 2026-03-09T15:53:59.099 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.094+0000 7ff612e28640 1 -- 192.168.123.101:0/119381654 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff60c19d290 con 0x7ff60c104f40 2026-03-09T15:53:59.100 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.098+0000 7ff612e28640 1 -- 192.168.123.101:0/119381654 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff60c1a9600 con 0x7ff60c104f40 2026-03-09T15:53:59.100 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.098+0000 7ff601ffb640 1 -- 192.168.123.101:0/119381654 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ff5f4004510 con 0x7ff60c104f40 2026-03-09T15:53:59.100 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.098+0000 7ff601ffb640 1 -- 192.168.123.101:0/119381654 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff5f4010690 con 0x7ff60c104f40 2026-03-09T15:53:59.101 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.098+0000 7ff601ffb640 1 -- 192.168.123.101:0/119381654 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7ff5f4020070 con 0x7ff60c104f40 2026-03-09T15:53:59.102 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.098+0000 7ff601ffb640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff5dc0777b0 0x7ff5dc079c70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:53:59.102 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.098+0000 7ff601ffb640 1 -- 192.168.123.101:0/119381654 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(52..52 src has 1..52) ==== 5267+0+0 (secure 0 0 0) 0x7ff5f4099c40 con 0x7ff60c104f40 2026-03-09T15:53:59.102 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.098+0000 7ff612e28640 1 -- 192.168.123.101:0/119381654 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff5d4005180 con 0x7ff60c104f40 2026-03-09T15:53:59.102 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.098+0000 7ff603fff640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff5dc0777b0 0x7ff5dc079c70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:53:59.103 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.098+0000 7ff603fff640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff5dc0777b0 0x7ff5dc079c70 secure :-1 
s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7ff60c10b650 tx=0x7ff5fc004360 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:53:59.106 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.102+0000 7ff601ffb640 1 -- 192.168.123.101:0/119381654 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff5f401f2f0 con 0x7ff60c104f40 2026-03-09T15:53:59.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.202+0000 7ff612e28640 1 -- 192.168.123.101:0/119381654 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7ff5d4005470 con 0x7ff60c104f40 2026-03-09T15:53:59.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.202+0000 7ff601ffb640 1 -- 192.168.123.101:0/119381654 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v52) ==== 74+0+13835 (secure 0 0 0) 0x7ff5f4063100 con 0x7ff60c104f40 2026-03-09T15:53:59.206 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:53:59.206 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":52,"fsid":"397fadc0-1bcf-11f1-8481-edc1430c2c03","created":"2026-03-09T15:48:02.072991+0000","modified":"2026-03-09T15:53:51.167895+0000","last_up_change":"2026-03-09T15:53:49.113988+0000","last_in_change":"2026-03-09T15:53:32.372004+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T15:51:01.105730+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair
 distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"85259aee-a52d-45ab-8429-e3d0212392b7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6803","nonce":115286186}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6805","nonce":115286186}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6809","nonce":115286186}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6807","nonce":115286186}]},"public_addr":"192.168.123.101:6803/115286186","cluster_addr":"192.168.123.101:6805/115286186","heartbeat_back_addr":"192.168.123.101:6809/115286186","heartbeat_front_addr":"192.168.123.101:6807/115286186","state":["exists","up"]},{"osd":1,"uuid":"e7c85482-6eb5-4953-8a19-029686ffe773","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":32,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6811","nonce":4163266826}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6813","nonce":4163266826}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6816","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6817","nonce":4163266826}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6815","nonce":4163266826}]},"public_addr":"192.168.123.101:6811/4163266826","cluster_addr":"192.168.123.101:6813/4163266826","heartbeat_back_addr":"192.168.123.101:6817/4163266826","heartbeat_front_addr":"192.168.123.101:6815/4163266826","state":["exists","up"]},{"osd":2,"uuid":"d1982b6d-a77c-466e-996b-c1ff61952b4b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6818","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6819","nonce":1701239335}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6820","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6821","nonce":1701239335}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6824","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6825","nonce":1701239335}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6822","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6823","nonce":1701239335}]},"public_addr":"192.168.123.101:6819/1701239335","cluster_addr":"192.168.123.101:6821/1701239335","heartbeat_back_addr":"192.168.123.101:6825/1701239335","heartbeat_front_addr":"192.168.123.101:6823/1701239335","state":["exists","up"]},{"osd":3,"uuid":"59646c31-d8a8-4171-8402-970963810d37","up":1,"in":1,"weight":1,"primary_affinity":1,"last_c
lean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6826","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6827","nonce":994063283}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6828","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6829","nonce":994063283}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6832","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6833","nonce":994063283}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6830","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6831","nonce":994063283}]},"public_addr":"192.168.123.101:6827/994063283","cluster_addr":"192.168.123.101:6829/994063283","heartbeat_back_addr":"192.168.123.101:6833/994063283","heartbeat_front_addr":"192.168.123.101:6831/994063283","state":["exists","up"]},{"osd":4,"uuid":"642a6d0d-91ea-4433-b755-50a0d7442acf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6801","nonce":2242917856}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6803","nonce":2242917856}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6807","nonce":2242917856}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6805","nonce":2242917856}]},"public_addr":"192.168.123.109:6801/2242917856","cluster_addr":"192.168.123.109:6803/2242917856","heartbeat_back_addr":"192.168.123.109:6807/2242917856","heartbeat_front_addr":"192.168.123.109:6805/2242917856","state":["exists","up"]},{"osd":5,"uuid":"b983b1a8-523b-4ebf-b245-ff2849d684be","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":37,"up_thru":38,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6809","nonce":2799407982}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6811","nonce":2799407982}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6815","nonce":2799407982}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6813","nonce":2799407982}]},"public_addr":"192.168.123.109:6809/2799407982","cluster_addr":"192.168.123.109:6811/2799407982","heartbeat_back_addr":"192.168.123.109:6815/2799407982","heartbeat_front_addr":"192.168.123.109:6813/2799407982","state":["exists","up"]},{"osd":6,"uuid":"a674ed00-04ea-4bd3-ab96-9d977052e290","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":44,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6817","nonce":920695066}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6819","nonce":
920695066}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6823","nonce":920695066}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6821","nonce":920695066}]},"public_addr":"192.168.123.109:6817/920695066","cluster_addr":"192.168.123.109:6819/920695066","heartbeat_back_addr":"192.168.123.109:6823/920695066","heartbeat_front_addr":"192.168.123.109:6821/920695066","state":["exists","up"]},{"osd":7,"uuid":"d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":50,"up_thru":51,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6825","nonce":1747724061}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6827","nonce":1747724061}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6831","nonce":1747724061}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6829","nonce":1747724061}]},"public_addr":"192.168.123.109:6825/1747724061","cluster_addr":"192.168.123.109:6827/1747724061","heartbeat_back_addr":"192.168.123.109:6831/1747724061","heartbeat_front_addr":"192.168.123.109:6829/1747724061","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:49:48.509437+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:50:22.208224+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:50:56.403701+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:51:31.273145+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:52:05.018838+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:52:39.069791+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:53:12.641048+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:53:47.432546+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:6801/2530303036":"2026-03-10T15:48:23.034006+0000","192.168.123.101:6800/2530303036":"2026-03-10T15:48:23.034006+0000","192.168.123.101:0/3195752727":"2026-03-10T15:48:23.034006+0000","192.168.123.101:0/328174782":"2026-03
-10T15:48:23.034006+0000","192.168.123.101:6801/2299265276":"2026-03-10T15:48:12.689768+0000","192.168.123.101:6800/2299265276":"2026-03-10T15:48:12.689768+0000","192.168.123.101:0/407132826":"2026-03-10T15:48:12.689768+0000","192.168.123.101:0/3119601777":"2026-03-10T15:48:12.689768+0000","192.168.123.101:0/4136833719":"2026-03-10T15:48:23.034006+0000","192.168.123.101:0/957411864":"2026-03-10T15:48:12.689768+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T15:53:59.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 -- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff5dc0777b0 msgr2=0x7ff5dc079c70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:59.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff5dc0777b0 0x7ff5dc079c70 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7ff60c10b650 tx=0x7ff5fc004360 comp rx=0 tx=0).stop 2026-03-09T15:53:59.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 -- 192.168.123.101:0/119381654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60c104f40 msgr2=0x7ff60c1a2610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:53:59.209 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60c104f40 0x7ff60c1a2610 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7ff5f400ea10 tx=0x7ff5f400eee0 comp rx=0 tx=0).stop 2026-03-09T15:53:59.209 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 -- 192.168.123.101:0/119381654 shutdown_connections 2026-03-09T15:53:59.209 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7ff5dc0777b0 0x7ff5dc079c70 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:59.209 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff60c10a070 0x7ff60c19c770 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:59.209 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff60c1058f0 0x7ff60c1a2b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:59.209 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 --2- 192.168.123.101:0/119381654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60c104f40 0x7ff60c1a2610 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:53:59.209 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 -- 192.168.123.101:0/119381654 >> 192.168.123.101:0/119381654 conn(0x7ff60c1009e0 msgr2=0x7ff60c101db0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:53:59.209 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 -- 192.168.123.101:0/119381654 shutdown_connections 2026-03-09T15:53:59.209 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:53:59.206+0000 7ff612e28640 1 -- 192.168.123.101:0/119381654 wait complete. 2026-03-09T15:53:59.277 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T15:51:01.105730+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '22', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-09T15:53:59.277 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd pool get .mgr pg_num 2026-03-09T15:54:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:00 vm09 bash[22983]: cluster 2026-03-09T15:53:59.091653+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:00 vm09 bash[22983]: cluster 2026-03-09T15:53:59.091653+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:00 vm09 bash[22983]: 
audit 2026-03-09T15:53:59.205657+0000 mon.b (mon.1) 22 : audit [DBG] from='client.? 192.168.123.101:0/119381654' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:54:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:00 vm09 bash[22983]: audit 2026-03-09T15:53:59.205657+0000 mon.b (mon.1) 22 : audit [DBG] from='client.? 192.168.123.101:0/119381654' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:54:00.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:00 vm01 bash[28152]: cluster 2026-03-09T15:53:59.091653+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:00.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:00 vm01 bash[28152]: cluster 2026-03-09T15:53:59.091653+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:00.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:00 vm01 bash[28152]: audit 2026-03-09T15:53:59.205657+0000 mon.b (mon.1) 22 : audit [DBG] from='client.? 192.168.123.101:0/119381654' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:54:00.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:00 vm01 bash[28152]: audit 2026-03-09T15:53:59.205657+0000 mon.b (mon.1) 22 : audit [DBG] from='client.? 192.168.123.101:0/119381654' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:54:00.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:00 vm01 bash[20728]: cluster 2026-03-09T15:53:59.091653+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:00.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:00 vm01 bash[20728]: cluster 2026-03-09T15:53:59.091653+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:00.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:00 vm01 bash[20728]: audit 2026-03-09T15:53:59.205657+0000 mon.b (mon.1) 22 : audit [DBG] from='client.? 192.168.123.101:0/119381654' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:54:00.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:00 vm01 bash[20728]: audit 2026-03-09T15:53:59.205657+0000 mon.b (mon.1) 22 : audit [DBG] from='client.? 
192.168.123.101:0/119381654' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:54:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:02 vm09 bash[22983]: cluster 2026-03-09T15:54:01.091985+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:02 vm09 bash[22983]: cluster 2026-03-09T15:54:01.091985+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:02.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:02 vm01 bash[28152]: cluster 2026-03-09T15:54:01.091985+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:02.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:02 vm01 bash[28152]: cluster 2026-03-09T15:54:01.091985+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:02 vm01 bash[20728]: cluster 2026-03-09T15:54:01.091985+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:02 vm01 bash[20728]: cluster 2026-03-09T15:54:01.091985+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:02.957 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:54:03.125 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 -- 192.168.123.101:0/3699758359 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f72b4104590 msgr2=0x7f72b410ca80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:03.125 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 --2- 192.168.123.101:0/3699758359 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f72b4104590 0x7f72b410ca80 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f72a40099b0 tx=0x7f72a402f240 comp rx=0 tx=0).stop 2026-03-09T15:54:03.125 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 -- 192.168.123.101:0/3699758359 shutdown_connections 2026-03-09T15:54:03.125 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 --2- 192.168.123.101:0/3699758359 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f72b410d050 0x7f72b410f4f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:03.125 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 --2- 192.168.123.101:0/3699758359 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f72b4104590 0x7f72b410ca80 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:03.125 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 --2- 192.168.123.101:0/3699758359 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b4103c70 0x7f72b4104050 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:03.125 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 -- 192.168.123.101:0/3699758359 >> 192.168.123.101:0/3699758359 conn(0x7f72b40fd650 msgr2=0x7f72b40ffa70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:03.125 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 -- 192.168.123.101:0/3699758359 shutdown_connections 2026-03-09T15:54:03.125 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 -- 192.168.123.101:0/3699758359 wait complete. 2026-03-09T15:54:03.126 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 Processor -- start 2026-03-09T15:54:03.126 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 -- start start 2026-03-09T15:54:03.126 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f72b4103c70 0x7f72b419a750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72ba42f640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f72b4103c70 0x7f72b419a750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72ba42f640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f72b4103c70 0x7f72b419a750 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:60518/0 (socket says 192.168.123.101:60518) 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f72b4104590 0x7f72b419ac90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b410d050 0x7f72b419f020 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f72b4112140 con 0x7f72b4104590 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f72b4111fc0 con 0x7f72b410d050 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bc6ba640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f72b41122c0 con 0x7f72b4103c70 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72ba42f640 1 -- 192.168.123.101:0/1206633654 learned_addr learned my addr 192.168.123.101:0/1206633654 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72bac30640 1 --2- 
192.168.123.101:0/1206633654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b410d050 0x7f72b419f020 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72b9c2e640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f72b4104590 0x7f72b419ac90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72ba42f640 1 -- 192.168.123.101:0/1206633654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b410d050 msgr2=0x7f72b419f020 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72ba42f640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b410d050 0x7f72b419f020 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:03.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.122+0000 7f72ba42f640 1 -- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f72b4104590 msgr2=0x7f72b419ac90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:03.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.126+0000 7f72ba42f640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f72b4104590 0x7f72b419ac90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:03.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.126+0000 7f72ba42f640 1 -- 192.168.123.101:0/1206633654 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f72b419f7a0 con 0x7f72b4103c70 2026-03-09T15:54:03.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.126+0000 7f72bac30640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b410d050 0x7f72b419f020 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T15:54:03.128 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.126+0000 7f72ba42f640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f72b4103c70 0x7f72b419a750 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f72a800ed60 tx=0x7f72a800c6a0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:03.129 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.126+0000 7f72a37fe640 1 -- 192.168.123.101:0/1206633654 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f72a80040d0 con 0x7f72b4103c70 2026-03-09T15:54:03.129 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.126+0000 7f72a37fe640 1 -- 192.168.123.101:0/1206633654 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f72a80026e0 con 0x7f72b4103c70 2026-03-09T15:54:03.129 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.126+0000 7f72bc6ba640 1 -- 192.168.123.101:0/1206633654 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f72b419fa90 con 0x7f72b4103c70 2026-03-09T15:54:03.129 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.126+0000 7f72b9c2e640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f72b4104590 0x7f72b419ac90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T15:54:03.129 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.126+0000 7f72bc6ba640 1 -- 192.168.123.101:0/1206633654 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f72b4101780 con 0x7f72b4103c70 2026-03-09T15:54:03.131 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.126+0000 7f72a37fe640 1 -- 192.168.123.101:0/1206633654 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f72a8010650 con 0x7f72b4103c70 2026-03-09T15:54:03.131 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.126+0000 7f72bc6ba640 1 -- 192.168.123.101:0/1206633654 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f72b406ba40 con 0x7f72b4103c70 2026-03-09T15:54:03.135 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.130+0000 7f72a37fe640 1 -- 192.168.123.101:0/1206633654 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f72a8004270 con 0x7f72b4103c70 2026-03-09T15:54:03.135 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.130+0000 7f72a37fe640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f7294077650 0x7f7294079b10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:03.135 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.130+0000 7f72a37fe640 1 -- 192.168.123.101:0/1206633654 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(52..52 src has 1..52) ==== 5267+0+0 (secure 0 0 0) 0x7f72a8099a00 con 0x7f72b4103c70 2026-03-09T15:54:03.135 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.130+0000 7f72a37fe640 1 -- 192.168.123.101:0/1206633654 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f72a8099e20 con 
0x7f72b4103c70 2026-03-09T15:54:03.135 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.130+0000 7f72b9c2e640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f7294077650 0x7f7294079b10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:03.139 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.138+0000 7f72b9c2e640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f7294077650 0x7f7294079b10 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f72b40ffe90 tx=0x7f72a4031040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:03.235 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.230+0000 7f72bc6ba640 1 -- 192.168.123.101:0/1206633654 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"} v 0) -- 0x7f72b4104050 con 0x7f72b4103c70 2026-03-09T15:54:03.235 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72a37fe640 1 -- 192.168.123.101:0/1206633654 <== mon.2 v2:192.168.123.101:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]=0 v52) ==== 93+0+10 (secure 0 0 0) 0x7f72a8062f40 con 0x7f72b4103c70 2026-03-09T15:54:03.236 INFO:teuthology.orchestra.run.vm01.stdout:pg_num: 1 2026-03-09T15:54:03.238 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 -- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f7294077650 msgr2=0x7f7294079b10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:03.238 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f7294077650 0x7f7294079b10 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f72b40ffe90 tx=0x7f72a4031040 comp rx=0 tx=0).stop 2026-03-09T15:54:03.238 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 -- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f72b4103c70 msgr2=0x7f72b419a750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:03.238 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f72b4103c70 0x7f72b419a750 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f72a800ed60 tx=0x7f72a800c6a0 comp rx=0 tx=0).stop 2026-03-09T15:54:03.239 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 -- 192.168.123.101:0/1206633654 shutdown_connections 2026-03-09T15:54:03.239 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b410d050 0x7f72b419f020 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:03.239 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 --2- 192.168.123.101:0/1206633654 >> 
[v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f72b4104590 0x7f72b419ac90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:03.239 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f7294077650 0x7f7294079b10 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:03.239 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 --2- 192.168.123.101:0/1206633654 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f72b4103c70 0x7f72b419a750 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:03.239 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 -- 192.168.123.101:0/1206633654 >> 192.168.123.101:0/1206633654 conn(0x7f72b40fd650 msgr2=0x7f72b40ff190 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:03.239 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 -- 192.168.123.101:0/1206633654 shutdown_connections 2026-03-09T15:54:03.239 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:03.234+0000 7f72bc6ba640 1 -- 192.168.123.101:0/1206633654 wait complete. 2026-03-09T15:54:03.310 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm01 2026-03-09T15:54:03.310 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch apply rgw foo.a --placement '1;vm01=foo.a' 2026-03-09T15:54:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:04 vm01 bash[20728]: cluster 2026-03-09T15:54:03.092320+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:04 vm01 bash[20728]: cluster 2026-03-09T15:54:03.092320+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:04 vm01 bash[20728]: audit 2026-03-09T15:54:03.236650+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 192.168.123.101:0/1206633654' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T15:54:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:04 vm01 bash[20728]: audit 2026-03-09T15:54:03.236650+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 
192.168.123.101:0/1206633654' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T15:54:04.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:04 vm01 bash[28152]: cluster 2026-03-09T15:54:03.092320+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:04.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:04 vm01 bash[28152]: cluster 2026-03-09T15:54:03.092320+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:04.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:04 vm01 bash[28152]: audit 2026-03-09T15:54:03.236650+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 192.168.123.101:0/1206633654' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T15:54:04.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:04 vm01 bash[28152]: audit 2026-03-09T15:54:03.236650+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 192.168.123.101:0/1206633654' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T15:54:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:04 vm09 bash[22983]: cluster 2026-03-09T15:54:03.092320+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:04 vm09 bash[22983]: cluster 2026-03-09T15:54:03.092320+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:04 vm09 bash[22983]: audit 2026-03-09T15:54:03.236650+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 192.168.123.101:0/1206633654' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T15:54:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:04 vm09 bash[22983]: audit 2026-03-09T15:54:03.236650+0000 mon.c (mon.2) 14 : audit [DBG] from='client.? 
192.168.123.101:0/1206633654' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T15:54:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:06 vm09 bash[22983]: cluster 2026-03-09T15:54:05.092636+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:06 vm09 bash[22983]: cluster 2026-03-09T15:54:05.092636+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:06.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:06 vm01 bash[20728]: cluster 2026-03-09T15:54:05.092636+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:06.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:06 vm01 bash[20728]: cluster 2026-03-09T15:54:05.092636+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:06.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:06 vm01 bash[28152]: cluster 2026-03-09T15:54:05.092636+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:06.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:06 vm01 bash[28152]: cluster 2026-03-09T15:54:05.092636+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:07.964 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:54:08.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.123+0000 7f60c0735640 1 -- 192.168.123.109:0/2438786308 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f60b8105f70 msgr2=0x7f60b81063f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:08.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.123+0000 7f60c0735640 1 --2- 192.168.123.109:0/2438786308 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f60b8105f70 0x7f60b81063f0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f60a8009a30 tx=0x7f60a802f260 comp rx=0 tx=0).stop 2026-03-09T15:54:08.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 -- 192.168.123.109:0/2438786308 shutdown_connections 2026-03-09T15:54:08.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 --2- 192.168.123.109:0/2438786308 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f60b8106930 0x7f60b810d1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:08.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 --2- 192.168.123.109:0/2438786308 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f60b8105f70 0x7f60b81063f0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:08.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 --2- 192.168.123.109:0/2438786308 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f60b8104d70 0x7f60b8105170 unknown :-1 s=CLOSED pgs=0 cs=0 
l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:08.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 -- 192.168.123.109:0/2438786308 >> 192.168.123.109:0/2438786308 conn(0x7f60b8100520 msgr2=0x7f60b8102940 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:08.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 -- 192.168.123.109:0/2438786308 shutdown_connections 2026-03-09T15:54:08.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 -- 192.168.123.109:0/2438786308 wait complete. 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 Processor -- start 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 -- start start 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f60b8104d70 0x7f60b807aea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60be4aa640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f60b8104d70 0x7f60b807aea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60be4aa640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f60b8104d70 0x7f60b807aea0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.109:41724/0 (socket says 192.168.123.109:41724) 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f60b8105f70 0x7f60b807b3e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f60b8106930 0x7f60b80779a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f60b810fd80 con 0x7f60b8106930 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f60b810fc00 con 0x7f60b8104d70 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f60b810ff00 con 0x7f60b8105f70 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60be4aa640 1 -- 192.168.123.109:0/1571825511 learned_addr learned my addr 192.168.123.109:0/1571825511 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60bdca9640 1 --2- 
192.168.123.109:0/1571825511 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f60b8105f70 0x7f60b807b3e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60becab640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f60b8106930 0x7f60b80779a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60be4aa640 1 -- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f60b8105f70 msgr2=0x7f60b807b3e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60be4aa640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f60b8105f70 0x7f60b807b3e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60be4aa640 1 -- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f60b8106930 msgr2=0x7f60b80779a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60be4aa640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f60b8106930 0x7f60b80779a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60be4aa640 1 -- 192.168.123.109:0/1571825511 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f60b8078260 con 0x7f60b8104d70 2026-03-09T15:54:08.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60be4aa640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f60b8104d70 0x7f60b807aea0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f60ac00b7c0 tx=0x7f60ac00bc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:08.133 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60a77fe640 1 -- 192.168.123.109:0/1571825511 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f60ac004270 con 0x7f60b8104d70 2026-03-09T15:54:08.133 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 -- 192.168.123.109:0/1571825511 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f60b81a8af0 con 0x7f60b8104d70 2026-03-09T15:54:08.134 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60c0735640 1 -- 192.168.123.109:0/1571825511 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f60b81a8e90 con 0x7f60b8104d70 2026-03-09T15:54:08.134 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60a77fe640 1 -- 192.168.123.109:0/1571825511 <== mon.1 v2:192.168.123.109:3300/0 2 ==== 
config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f60ac010070 con 0x7f60b8104d70 2026-03-09T15:54:08.134 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.127+0000 7f60a77fe640 1 -- 192.168.123.109:0/1571825511 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f60ac00c920 con 0x7f60b8104d70 2026-03-09T15:54:08.134 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.131+0000 7f60a77fe640 1 -- 192.168.123.109:0/1571825511 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f60ac00cbe0 con 0x7f60b8104d70 2026-03-09T15:54:08.135 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.131+0000 7f60c0735640 1 -- 192.168.123.109:0/1571825511 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6088005180 con 0x7f60b8104d70 2026-03-09T15:54:08.137 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.131+0000 7f60a77fe640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f60840776e0 0x7f6084079ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:08.138 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.131+0000 7f60a77fe640 1 -- 192.168.123.109:0/1571825511 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(52..52 src has 1..52) ==== 5267+0+0 (secure 0 0 0) 0x7f60ac098760 con 0x7f60b8104d70 2026-03-09T15:54:08.138 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.131+0000 7f60bdca9640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f60840776e0 0x7f6084079ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:08.138 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.131+0000 7f60a77fe640 1 -- 192.168.123.109:0/1571825511 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f60ac061db0 con 0x7f60b8104d70 2026-03-09T15:54:08.143 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.139+0000 7f60bdca9640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f60840776e0 0x7f6084079ba0 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f60b8078e20 tx=0x7f60a803a040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:08.246 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.239+0000 7f60c0735640 1 -- 192.168.123.109:0/1571825511 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm01=foo.a", "target": ["mon-mgr", ""]}) -- 0x7f6088002bf0 con 0x7f60840776e0 2026-03-09T15:54:08.254 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.247+0000 7f60a77fe640 1 -- 192.168.123.109:0/1571825511 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+30 (secure 0 0 0) 0x7f6088002bf0 con 0x7f60840776e0 2026-03-09T15:54:08.254 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled rgw.foo.a update... 
2026-03-09T15:54:08.257 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.251+0000 7f60c0735640 1 -- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f60840776e0 msgr2=0x7f6084079ba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:08.257 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.251+0000 7f60c0735640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f60840776e0 0x7f6084079ba0 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f60b8078e20 tx=0x7f60a803a040 comp rx=0 tx=0).stop 2026-03-09T15:54:08.257 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.251+0000 7f60c0735640 1 -- 192.168.123.109:0/1571825511 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f60b8104d70 msgr2=0x7f60b807aea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:08.257 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.251+0000 7f60c0735640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f60b8104d70 0x7f60b807aea0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f60ac00b7c0 tx=0x7f60ac00bc90 comp rx=0 tx=0).stop 2026-03-09T15:54:08.257 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.251+0000 7f60c0735640 1 -- 192.168.123.109:0/1571825511 shutdown_connections 2026-03-09T15:54:08.258 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.251+0000 7f60c0735640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f60b8106930 0x7f60b80779a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:08.258 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.251+0000 7f60c0735640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f60840776e0 0x7f6084079ba0 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:08.258 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.251+0000 7f60c0735640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f60b8105f70 0x7f60b807b3e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:08.258 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.251+0000 7f60c0735640 1 --2- 192.168.123.109:0/1571825511 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f60b8104d70 0x7f60b807aea0 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:08.258 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.251+0000 7f60c0735640 1 -- 192.168.123.109:0/1571825511 >> 192.168.123.109:0/1571825511 conn(0x7f60b8100520 msgr2=0x7f60b8101fe0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:08.258 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.251+0000 7f60c0735640 1 -- 192.168.123.109:0/1571825511 shutdown_connections 2026-03-09T15:54:08.258 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:08.255+0000 7f60c0735640 1 -- 192.168.123.109:0/1571825511 wait complete. 
2026-03-09T15:54:08.275 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:08 vm09 bash[22983]: cluster 2026-03-09T15:54:07.092923+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:08.275 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:08 vm09 bash[22983]: cluster 2026-03-09T15:54:07.092923+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:08.328 DEBUG:teuthology.orchestra.run.vm01:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@rgw.foo.a.service 2026-03-09T15:54:08.329 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm09 2026-03-09T15:54:08.330 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd pool create datapool 3 3 replicated 2026-03-09T15:54:08.589 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:08 vm01 bash[28152]: cluster 2026-03-09T15:54:07.092923+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:08.589 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:08 vm01 bash[28152]: cluster 2026-03-09T15:54:07.092923+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:08.589 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:08 vm01 bash[20728]: cluster 2026-03-09T15:54:07.092923+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:08.589 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:08 vm01 bash[20728]: cluster 2026-03-09T15:54:07.092923+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:09.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:08 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.179 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:54:08 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:54:09.179 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:54:09 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.179 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 15:54:08 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.180 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 15:54:09 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:08 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.180 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:54:08 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.180 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:54:09 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.180 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:08 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.180 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:09 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.180 INFO:journalctl@ceph.osd.3.vm01.stdout:Mar 09 15:54:08 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.180 INFO:journalctl@ceph.osd.3.vm01.stdout:Mar 09 15:54:09 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.248218+0000 mgr.y (mgr.14150) 237 : audit [DBG] from='client.24295 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm01=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.248218+0000 mgr.y (mgr.14150) 237 : audit [DBG] from='client.24295 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm01=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: cephadm 2026-03-09T15:54:08.249357+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: cephadm 2026-03-09T15:54:08.249357+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.254078+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.254078+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.255285+0000 mon.a (mon.0) 653 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 
2026-03-09T15:54:08.255285+0000 mon.a (mon.0) 653 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.256828+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.256828+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.257338+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.257338+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.266180+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.266180+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.268917+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.268917+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.271742+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.271742+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.281960+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.281960+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.284570+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:08.284570+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: cephadm 2026-03-09T15:54:08.285465+0000 mgr.y (mgr.14150) 239 : cephadm [INF] Deploying daemon rgw.foo.a on vm01 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: cephadm 2026-03-09T15:54:08.285465+0000 mgr.y (mgr.14150) 239 : cephadm [INF] Deploying daemon rgw.foo.a on vm01 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:09.249535+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:09 vm01 bash[28152]: audit 2026-03-09T15:54:09.249535+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.248218+0000 mgr.y (mgr.14150) 237 : audit [DBG] from='client.24295 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm01=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.248218+0000 mgr.y (mgr.14150) 237 : audit [DBG] from='client.24295 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm01=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: cephadm 2026-03-09T15:54:08.249357+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: cephadm 2026-03-09T15:54:08.249357+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.254078+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.254078+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.255285+0000 mon.a (mon.0) 653 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.255285+0000 mon.a (mon.0) 653 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.256828+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.256828+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.257338+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:09.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.257338+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.266180+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.266180+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.268917+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.268917+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.271742+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.271742+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 
bash[20728]: audit 2026-03-09T15:54:08.281960+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.281960+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.284570+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:08.284570+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: cephadm 2026-03-09T15:54:08.285465+0000 mgr.y (mgr.14150) 239 : cephadm [INF] Deploying daemon rgw.foo.a on vm01 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: cephadm 2026-03-09T15:54:08.285465+0000 mgr.y (mgr.14150) 239 : cephadm [INF] Deploying daemon rgw.foo.a on vm01 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:09.249535+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.630 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:09 vm01 bash[20728]: audit 2026-03-09T15:54:09.249535+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.248218+0000 mgr.y (mgr.14150) 237 : audit [DBG] from='client.24295 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm01=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.248218+0000 mgr.y (mgr.14150) 237 : audit [DBG] from='client.24295 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm01=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: cephadm 2026-03-09T15:54:08.249357+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: cephadm 2026-03-09T15:54:08.249357+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.254078+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.254078+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.255285+0000 
mon.a (mon.0) 653 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.255285+0000 mon.a (mon.0) 653 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.256828+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.256828+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.257338+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.257338+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.266180+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.266180+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.268917+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.268917+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.271742+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.271742+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", 
"allow rwx tag rgw *=*"]}]': finished 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.281960+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.281960+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.284570+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:08.284570+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: cephadm 2026-03-09T15:54:08.285465+0000 mgr.y (mgr.14150) 239 : cephadm [INF] Deploying daemon rgw.foo.a on vm01 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: cephadm 2026-03-09T15:54:08.285465+0000 mgr.y (mgr.14150) 239 : cephadm [INF] Deploying daemon rgw.foo.a on vm01 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:09.249535+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:09 vm09 bash[22983]: audit 2026-03-09T15:54:09.249535+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: cluster 2026-03-09T15:54:09.093268+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: cluster 2026-03-09T15:54:09.093268+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: audit 2026-03-09T15:54:09.256593+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: audit 2026-03-09T15:54:09.256593+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: audit 2026-03-09T15:54:09.281337+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: audit 2026-03-09T15:54:09.281337+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: cephadm 2026-03-09T15:54:09.283856+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving 
service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: cephadm 2026-03-09T15:54:09.283856+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: audit 2026-03-09T15:54:09.301770+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: audit 2026-03-09T15:54:09.301770+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: audit 2026-03-09T15:54:09.311047+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: audit 2026-03-09T15:54:09.311047+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: audit 2026-03-09T15:54:09.323696+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:10 vm09 bash[22983]: audit 2026-03-09T15:54:09.323696+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:10.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: cluster 2026-03-09T15:54:09.093268+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:10.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: cluster 2026-03-09T15:54:09.093268+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:10.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: audit 2026-03-09T15:54:09.256593+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: audit 2026-03-09T15:54:09.256593+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: audit 2026-03-09T15:54:09.281337+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: audit 2026-03-09T15:54:09.281337+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: cephadm 2026-03-09T15:54:09.283856+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: cephadm 
2026-03-09T15:54:09.283856+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: audit 2026-03-09T15:54:09.301770+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: audit 2026-03-09T15:54:09.301770+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: audit 2026-03-09T15:54:09.311047+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: audit 2026-03-09T15:54:09.311047+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: audit 2026-03-09T15:54:09.323696+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:10 vm01 bash[28152]: audit 2026-03-09T15:54:09.323696+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: cluster 2026-03-09T15:54:09.093268+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: cluster 2026-03-09T15:54:09.093268+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: audit 2026-03-09T15:54:09.256593+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: audit 2026-03-09T15:54:09.256593+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: audit 2026-03-09T15:54:09.281337+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: audit 2026-03-09T15:54:09.281337+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: cephadm 2026-03-09T15:54:09.283856+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: cephadm 2026-03-09T15:54:09.283856+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm01=foo.a;count:1 2026-03-09T15:54:10.680 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: audit 2026-03-09T15:54:09.301770+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: audit 2026-03-09T15:54:09.301770+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: audit 2026-03-09T15:54:09.311047+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: audit 2026-03-09T15:54:09.311047+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: audit 2026-03-09T15:54:09.323696+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:10.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:10 vm01 bash[20728]: audit 2026-03-09T15:54:09.323696+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:11 vm09 bash[22983]: cluster 2026-03-09T15:54:10.324613+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T15:54:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:11 vm09 bash[22983]: cluster 2026-03-09T15:54:10.324613+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T15:54:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:11 vm09 bash[22983]: audit 2026-03-09T15:54:10.329396+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T15:54:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:11 vm09 bash[22983]: audit 2026-03-09T15:54:10.329396+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T15:54:11.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:11 vm01 bash[28152]: cluster 2026-03-09T15:54:10.324613+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T15:54:11.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:11 vm01 bash[28152]: cluster 2026-03-09T15:54:10.324613+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T15:54:11.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:11 vm01 bash[28152]: audit 2026-03-09T15:54:10.329396+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T15:54:11.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:11 vm01 bash[28152]: audit 2026-03-09T15:54:10.329396+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 
192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T15:54:11.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:11 vm01 bash[20728]: cluster 2026-03-09T15:54:10.324613+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T15:54:11.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:11 vm01 bash[20728]: cluster 2026-03-09T15:54:10.324613+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T15:54:11.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:11 vm01 bash[20728]: audit 2026-03-09T15:54:10.329396+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T15:54:11.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:11 vm01 bash[20728]: audit 2026-03-09T15:54:10.329396+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T15:54:11.986 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:54:12.155 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.147+0000 7f35643ee640 1 -- 192.168.123.109:0/2005392290 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f355c104d70 msgr2=0x7f355c105170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:12.155 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.147+0000 7f35643ee640 1 --2- 192.168.123.109:0/2005392290 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f355c104d70 0x7f355c105170 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f3550009a80 tx=0x7f355002f270 comp rx=0 tx=0).stop 2026-03-09T15:54:12.155 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 -- 192.168.123.109:0/2005392290 shutdown_connections 2026-03-09T15:54:12.155 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 --2- 192.168.123.109:0/2005392290 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f355c106930 0x7f355c10d1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:12.155 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 --2- 192.168.123.109:0/2005392290 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f355c105f70 0x7f355c1063f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:12.155 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 --2- 192.168.123.109:0/2005392290 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f355c104d70 0x7f355c105170 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:12.155 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 -- 192.168.123.109:0/2005392290 >> 192.168.123.109:0/2005392290 conn(0x7f355c100520 msgr2=0x7f355c102940 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:12.155 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 -- 192.168.123.109:0/2005392290 shutdown_connections 2026-03-09T15:54:12.155 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 -- 192.168.123.109:0/2005392290 wait complete. 2026-03-09T15:54:12.155 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 Processor -- start 2026-03-09T15:54:12.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 -- start start 2026-03-09T15:54:12.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f355c104d70 0x7f355c19c420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:12.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f355c105f70 0x7f355c19c960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:12.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f355c106930 0x7f355c1a39e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:12.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f355c10fc10 con 0x7f355c106930 2026-03-09T15:54:12.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f355c10fa90 con 0x7f355c105f70 2026-03-09T15:54:12.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f355c10fd90 con 0x7f355c104d70 2026-03-09T15:54:12.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3561962640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f355c105f70 0x7f355c19c960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:12.157 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3561962640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f355c105f70 0x7f355c19c960 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.109:41734/0 (socket says 192.168.123.109:41734) 2026-03-09T15:54:12.157 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3561962640 1 -- 192.168.123.109:0/2747609006 learned_addr learned my addr 192.168.123.109:0/2747609006 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:54:12.157 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3562163640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f355c104d70 0x7f355c19c420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:12.157 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3562964640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f355c106930 0x7f355c1a39e0 
unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:12.157 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3561962640 1 -- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f355c104d70 msgr2=0x7f355c19c420 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:12.157 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3561962640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f355c104d70 0x7f355c19c420 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:12.157 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3561962640 1 -- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f355c106930 msgr2=0x7f355c1a39e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:12.157 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3561962640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f355c106930 0x7f355c1a39e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:12.157 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3561962640 1 -- 192.168.123.109:0/2747609006 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f355c1a40e0 con 0x7f355c105f70 2026-03-09T15:54:12.157 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3562163640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f355c104d70 0x7f355c19c420 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:54:12.157 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3562964640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f355c106930 0x7f355c1a39e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
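The journalctl audit trail above shows cephadm deploying the rgw.foo.a service requested by the suite: mgr.y saves the rgw.foo.a spec with placement vm01=foo.a;count:1, requests the client.rgw.foo.a key with the caps shown, and deploys the daemon on vm01. The repeated "KillMode=none" warnings come from the unit template cephadm installs for the cluster's systemd services; systemd flags that setting as deprecated, but it does not affect this run. A minimal sketch of the equivalent manual commands follows; the --placement string is an assumed CLI form of the JSON placement recorded in the audit log:

    # Apply an RGW service spec pinned to a single host (placement syntax assumed)
    ceph orch apply rgw foo.a --placement="count:1 vm01"

    # Same caps cephadm requests for the daemon key (copied from the audit entries)
    ceph auth get-or-create client.rgw.foo.a \
        mon 'allow *' mgr 'allow rw' osd 'allow rwx tag rgw *=*'

    # Verify the service and daemon were scheduled
    ceph orch ls rgw
    ceph orch ps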
2026-03-09T15:54:12.158 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f3561962640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f355c105f70 0x7f355c19c960 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f354c00d950 tx=0x7f354c00de20 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:12.158 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f354b7fe640 1 -- 192.168.123.109:0/2747609006 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f354c014070 con 0x7f355c105f70 2026-03-09T15:54:12.158 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.151+0000 7f35643ee640 1 -- 192.168.123.109:0/2747609006 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f355c1a43d0 con 0x7f355c105f70 2026-03-09T15:54:12.159 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.155+0000 7f35643ee640 1 -- 192.168.123.109:0/2747609006 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f355c1a4960 con 0x7f355c105f70 2026-03-09T15:54:12.159 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.155+0000 7f354b7fe640 1 -- 192.168.123.109:0/2747609006 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f354c0044e0 con 0x7f355c105f70 2026-03-09T15:54:12.159 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.155+0000 7f354b7fe640 1 -- 192.168.123.109:0/2747609006 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f354c004e30 con 0x7f355c105f70 2026-03-09T15:54:12.160 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.155+0000 7f35643ee640 1 -- 192.168.123.109:0/2747609006 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f352c005180 con 0x7f355c105f70 2026-03-09T15:54:12.162 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.155+0000 7f354b7fe640 1 -- 192.168.123.109:0/2747609006 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f354c00b900 con 0x7f355c105f70 2026-03-09T15:54:12.163 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.155+0000 7f354b7fe640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3528077640 0x7f3528079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:12.163 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.159+0000 7f354b7fe640 1 -- 192.168.123.109:0/2747609006 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(54..54 src has 1..54) ==== 5626+0+0 (secure 0 0 0) 0x7f354c05db30 con 0x7f355c105f70 2026-03-09T15:54:12.163 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.159+0000 7f3562163640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3528077640 0x7f3528079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:12.163 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.159+0000 7f3562163640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3528077640 0x7f3528079b00 
secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f35500040c0 tx=0x7f35500023d0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:12.164 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.159+0000 7f354b7fe640 1 -- 192.168.123.109:0/2747609006 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f354c061e90 con 0x7f355c105f70 2026-03-09T15:54:12.262 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.255+0000 7f35643ee640 1 -- 192.168.123.109:0/2747609006 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"} v 0) -- 0x7f352c005470 con 0x7f355c105f70 2026-03-09T15:54:12.338 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.331+0000 7f354b7fe640 1 -- 192.168.123.109:0/2747609006 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]=0 pool 'datapool' created v55) ==== 160+0+0 (secure 0 0 0) 0x7f354c065b40 con 0x7f355c105f70 2026-03-09T15:54:12.338 INFO:teuthology.orchestra.run.vm09.stderr:pool 'datapool' created 2026-03-09T15:54:12.345 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 7f35643ee640 1 -- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3528077640 msgr2=0x7f3528079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:12.345 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 7f35643ee640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3528077640 0x7f3528079b00 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f35500040c0 tx=0x7f35500023d0 comp rx=0 tx=0).stop 2026-03-09T15:54:12.346 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 7f35643ee640 1 -- 192.168.123.109:0/2747609006 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f355c105f70 msgr2=0x7f355c19c960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:12.346 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 7f35643ee640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f355c105f70 0x7f355c19c960 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f354c00d950 tx=0x7f354c00de20 comp rx=0 tx=0).stop 2026-03-09T15:54:12.346 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 7f35643ee640 1 -- 192.168.123.109:0/2747609006 shutdown_connections 2026-03-09T15:54:12.346 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 7f35643ee640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f355c106930 0x7f355c1a39e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:12.346 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 7f35643ee640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f355c105f70 0x7f355c19c960 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:12.346 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 
7f35643ee640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f3528077640 0x7f3528079b00 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:12.346 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 7f35643ee640 1 --2- 192.168.123.109:0/2747609006 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f355c104d70 0x7f355c19c420 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:12.346 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 7f35643ee640 1 -- 192.168.123.109:0/2747609006 >> 192.168.123.109:0/2747609006 conn(0x7f355c100520 msgr2=0x7f355c101e10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:12.346 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 7f35643ee640 1 -- 192.168.123.109:0/2747609006 shutdown_connections 2026-03-09T15:54:12.346 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:12.339+0000 7f35643ee640 1 -- 192.168.123.109:0/2747609006 wait complete. 2026-03-09T15:54:12.417 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- rbd pool init datapool 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: cluster 2026-03-09T15:54:11.093607+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v217: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: cluster 2026-03-09T15:54:11.093607+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v217: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: cluster 2026-03-09T15:54:11.310523+0000 mon.a (mon.0) 669 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: cluster 2026-03-09T15:54:11.310523+0000 mon.a (mon.0) 669 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: audit 2026-03-09T15:54:11.321451+0000 mon.a (mon.0) 670 : audit [INF] from='client.? 192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: audit 2026-03-09T15:54:11.321451+0000 mon.a (mon.0) 670 : audit [INF] from='client.? 
192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: cluster 2026-03-09T15:54:11.339552+0000 mon.a (mon.0) 671 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: cluster 2026-03-09T15:54:11.339552+0000 mon.a (mon.0) 671 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: audit 2026-03-09T15:54:11.607073+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: audit 2026-03-09T15:54:11.607073+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: audit 2026-03-09T15:54:12.262958+0000 mon.b (mon.1) 23 : audit [INF] from='client.? 192.168.123.109:0/2747609006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: audit 2026-03-09T15:54:12.262958+0000 mon.b (mon.1) 23 : audit [INF] from='client.? 192.168.123.109:0/2747609006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: audit 2026-03-09T15:54:12.264669+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:12 vm09 bash[22983]: audit 2026-03-09T15:54:12.264669+0000 mon.a (mon.0) 673 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: cluster 2026-03-09T15:54:11.093607+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v217: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: cluster 2026-03-09T15:54:11.093607+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v217: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: cluster 2026-03-09T15:54:11.310523+0000 mon.a (mon.0) 669 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: cluster 2026-03-09T15:54:11.310523+0000 mon.a (mon.0) 669 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: audit 2026-03-09T15:54:11.321451+0000 mon.a (mon.0) 670 : audit [INF] from='client.? 192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: audit 2026-03-09T15:54:11.321451+0000 mon.a (mon.0) 670 : audit [INF] from='client.? 192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: cluster 2026-03-09T15:54:11.339552+0000 mon.a (mon.0) 671 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: cluster 2026-03-09T15:54:11.339552+0000 mon.a (mon.0) 671 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: audit 2026-03-09T15:54:11.607073+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: audit 2026-03-09T15:54:11.607073+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: audit 2026-03-09T15:54:12.262958+0000 mon.b (mon.1) 23 : audit [INF] from='client.? 192.168.123.109:0/2747609006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: audit 2026-03-09T15:54:12.262958+0000 mon.b (mon.1) 23 : audit [INF] from='client.? 
192.168.123.109:0/2747609006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: audit 2026-03-09T15:54:12.264669+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:12.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:12 vm01 bash[28152]: audit 2026-03-09T15:54:12.264669+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: cluster 2026-03-09T15:54:11.093607+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v217: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: cluster 2026-03-09T15:54:11.093607+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v217: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: cluster 2026-03-09T15:54:11.310523+0000 mon.a (mon.0) 669 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: cluster 2026-03-09T15:54:11.310523+0000 mon.a (mon.0) 669 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: audit 2026-03-09T15:54:11.321451+0000 mon.a (mon.0) 670 : audit [INF] from='client.? 192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: audit 2026-03-09T15:54:11.321451+0000 mon.a (mon.0) 670 : audit [INF] from='client.? 
192.168.123.101:0/3736015795' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: cluster 2026-03-09T15:54:11.339552+0000 mon.a (mon.0) 671 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: cluster 2026-03-09T15:54:11.339552+0000 mon.a (mon.0) 671 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: audit 2026-03-09T15:54:11.607073+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: audit 2026-03-09T15:54:11.607073+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: audit 2026-03-09T15:54:12.262958+0000 mon.b (mon.1) 23 : audit [INF] from='client.? 192.168.123.109:0/2747609006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: audit 2026-03-09T15:54:12.262958+0000 mon.b (mon.1) 23 : audit [INF] from='client.? 192.168.123.109:0/2747609006' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: audit 2026-03-09T15:54:12.264669+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:12.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:12 vm01 bash[20728]: audit 2026-03-09T15:54:12.264669+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: audit 2026-03-09T15:54:12.325692+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: audit 2026-03-09T15:54:12.325692+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: cluster 2026-03-09T15:54:12.339719+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: cluster 2026-03-09T15:54:12.339719+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: audit 2026-03-09T15:54:12.340287+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: audit 2026-03-09T15:54:12.340287+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: audit 2026-03-09T15:54:12.346687+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: audit 2026-03-09T15:54:12.346687+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: audit 2026-03-09T15:54:13.329734+0000 mon.a (mon.0) 677 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: audit 2026-03-09T15:54:13.329734+0000 mon.a (mon.0) 677 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: cluster 2026-03-09T15:54:13.363743+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T15:54:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:13 vm09 bash[22983]: cluster 2026-03-09T15:54:13.363743+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T15:54:13.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: audit 2026-03-09T15:54:12.325692+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T15:54:13.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: audit 2026-03-09T15:54:12.325692+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T15:54:13.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: cluster 2026-03-09T15:54:12.339719+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T15:54:13.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: cluster 2026-03-09T15:54:12.339719+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T15:54:13.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: audit 2026-03-09T15:54:12.340287+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: audit 2026-03-09T15:54:12.340287+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: audit 2026-03-09T15:54:12.346687+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: audit 2026-03-09T15:54:12.346687+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: audit 2026-03-09T15:54:13.329734+0000 mon.a (mon.0) 677 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: audit 2026-03-09T15:54:13.329734+0000 mon.a (mon.0) 677 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: cluster 2026-03-09T15:54:13.363743+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:13 vm01 bash[28152]: cluster 2026-03-09T15:54:13.363743+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: audit 2026-03-09T15:54:12.325692+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: audit 2026-03-09T15:54:12.325692+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: cluster 2026-03-09T15:54:12.339719+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: cluster 2026-03-09T15:54:12.339719+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: audit 2026-03-09T15:54:12.340287+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: audit 2026-03-09T15:54:12.340287+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: audit 2026-03-09T15:54:12.346687+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: audit 2026-03-09T15:54:12.346687+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: audit 2026-03-09T15:54:13.329734+0000 mon.a (mon.0) 677 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: audit 2026-03-09T15:54:13.329734+0000 mon.a (mon.0) 677 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: cluster 2026-03-09T15:54:13.363743+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T15:54:13.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:13 vm01 bash[20728]: cluster 2026-03-09T15:54:13.363743+0000 mon.a (mon.0) 678 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T15:54:14.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:14 vm01 bash[28152]: cluster 2026-03-09T15:54:13.093935+0000 mgr.y (mgr.14150) 243 : cluster [DBG] pgmap v220: 68 pgs: 11 creating+peering, 48 unknown, 9 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:54:14.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:14 vm01 bash[28152]: cluster 2026-03-09T15:54:13.093935+0000 mgr.y (mgr.14150) 243 : cluster [DBG] pgmap v220: 68 pgs: 11 creating+peering, 48 unknown, 9 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:54:14.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:14 vm01 bash[28152]: cluster 2026-03-09T15:54:14.355801+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T15:54:14.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:14 vm01 bash[28152]: cluster 2026-03-09T15:54:14.355801+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T15:54:14.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:14 vm01 bash[28152]: audit 2026-03-09T15:54:14.361513+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:14.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:14 vm01 bash[28152]: audit 2026-03-09T15:54:14.361513+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:14.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:14 vm01 bash[28152]: audit 2026-03-09T15:54:14.365091+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:14.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:14 vm01 bash[28152]: audit 2026-03-09T15:54:14.365091+0000 mon.a (mon.0) 680 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:14.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:14 vm01 bash[20728]: cluster 2026-03-09T15:54:13.093935+0000 mgr.y (mgr.14150) 243 : cluster [DBG] pgmap v220: 68 pgs: 11 creating+peering, 48 unknown, 9 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:54:14.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:14 vm01 bash[20728]: cluster 2026-03-09T15:54:13.093935+0000 mgr.y (mgr.14150) 243 : cluster [DBG] pgmap v220: 68 pgs: 11 creating+peering, 48 unknown, 9 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:54:14.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:14 vm01 bash[20728]: cluster 2026-03-09T15:54:14.355801+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T15:54:14.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:14 vm01 bash[20728]: cluster 2026-03-09T15:54:14.355801+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T15:54:14.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:14 vm01 bash[20728]: audit 2026-03-09T15:54:14.361513+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:14.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:14 vm01 bash[20728]: audit 2026-03-09T15:54:14.361513+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:14.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:14 vm01 bash[20728]: audit 2026-03-09T15:54:14.365091+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:14.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:14 vm01 bash[20728]: audit 2026-03-09T15:54:14.365091+0000 mon.a (mon.0) 680 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:14 vm09 bash[22983]: cluster 2026-03-09T15:54:13.093935+0000 mgr.y (mgr.14150) 243 : cluster [DBG] pgmap v220: 68 pgs: 11 creating+peering, 48 unknown, 9 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:54:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:14 vm09 bash[22983]: cluster 2026-03-09T15:54:13.093935+0000 mgr.y (mgr.14150) 243 : cluster [DBG] pgmap v220: 68 pgs: 11 creating+peering, 48 unknown, 9 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:54:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:14 vm09 bash[22983]: cluster 2026-03-09T15:54:14.355801+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T15:54:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:14 vm09 bash[22983]: cluster 2026-03-09T15:54:14.355801+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T15:54:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:14 vm09 bash[22983]: audit 2026-03-09T15:54:14.361513+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:14 vm09 bash[22983]: audit 2026-03-09T15:54:14.361513+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:14 vm09 bash[22983]: audit 2026-03-09T15:54:14.365091+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:14 vm09 bash[22983]: audit 2026-03-09T15:54:14.365091+0000 mon.a (mon.0) 680 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T15:54:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: audit 2026-03-09T15:54:14.475881+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: audit 2026-03-09T15:54:14.475881+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: audit 2026-03-09T15:54:14.482118+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: audit 2026-03-09T15:54:14.482118+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: audit 2026-03-09T15:54:14.844375+0000 mon.a (mon.0) 683 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: audit 2026-03-09T15:54:14.844375+0000 mon.a (mon.0) 683 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: audit 2026-03-09T15:54:14.845078+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: audit 2026-03-09T15:54:14.845078+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: audit 2026-03-09T15:54:15.342701+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T15:54:15.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: audit 2026-03-09T15:54:15.342701+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T15:54:15.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: cluster 2026-03-09T15:54:15.352901+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T15:54:15.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:15 vm09 bash[22983]: cluster 2026-03-09T15:54:15.352901+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T15:54:15.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: audit 2026-03-09T15:54:14.475881+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: audit 2026-03-09T15:54:14.475881+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: audit 2026-03-09T15:54:14.482118+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: audit 2026-03-09T15:54:14.482118+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: audit 2026-03-09T15:54:14.844375+0000 mon.a (mon.0) 683 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: audit 2026-03-09T15:54:14.844375+0000 mon.a (mon.0) 683 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: audit 2026-03-09T15:54:14.845078+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: audit 2026-03-09T15:54:14.845078+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: audit 2026-03-09T15:54:15.342701+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: audit 2026-03-09T15:54:15.342701+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: cluster 2026-03-09T15:54:15.352901+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:15 vm01 bash[28152]: cluster 2026-03-09T15:54:15.352901+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: audit 2026-03-09T15:54:14.475881+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: audit 2026-03-09T15:54:14.475881+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: audit 2026-03-09T15:54:14.482118+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: audit 2026-03-09T15:54:14.482118+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: audit 2026-03-09T15:54:14.844375+0000 mon.a (mon.0) 683 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: audit 2026-03-09T15:54:14.844375+0000 mon.a (mon.0) 683 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: audit 2026-03-09T15:54:14.845078+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: audit 2026-03-09T15:54:14.845078+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: audit 2026-03-09T15:54:15.342701+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: audit 2026-03-09T15:54:15.342701+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: cluster 2026-03-09T15:54:15.352901+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T15:54:15.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:15 vm01 bash[20728]: cluster 2026-03-09T15:54:15.352901+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: cephadm 2026-03-09T15:54:14.847816+0000 mgr.y (mgr.14150) 244 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: cephadm 2026-03-09T15:54:14.847816+0000 mgr.y (mgr.14150) 244 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: cluster 2026-03-09T15:54:15.094677+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v223: 100 pgs: 15 creating+peering, 50 unknown, 35 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: cluster 2026-03-09T15:54:15.094677+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v223: 100 pgs: 15 creating+peering, 50 unknown, 35 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: cluster 2026-03-09T15:54:16.358110+0000 mon.a (mon.0) 687 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: cluster 2026-03-09T15:54:16.358110+0000 mon.a (mon.0) 687 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: audit 2026-03-09T15:54:16.365088+0000 mon.b (mon.1) 24 : audit [INF] from='client.? 192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: audit 2026-03-09T15:54:16.365088+0000 mon.b (mon.1) 24 : audit [INF] from='client.? 192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: audit 2026-03-09T15:54:16.367261+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: audit 2026-03-09T15:54:16.367261+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: audit 2026-03-09T15:54:16.372266+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: audit 2026-03-09T15:54:16.372266+0000 mon.a (mon.0) 688 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: audit 2026-03-09T15:54:16.372386+0000 mon.a (mon.0) 689 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:16 vm09 bash[22983]: audit 2026-03-09T15:54:16.372386+0000 mon.a (mon.0) 689 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: cephadm 2026-03-09T15:54:14.847816+0000 mgr.y (mgr.14150) 244 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: cephadm 2026-03-09T15:54:14.847816+0000 mgr.y (mgr.14150) 244 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: cluster 2026-03-09T15:54:15.094677+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v223: 100 pgs: 15 creating+peering, 50 unknown, 35 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: cluster 2026-03-09T15:54:15.094677+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v223: 100 pgs: 15 creating+peering, 50 unknown, 35 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: cluster 2026-03-09T15:54:16.358110+0000 mon.a (mon.0) 687 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: cluster 2026-03-09T15:54:16.358110+0000 mon.a (mon.0) 687 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: audit 2026-03-09T15:54:16.365088+0000 mon.b (mon.1) 24 : audit [INF] from='client.? 192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: audit 2026-03-09T15:54:16.365088+0000 mon.b (mon.1) 24 : audit [INF] from='client.? 192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: audit 2026-03-09T15:54:16.367261+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 
192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: audit 2026-03-09T15:54:16.367261+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: audit 2026-03-09T15:54:16.372266+0000 mon.a (mon.0) 688 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: audit 2026-03-09T15:54:16.372266+0000 mon.a (mon.0) 688 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: audit 2026-03-09T15:54:16.372386+0000 mon.a (mon.0) 689 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:16 vm01 bash[28152]: audit 2026-03-09T15:54:16.372386+0000 mon.a (mon.0) 689 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: cephadm 2026-03-09T15:54:14.847816+0000 mgr.y (mgr.14150) 244 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: cephadm 2026-03-09T15:54:14.847816+0000 mgr.y (mgr.14150) 244 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: cluster 2026-03-09T15:54:15.094677+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v223: 100 pgs: 15 creating+peering, 50 unknown, 35 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: cluster 2026-03-09T15:54:15.094677+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v223: 100 pgs: 15 creating+peering, 50 unknown, 35 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: cluster 2026-03-09T15:54:16.358110+0000 mon.a (mon.0) 687 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: cluster 2026-03-09T15:54:16.358110+0000 mon.a (mon.0) 687 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: audit 2026-03-09T15:54:16.365088+0000 mon.b (mon.1) 24 : audit [INF] from='client.? 
192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: audit 2026-03-09T15:54:16.365088+0000 mon.b (mon.1) 24 : audit [INF] from='client.? 192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: audit 2026-03-09T15:54:16.367261+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: audit 2026-03-09T15:54:16.367261+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: audit 2026-03-09T15:54:16.372266+0000 mon.a (mon.0) 688 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: audit 2026-03-09T15:54:16.372266+0000 mon.a (mon.0) 688 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: audit 2026-03-09T15:54:16.372386+0000 mon.a (mon.0) 689 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:16.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:16 vm01 bash[20728]: audit 2026-03-09T15:54:16.372386+0000 mon.a (mon.0) 689 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T15:54:17.056 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:54:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.201293+0000 mon.b (mon.1) 25 : audit [INF] from='client.? 192.168.123.109:0/553246306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.201293+0000 mon.b (mon.1) 25 : audit [INF] from='client.? 192.168.123.109:0/553246306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.203065+0000 mon.a (mon.0) 690 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.203065+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.350855+0000 mon.a (mon.0) 691 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.350855+0000 mon.a (mon.0) 691 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.351218+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.351218+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.351331+0000 mon.a (mon.0) 693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.351331+0000 mon.a (mon.0) 693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: cluster 2026-03-09T15:54:17.364376+0000 mon.a (mon.0) 694 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: cluster 2026-03-09T15:54:17.364376+0000 mon.a (mon.0) 694 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.378223+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.378223+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.378585+0000 mon.a (mon.0) 695 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.378585+0000 mon.a (mon.0) 695 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.382653+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.382653+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.385225+0000 mon.a (mon.0) 696 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:17 vm09 bash[22983]: audit 2026-03-09T15:54:17.385225+0000 mon.a (mon.0) 696 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.201293+0000 mon.b (mon.1) 25 : audit [INF] from='client.? 192.168.123.109:0/553246306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.201293+0000 mon.b (mon.1) 25 : audit [INF] from='client.? 192.168.123.109:0/553246306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.203065+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.203065+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.350855+0000 mon.a (mon.0) 691 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.350855+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.351218+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.351218+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.351331+0000 mon.a (mon.0) 693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.351331+0000 mon.a (mon.0) 693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: cluster 2026-03-09T15:54:17.364376+0000 mon.a (mon.0) 694 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: cluster 2026-03-09T15:54:17.364376+0000 mon.a (mon.0) 694 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.378223+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.378223+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.378585+0000 mon.a (mon.0) 695 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.378585+0000 mon.a (mon.0) 695 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.382653+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 
192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.382653+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.385225+0000 mon.a (mon.0) 696 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:17 vm01 bash[28152]: audit 2026-03-09T15:54:17.385225+0000 mon.a (mon.0) 696 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.201293+0000 mon.b (mon.1) 25 : audit [INF] from='client.? 192.168.123.109:0/553246306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.201293+0000 mon.b (mon.1) 25 : audit [INF] from='client.? 192.168.123.109:0/553246306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.203065+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.203065+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.350855+0000 mon.a (mon.0) 691 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.350855+0000 mon.a (mon.0) 691 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.351218+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.351218+0000 mon.a (mon.0) 692 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.351331+0000 mon.a (mon.0) 693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.351331+0000 mon.a (mon.0) 693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: cluster 2026-03-09T15:54:17.364376+0000 mon.a (mon.0) 694 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: cluster 2026-03-09T15:54:17.364376+0000 mon.a (mon.0) 694 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.378223+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.378223+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.101:0/2538676989' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.378585+0000 mon.a (mon.0) 695 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.378585+0000 mon.a (mon.0) 695 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.382653+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.382653+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 192.168.123.101:0/1298365723' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.385225+0000 mon.a (mon.0) 696 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:17.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:17 vm01 bash[20728]: audit 2026-03-09T15:54:17.385225+0000 mon.a (mon.0) 696 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T15:54:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:18 vm09 bash[22983]: cluster 2026-03-09T15:54:17.095384+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v226: 132 pgs: 16 creating+peering, 50 unknown, 66 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T15:54:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:18 vm09 bash[22983]: cluster 2026-03-09T15:54:17.095384+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v226: 132 pgs: 16 creating+peering, 50 unknown, 66 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T15:54:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:18 vm09 bash[22983]: cluster 2026-03-09T15:54:18.110582+0000 mon.a (mon.0) 697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:18 vm09 bash[22983]: cluster 2026-03-09T15:54:18.110582+0000 mon.a (mon.0) 697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:18 vm09 bash[22983]: audit 2026-03-09T15:54:18.354336+0000 mon.a (mon.0) 698 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:18 vm09 bash[22983]: audit 2026-03-09T15:54:18.354336+0000 mon.a (mon.0) 698 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:18 vm09 bash[22983]: audit 2026-03-09T15:54:18.354609+0000 mon.a (mon.0) 699 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:18 vm09 bash[22983]: audit 2026-03-09T15:54:18.354609+0000 mon.a (mon.0) 699 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:18 vm09 bash[22983]: cluster 2026-03-09T15:54:18.372138+0000 mon.a (mon.0) 700 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T15:54:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:18 vm09 bash[22983]: cluster 2026-03-09T15:54:18.372138+0000 mon.a (mon.0) 700 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T15:54:18.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:18 vm01 bash[28152]: cluster 2026-03-09T15:54:17.095384+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v226: 132 pgs: 16 creating+peering, 50 unknown, 66 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T15:54:18.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:18 vm01 bash[28152]: cluster 2026-03-09T15:54:17.095384+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v226: 132 pgs: 16 creating+peering, 50 unknown, 66 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T15:54:18.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:18 vm01 bash[28152]: cluster 2026-03-09T15:54:18.110582+0000 mon.a (mon.0) 697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:18.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:18 vm01 bash[28152]: cluster 2026-03-09T15:54:18.110582+0000 mon.a (mon.0) 697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:18.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:18 vm01 bash[28152]: audit 2026-03-09T15:54:18.354336+0000 mon.a (mon.0) 698 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:18 vm01 bash[28152]: audit 2026-03-09T15:54:18.354336+0000 mon.a (mon.0) 698 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:18 vm01 bash[28152]: audit 2026-03-09T15:54:18.354609+0000 mon.a (mon.0) 699 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:18 vm01 bash[28152]: audit 2026-03-09T15:54:18.354609+0000 mon.a (mon.0) 699 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:18 vm01 bash[28152]: cluster 2026-03-09T15:54:18.372138+0000 mon.a (mon.0) 700 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T15:54:18.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:18 vm01 bash[28152]: cluster 2026-03-09T15:54:18.372138+0000 mon.a (mon.0) 700 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T15:54:18.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:18 vm01 bash[20728]: cluster 2026-03-09T15:54:17.095384+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v226: 132 pgs: 16 creating+peering, 50 unknown, 66 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T15:54:18.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:18 vm01 bash[20728]: cluster 2026-03-09T15:54:17.095384+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v226: 132 pgs: 16 creating+peering, 50 unknown, 66 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T15:54:18.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:18 vm01 bash[20728]: cluster 2026-03-09T15:54:18.110582+0000 mon.a (mon.0) 697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:18.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:18 vm01 bash[20728]: cluster 2026-03-09T15:54:18.110582+0000 mon.a (mon.0) 697 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:54:18.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:18 vm01 bash[20728]: audit 2026-03-09T15:54:18.354336+0000 mon.a (mon.0) 698 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:18 vm01 bash[20728]: audit 2026-03-09T15:54:18.354336+0000 mon.a (mon.0) 698 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:18 vm01 bash[20728]: audit 2026-03-09T15:54:18.354609+0000 mon.a (mon.0) 699 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:18 vm01 bash[20728]: audit 2026-03-09T15:54:18.354609+0000 mon.a (mon.0) 699 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T15:54:18.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:18 vm01 bash[20728]: cluster 2026-03-09T15:54:18.372138+0000 mon.a (mon.0) 700 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T15:54:18.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:18 vm01 bash[20728]: cluster 2026-03-09T15:54:18.372138+0000 mon.a (mon.0) 700 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T15:54:18.930 INFO:journalctl@ceph.rgw.foo.a.vm01.stdout:Mar 09 15:54:18 vm01 bash[53549]: debug 2026-03-09T15:54:18.451+0000 7ffb387a0980 -1 LDAP not started since no server URIs were provided in the configuration. 2026-03-09T15:54:19.596 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.109 --placement '1;vm09=iscsi.a' 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:18.690341+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:18.690341+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:18.707991+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:18.707991+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:18.732915+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:18.732915+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:18.755845+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:18.755845+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: cluster 2026-03-09T15:54:19.096119+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 16 creating+peering, 16 unknown, 100 active+clean; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 2.2 KiB/s wr, 11 op/s 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: cluster 2026-03-09T15:54:19.096119+0000 mgr.y 
(mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 16 creating+peering, 16 unknown, 100 active+clean; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 2.2 KiB/s wr, 11 op/s 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:19.150090+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:19.150090+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:19.151308+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:19.151308+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: cephadm 2026-03-09T15:54:19.154767+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: cephadm 2026-03-09T15:54:19.154767+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: cluster 2026-03-09T15:54:19.466930+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: cluster 2026-03-09T15:54:19.466930+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T15:54:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:19.575659+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:19.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:19 vm09 bash[22983]: audit 2026-03-09T15:54:19.575659+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:18.690341+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:18.690341+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:18.707991+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:18.707991+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:18.732915+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:18.732915+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:18.755845+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:18.755845+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: cluster 2026-03-09T15:54:19.096119+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 16 creating+peering, 16 unknown, 100 active+clean; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 2.2 KiB/s wr, 11 op/s 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: cluster 2026-03-09T15:54:19.096119+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 16 creating+peering, 16 unknown, 100 active+clean; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 2.2 KiB/s wr, 11 op/s 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:19.150090+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:19.150090+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:19.151308+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:19.151308+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: cephadm 2026-03-09T15:54:19.154767+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: cephadm 2026-03-09T15:54:19.154767+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: cluster 2026-03-09T15:54:19.466930+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
09 15:54:19 vm01 bash[20728]: cluster 2026-03-09T15:54:19.466930+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:19.575659+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:19 vm01 bash[20728]: audit 2026-03-09T15:54:19.575659+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:18.690341+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:18.690341+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:18.707991+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:18.707991+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:18.732915+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:18.732915+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:18.755845+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:18.755845+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: cluster 2026-03-09T15:54:19.096119+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 16 creating+peering, 16 unknown, 100 active+clean; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 2.2 KiB/s wr, 11 op/s 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: cluster 2026-03-09T15:54:19.096119+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 16 creating+peering, 16 unknown, 100 active+clean; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 2.2 KiB/s wr, 11 op/s 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:19.150090+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:19.150090+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:19.151308+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:19.151308+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: cephadm 2026-03-09T15:54:19.154767+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: cephadm 2026-03-09T15:54:19.154767+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: cluster 2026-03-09T15:54:19.466930+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: cluster 2026-03-09T15:54:19.466930+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T15:54:20.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:19.575659+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:20.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:19 vm01 bash[28152]: audit 2026-03-09T15:54:19.575659+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:20 vm09 bash[22983]: cluster 2026-03-09T15:54:19.727325+0000 mon.a (mon.0) 709 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled) 2026-03-09T15:54:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:20 vm09 bash[22983]: cluster 2026-03-09T15:54:19.727325+0000 mon.a (mon.0) 709 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled) 2026-03-09T15:54:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:20 vm09 bash[22983]: cluster 2026-03-09T15:54:19.727348+0000 mon.a (mon.0) 710 : cluster [INF] Cluster is now healthy 2026-03-09T15:54:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:20 vm09 bash[22983]: cluster 2026-03-09T15:54:19.727348+0000 mon.a (mon.0) 710 : cluster [INF] Cluster is now healthy 2026-03-09T15:54:21.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:20 vm01 bash[20728]: cluster 2026-03-09T15:54:19.727325+0000 mon.a (mon.0) 709 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled) 2026-03-09T15:54:21.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:20 vm01 bash[20728]: cluster 2026-03-09T15:54:19.727325+0000 mon.a (mon.0) 709 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled) 
2026-03-09T15:54:21.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:20 vm01 bash[20728]: cluster 2026-03-09T15:54:19.727348+0000 mon.a (mon.0) 710 : cluster [INF] Cluster is now healthy 2026-03-09T15:54:21.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:20 vm01 bash[20728]: cluster 2026-03-09T15:54:19.727348+0000 mon.a (mon.0) 710 : cluster [INF] Cluster is now healthy 2026-03-09T15:54:21.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:20 vm01 bash[28152]: cluster 2026-03-09T15:54:19.727325+0000 mon.a (mon.0) 709 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled) 2026-03-09T15:54:21.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:20 vm01 bash[28152]: cluster 2026-03-09T15:54:19.727325+0000 mon.a (mon.0) 709 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled) 2026-03-09T15:54:21.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:20 vm01 bash[28152]: cluster 2026-03-09T15:54:19.727348+0000 mon.a (mon.0) 710 : cluster [INF] Cluster is now healthy 2026-03-09T15:54:21.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:20 vm01 bash[28152]: cluster 2026-03-09T15:54:19.727348+0000 mon.a (mon.0) 710 : cluster [INF] Cluster is now healthy 2026-03-09T15:54:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:21 vm09 bash[22983]: cluster 2026-03-09T15:54:21.096623+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v231: 132 pgs: 9 creating+peering, 3 unknown, 120 active+clean; 452 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 3.2 KiB/s wr, 65 op/s 2026-03-09T15:54:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:21 vm09 bash[22983]: cluster 2026-03-09T15:54:21.096623+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v231: 132 pgs: 9 creating+peering, 3 unknown, 120 active+clean; 452 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 3.2 KiB/s wr, 65 op/s 2026-03-09T15:54:22.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:21 vm01 bash[28152]: cluster 2026-03-09T15:54:21.096623+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v231: 132 pgs: 9 creating+peering, 3 unknown, 120 active+clean; 452 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 3.2 KiB/s wr, 65 op/s 2026-03-09T15:54:22.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:21 vm01 bash[28152]: cluster 2026-03-09T15:54:21.096623+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v231: 132 pgs: 9 creating+peering, 3 unknown, 120 active+clean; 452 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 3.2 KiB/s wr, 65 op/s 2026-03-09T15:54:22.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:21 vm01 bash[20728]: cluster 2026-03-09T15:54:21.096623+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v231: 132 pgs: 9 creating+peering, 3 unknown, 120 active+clean; 452 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 3.2 KiB/s wr, 65 op/s 2026-03-09T15:54:22.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:21 vm01 bash[20728]: cluster 2026-03-09T15:54:21.096623+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v231: 132 pgs: 9 creating+peering, 3 unknown, 120 active+clean; 452 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 3.2 KiB/s wr, 65 op/s 2026-03-09T15:54:24.230 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:54:24.408 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 
7f5f2f6d2640 1 -- 192.168.123.109:0/2509507659 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5f2810a9d0 msgr2=0x7f5f2810ce60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:24.409 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 --2- 192.168.123.109:0/2509507659 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5f2810a9d0 0x7f5f2810ce60 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7f5f24009f90 tx=0x7f5f2402f390 comp rx=0 tx=0).stop 2026-03-09T15:54:24.409 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/2509507659 shutdown_connections 2026-03-09T15:54:24.409 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 --2- 192.168.123.109:0/2509507659 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5f2810a9d0 0x7f5f2810ce60 unknown :-1 s=CLOSED pgs=140 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:24.409 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 --2- 192.168.123.109:0/2509507659 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5f28107fc0 0x7f5f2810a3b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:24.409 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 --2- 192.168.123.109:0/2509507659 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5f2806bd50 0x7f5f28107a80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:24.409 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/2509507659 >> 192.168.123.109:0/2509507659 conn(0x7f5f280fd110 msgr2=0x7f5f280ff530 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:24.409 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/2509507659 shutdown_connections 2026-03-09T15:54:24.409 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/2509507659 wait complete. 
2026-03-09T15:54:24.409 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 Processor -- start 2026-03-09T15:54:24.409 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 -- start start 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5f2806bd50 0x7f5f2819c340 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5f28107fc0 0x7f5f2819c880 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5f2810a9d0 0x7f5f281a3900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f5f2810fbf0 con 0x7f5f28107fc0 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f5f2810fa70 con 0x7f5f2810a9d0 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2f6d2640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f5f2810fd70 con 0x7f5f2806bd50 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2dc48640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5f2810a9d0 0x7f5f281a3900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2dc48640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5f2810a9d0 0x7f5f281a3900 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.109:43380/0 (socket says 192.168.123.109:43380) 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2dc48640 1 -- 192.168.123.109:0/3617598402 learned_addr learned my addr 192.168.123.109:0/3617598402 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2d447640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5f2806bd50 0x7f5f2819c340 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2dc48640 1 -- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5f2806bd50 msgr2=0x7f5f2819c340 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:24.410 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.403+0000 7f5f2dc48640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5f2806bd50 0x7f5f2819c340 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f2dc48640 1 -- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5f28107fc0 msgr2=0x7f5f2819c880 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f2cc46640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5f28107fc0 0x7f5f2819c880 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:24.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f2dc48640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5f28107fc0 0x7f5f2819c880 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:24.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f2dc48640 1 -- 192.168.123.109:0/3617598402 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5f281a4000 con 0x7f5f2810a9d0 2026-03-09T15:54:24.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f2cc46640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5f28107fc0 0x7f5f2819c880 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:54:24.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f2dc48640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5f2810a9d0 0x7f5f281a3900 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f5f24005c20 tx=0x7f5f24004060 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:24.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f167fc640 1 -- 192.168.123.109:0/3617598402 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5f24047070 con 0x7f5f2810a9d0 2026-03-09T15:54:24.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/3617598402 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f5f281a4290 con 0x7f5f2810a9d0 2026-03-09T15:54:24.413 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/3617598402 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5f281a47d0 con 0x7f5f2810a9d0 2026-03-09T15:54:24.413 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f167fc640 1 -- 192.168.123.109:0/3617598402 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f5f240047b0 con 0x7f5f2810a9d0 2026-03-09T15:54:24.413 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f167fc640 1 -- 192.168.123.109:0/3617598402 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5f24007de0 con 0x7f5f2810a9d0 2026-03-09T15:54:24.413 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.407+0000 7f5f167fc640 1 -- 192.168.123.109:0/3617598402 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f5f24038420 con 0x7f5f2810a9d0 2026-03-09T15:54:24.414 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.411+0000 7f5f167fc640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5efc077690 0x7f5efc079b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:24.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.411+0000 7f5f167fc640 1 -- 192.168.123.109:0/3617598402 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(62..62 src has 1..62) ==== 7156+0+0 (secure 0 0 0) 0x7f5f24007600 con 0x7f5f2810a9d0 2026-03-09T15:54:24.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.411+0000 7f5f2d447640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5efc077690 0x7f5efc079b50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:24.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.411+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/3617598402 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5f28107a80 con 0x7f5f2810a9d0 2026-03-09T15:54:24.419 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.411+0000 7f5f2d447640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5efc077690 0x7f5efc079b50 
secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7f5f18004520 tx=0x7f5f18009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:24.419 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.415+0000 7f5f167fc640 1 -- 192.168.123.109:0/3617598402 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f5f2403d070 con 0x7f5f2810a9d0 2026-03-09T15:54:24.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:24 vm01 bash[28152]: cluster 2026-03-09T15:54:23.097224+0000 mgr.y (mgr.14150) 250 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 66 KiB/s rd, 6.0 KiB/s wr, 159 op/s 2026-03-09T15:54:24.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:24 vm01 bash[28152]: cluster 2026-03-09T15:54:23.097224+0000 mgr.y (mgr.14150) 250 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 66 KiB/s rd, 6.0 KiB/s wr, 159 op/s 2026-03-09T15:54:24.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:24 vm01 bash[20728]: cluster 2026-03-09T15:54:23.097224+0000 mgr.y (mgr.14150) 250 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 66 KiB/s rd, 6.0 KiB/s wr, 159 op/s 2026-03-09T15:54:24.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:24 vm01 bash[20728]: cluster 2026-03-09T15:54:23.097224+0000 mgr.y (mgr.14150) 250 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 66 KiB/s rd, 6.0 KiB/s wr, 159 op/s 2026-03-09T15:54:24.523 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.515+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/3617598402 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}) -- 0x7f5f280630c0 con 0x7f5efc077690 2026-03-09T15:54:24.533 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.527+0000 7f5f167fc640 1 -- 192.168.123.109:0/3617598402 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+35 (secure 0 0 0) 0x7f5f280630c0 con 0x7f5efc077690 2026-03-09T15:54:24.533 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled iscsi.datapool update... 
2026-03-09T15:54:24.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5efc077690 msgr2=0x7f5efc079b50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:24.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5efc077690 0x7f5efc079b50 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7f5f18004520 tx=0x7f5f18009290 comp rx=0 tx=0).stop 2026-03-09T15:54:24.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/3617598402 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5f2810a9d0 msgr2=0x7f5f281a3900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:24.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5f2810a9d0 0x7f5f281a3900 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f5f24005c20 tx=0x7f5f24004060 comp rx=0 tx=0).stop 2026-03-09T15:54:24.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/3617598402 shutdown_connections 2026-03-09T15:54:24.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5f2810a9d0 0x7f5f281a3900 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:24.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5f28107fc0 0x7f5f2819c880 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:24.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f5efc077690 0x7f5efc079b50 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:24.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 --2- 192.168.123.109:0/3617598402 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5f2806bd50 0x7f5f2819c340 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:24.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/3617598402 >> 192.168.123.109:0/3617598402 conn(0x7f5f280fd110 msgr2=0x7f5f28108bc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:24.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/3617598402 shutdown_connections 2026-03-09T15:54:24.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:24.531+0000 7f5f2f6d2640 1 -- 192.168.123.109:0/3617598402 wait complete. 
2026-03-09T15:54:24.549 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:24 vm09 bash[22983]: cluster 2026-03-09T15:54:23.097224+0000 mgr.y (mgr.14150) 250 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 66 KiB/s rd, 6.0 KiB/s wr, 159 op/s 2026-03-09T15:54:24.549 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:24 vm09 bash[22983]: cluster 2026-03-09T15:54:23.097224+0000 mgr.y (mgr.14150) 250 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 66 KiB/s rd, 6.0 KiB/s wr, 159 op/s 2026-03-09T15:54:24.622 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg... 2026-03-09T15:54:24.622 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:54:24.622 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T15:54:24.636 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:54:24.636 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T15:54:24.644 DEBUG:teuthology.orchestra.run.vm09:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@iscsi.iscsi.a.service 2026-03-09T15:54:24.688 INFO:tasks.cephadm:Adding prometheus.a on vm09 2026-03-09T15:54:24.688 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch apply prometheus '1;vm09=a' 2026-03-09T15:54:24.845 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:24 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.379 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.379 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.379 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:54:25.379 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.379 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.379 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.379 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.379 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.379 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.379 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.379 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.380 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.380 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.633 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.633 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: Started Ceph iscsi.iscsi.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T15:54:25.633 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.633 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:54:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.634 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 15:54:25 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.524771+0000 mgr.y (mgr.14150) 251 : audit [DBG] from='client.24367 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.524771+0000 mgr.y (mgr.14150) 251 : audit [DBG] from='client.24367 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: cephadm 2026-03-09T15:54:24.526524+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T15:54:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: cephadm 2026-03-09T15:54:24.526524+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T15:54:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.532663+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.532663+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.534063+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.534063+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.535573+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T15:54:25.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.535573+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.536388+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.536388+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.543317+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.543317+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.546700+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.546700+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.549265+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.549265+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.557088+0000 mon.a (mon.0) 718 : audit [DBG] 
from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: audit 2026-03-09T15:54:24.557088+0000 mon.a (mon.0) 718 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: cephadm 2026-03-09T15:54:24.558066+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm09 2026-03-09T15:54:25.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:25 vm09 bash[22983]: cephadm 2026-03-09T15:54:24.558066+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm09 2026-03-09T15:54:26.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.524771+0000 mgr.y (mgr.14150) 251 : audit [DBG] from='client.24367 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:26.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.524771+0000 mgr.y (mgr.14150) 251 : audit [DBG] from='client.24367 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:26.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: cephadm 2026-03-09T15:54:24.526524+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T15:54:26.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: cephadm 2026-03-09T15:54:24.526524+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.532663+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.532663+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.534063+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.534063+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.535573+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.535573+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.536388+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.536388+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.543317+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.543317+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.546700+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.546700+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.549265+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.549265+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.557088+0000 mon.a (mon.0) 718 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: audit 2026-03-09T15:54:24.557088+0000 mon.a (mon.0) 718 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: cephadm 2026-03-09T15:54:24.558066+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm09 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:25 vm01 bash[20728]: cephadm 2026-03-09T15:54:24.558066+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm09 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.524771+0000 mgr.y (mgr.14150) 251 : audit [DBG] from='client.24367 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.524771+0000 mgr.y (mgr.14150) 251 : audit [DBG] from='client.24367 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: cephadm 2026-03-09T15:54:24.526524+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: cephadm 2026-03-09T15:54:24.526524+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.532663+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.532663+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.534063+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.534063+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.535573+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.535573+0000 mon.a (mon.0) 713 : audit [DBG] 
from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.536388+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.536388+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.543317+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.543317+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.546700+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.546700+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.549265+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.549265+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: audit 2026-03-09T15:54:24.557088+0000 mon.a (mon.0) 718 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 
bash[28152]: audit 2026-03-09T15:54:24.557088+0000 mon.a (mon.0) 718 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: cephadm 2026-03-09T15:54:24.558066+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm09 2026-03-09T15:54:26.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:25 vm01 bash[28152]: cephadm 2026-03-09T15:54:24.558066+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm09 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: debug Started the configuration object watcher 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: debug Checking for config object changes every 1s 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: debug Processing osd blocklist entries for this node 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: debug Reading the configuration object to update local LIO configuration 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: debug Configuration does not have an entry for this host(vm09.local) - nothing to define to LIO 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: * Serving Flask app 'rbd-target-api' (lazy loading) 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: * Environment: production 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: Use a production WSGI server instead. 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: * Debug mode: off 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: debug * Running on all addresses. 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: * Running on all addresses. 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: WARNING: This is a development server. Do not use it in a production deployment. 
2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-09T15:54:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:26 vm09 bash[48403]: * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: cluster 2026-03-09T15:54:25.097893+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v233: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 4.7 KiB/s wr, 147 op/s 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: cluster 2026-03-09T15:54:25.097893+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v233: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 4.7 KiB/s wr, 147 op/s 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:25.753441+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:25.753441+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:25.762664+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:25.762664+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:25.773913+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:25.773913+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: cephadm 2026-03-09T15:54:25.775361+0000 mgr.y (mgr.14150) 255 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: cephadm 2026-03-09T15:54:25.775361+0000 mgr.y (mgr.14150) 255 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:25.790820+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:25.790820+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:25.803371+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:27.133 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:25.803371+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:26.289234+0000 mon.a (mon.0) 724 : audit [DBG] from='client.? 192.168.123.109:0/1334451544' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T15:54:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:26.289234+0000 mon.a (mon.0) 724 : audit [DBG] from='client.? 192.168.123.109:0/1334451544' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T15:54:27.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:26.614663+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:26 vm09 bash[22983]: audit 2026-03-09T15:54:26.614663+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: cluster 2026-03-09T15:54:25.097893+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v233: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 4.7 KiB/s wr, 147 op/s 2026-03-09T15:54:27.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: cluster 2026-03-09T15:54:25.097893+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v233: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 4.7 KiB/s wr, 147 op/s 2026-03-09T15:54:27.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:25.753441+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:25.753441+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:25.762664+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:25.762664+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:25.773913+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:25.773913+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: cephadm 2026-03-09T15:54:25.775361+0000 mgr.y (mgr.14150) 255 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T15:54:27.180 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: cephadm 2026-03-09T15:54:25.775361+0000 mgr.y (mgr.14150) 255 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:25.790820+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:25.790820+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:25.803371+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:25.803371+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:26.289234+0000 mon.a (mon.0) 724 : audit [DBG] from='client.? 192.168.123.109:0/1334451544' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:26.289234+0000 mon.a (mon.0) 724 : audit [DBG] from='client.? 192.168.123.109:0/1334451544' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:26.614663+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:26 vm01 bash[20728]: audit 2026-03-09T15:54:26.614663+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: cluster 2026-03-09T15:54:25.097893+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v233: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 4.7 KiB/s wr, 147 op/s 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: cluster 2026-03-09T15:54:25.097893+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v233: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 4.7 KiB/s wr, 147 op/s 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:25.753441+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:25.753441+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:25.762664+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:25.762664+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:25.773913+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:25.773913+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: cephadm 2026-03-09T15:54:25.775361+0000 mgr.y (mgr.14150) 255 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: cephadm 2026-03-09T15:54:25.775361+0000 mgr.y (mgr.14150) 255 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:25.790820+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:25.790820+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:25.803371+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:25.803371+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:26.289234+0000 mon.a (mon.0) 724 : audit [DBG] from='client.? 192.168.123.109:0/1334451544' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:26.289234+0000 mon.a (mon.0) 724 : audit [DBG] from='client.? 
192.168.123.109:0/1334451544' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:26.614663+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:26 vm01 bash[28152]: audit 2026-03-09T15:54:26.614663+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:27.974 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:27 vm09 bash[22983]: cluster 2026-03-09T15:54:27.098651+0000 mgr.y (mgr.14150) 256 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s rd, 4.1 KiB/s wr, 129 op/s 2026-03-09T15:54:27.974 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:27 vm09 bash[22983]: cluster 2026-03-09T15:54:27.098651+0000 mgr.y (mgr.14150) 256 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s rd, 4.1 KiB/s wr, 129 op/s 2026-03-09T15:54:28.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:27 vm01 bash[20728]: cluster 2026-03-09T15:54:27.098651+0000 mgr.y (mgr.14150) 256 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s rd, 4.1 KiB/s wr, 129 op/s 2026-03-09T15:54:28.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:27 vm01 bash[20728]: cluster 2026-03-09T15:54:27.098651+0000 mgr.y (mgr.14150) 256 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s rd, 4.1 KiB/s wr, 129 op/s 2026-03-09T15:54:28.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:27 vm01 bash[28152]: cluster 2026-03-09T15:54:27.098651+0000 mgr.y (mgr.14150) 256 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s rd, 4.1 KiB/s wr, 129 op/s 2026-03-09T15:54:28.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:27 vm01 bash[28152]: cluster 2026-03-09T15:54:27.098651+0000 mgr.y (mgr.14150) 256 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s rd, 4.1 KiB/s wr, 129 op/s 2026-03-09T15:54:29.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:28 vm09 bash[22983]: cluster 2026-03-09T15:54:27.730456+0000 mon.a (mon.0) 726 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-09T15:54:29.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:28 vm09 bash[22983]: cluster 2026-03-09T15:54:27.730456+0000 mon.a (mon.0) 726 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-09T15:54:29.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:28 vm09 bash[22983]: cluster 2026-03-09T15:54:28.362472+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T15:54:29.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:28 vm09 bash[22983]: cluster 2026-03-09T15:54:28.362472+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T15:54:29.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:28 vm01 bash[20728]: cluster 2026-03-09T15:54:27.730456+0000 mon.a (mon.0) 726 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-09T15:54:29.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:28 vm01 bash[20728]: cluster 
2026-03-09T15:54:27.730456+0000 mon.a (mon.0) 726 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-09T15:54:29.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:28 vm01 bash[20728]: cluster 2026-03-09T15:54:28.362472+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T15:54:29.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:28 vm01 bash[20728]: cluster 2026-03-09T15:54:28.362472+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T15:54:29.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:28 vm01 bash[28152]: cluster 2026-03-09T15:54:27.730456+0000 mon.a (mon.0) 726 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-09T15:54:29.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:28 vm01 bash[28152]: cluster 2026-03-09T15:54:27.730456+0000 mon.a (mon.0) 726 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-09T15:54:29.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:28 vm01 bash[28152]: cluster 2026-03-09T15:54:28.362472+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T15:54:29.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:28 vm01 bash[28152]: cluster 2026-03-09T15:54:28.362472+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T15:54:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:29 vm09 bash[22983]: cluster 2026-03-09T15:54:29.099118+0000 mgr.y (mgr.14150) 257 : cluster [DBG] pgmap v236: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 50 KiB/s rd, 3.2 KiB/s wr, 116 op/s 2026-03-09T15:54:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:29 vm09 bash[22983]: cluster 2026-03-09T15:54:29.099118+0000 mgr.y (mgr.14150) 257 : cluster [DBG] pgmap v236: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 50 KiB/s rd, 3.2 KiB/s wr, 116 op/s 2026-03-09T15:54:30.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:29 vm01 bash[20728]: cluster 2026-03-09T15:54:29.099118+0000 mgr.y (mgr.14150) 257 : cluster [DBG] pgmap v236: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 50 KiB/s rd, 3.2 KiB/s wr, 116 op/s 2026-03-09T15:54:30.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:29 vm01 bash[20728]: cluster 2026-03-09T15:54:29.099118+0000 mgr.y (mgr.14150) 257 : cluster [DBG] pgmap v236: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 50 KiB/s rd, 3.2 KiB/s wr, 116 op/s 2026-03-09T15:54:30.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:29 vm01 bash[28152]: cluster 2026-03-09T15:54:29.099118+0000 mgr.y (mgr.14150) 257 : cluster [DBG] pgmap v236: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 50 KiB/s rd, 3.2 KiB/s wr, 116 op/s 2026-03-09T15:54:30.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:29 vm01 bash[28152]: cluster 2026-03-09T15:54:29.099118+0000 mgr.y (mgr.14150) 257 : cluster [DBG] pgmap v236: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 50 KiB/s rd, 3.2 KiB/s wr, 116 op/s 2026-03-09T15:54:30.409 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:54:30.669 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.663+0000 7f763d80a640 1 -- 192.168.123.109:0/296879416 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763810a470 
msgr2=0x7f763810a8d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:30.670 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.663+0000 7f763d80a640 1 --2- 192.168.123.109:0/296879416 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763810a470 0x7f763810a8d0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f763000b0a0 tx=0x7f763002f450 comp rx=0 tx=0).stop 2026-03-09T15:54:30.674 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.667+0000 7f763d80a640 1 -- 192.168.123.109:0/296879416 shutdown_connections 2026-03-09T15:54:30.674 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.667+0000 7f763d80a640 1 --2- 192.168.123.109:0/296879416 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763810ae10 0x7f76381116e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:30.674 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.667+0000 7f763d80a640 1 --2- 192.168.123.109:0/296879416 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763810a470 0x7f763810a8d0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:30.674 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.667+0000 7f763d80a640 1 --2- 192.168.123.109:0/296879416 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7638073f20 0x7f7638074320 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:30.674 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.667+0000 7f763d80a640 1 -- 192.168.123.109:0/296879416 >> 192.168.123.109:0/296879416 conn(0x7f763806f820 msgr2=0x7f7638071c60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:30.674 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.667+0000 7f763d80a640 1 -- 192.168.123.109:0/296879416 shutdown_connections 2026-03-09T15:54:30.675 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.667+0000 7f763d80a640 1 -- 192.168.123.109:0/296879416 wait complete. 
2026-03-09T15:54:30.675 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d80a640 1 Processor -- start 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d80a640 1 -- start start 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d80a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7638073f20 0x7f76381a8fe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d80a640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763810a470 0x7f76381a9520 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d80a640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763810ae10 0x7f76381b05a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d80a640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f763811ca10 con 0x7f7638073f20 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d80a640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f763811c890 con 0x7f763810ae10 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d80a640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f763811cb90 con 0x7f763810a470 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763c808640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7638073f20 0x7f76381a8fe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763c808640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7638073f20 0x7f76381a8fe0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.109:48290/0 (socket says 192.168.123.109:48290) 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763c808640 1 -- 192.168.123.109:0/3968800664 learned_addr learned my addr 192.168.123.109:0/3968800664 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f7637fff640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763810a470 0x7f76381a9520 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d009640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763810ae10 0x7f76381b05a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763c808640 1 -- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763810a470 msgr2=0x7f76381a9520 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763c808640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763810a470 0x7f76381a9520 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763c808640 1 -- 192.168.123.109:0/3968800664 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763810ae10 msgr2=0x7f76381b05a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763c808640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763810ae10 0x7f76381b05a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763c808640 1 -- 192.168.123.109:0/3968800664 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7638111e90 con 0x7f7638073f20 2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d009640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763810ae10 0x7f76381b05a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:54:30.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763c808640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7638073f20 0x7f76381a8fe0 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7f762800d950 tx=0x7f762800de20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:30.678 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f7635ffb640 1 -- 192.168.123.109:0/3968800664 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7628014070 con 0x7f7638073f20 2026-03-09T15:54:30.679 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d80a640 1 -- 192.168.123.109:0/3968800664 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f7638112180 con 0x7f7638073f20 2026-03-09T15:54:30.679 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f763d80a640 1 -- 192.168.123.109:0/3968800664 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f76381126c0 con 0x7f7638073f20 2026-03-09T15:54:30.679 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f7635ffb640 1 -- 192.168.123.109:0/3968800664 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f76280044e0 con 0x7f7638073f20 2026-03-09T15:54:30.679 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.671+0000 7f7635ffb640 1 -- 192.168.123.109:0/3968800664 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7628002d40 con 0x7f7638073f20 2026-03-09T15:54:30.681 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.675+0000 7f763d80a640 1 -- 192.168.123.109:0/3968800664 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7638074320 con 0x7f7638073f20 2026-03-09T15:54:30.681 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.675+0000 7f7635ffb640 1 -- 192.168.123.109:0/3968800664 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 15) ==== 100086+0+0 (secure 0 0 0) 0x7f7628021020 con 0x7f7638073f20 2026-03-09T15:54:30.681 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.675+0000 7f7635ffb640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f760c077720 0x7f760c079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:30.682 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.675+0000 7f7637fff640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f760c077720 0x7f760c079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:30.682 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.675+0000 7f7635ffb640 1 -- 192.168.123.109:0/3968800664 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(63..63 src has 1..63) ==== 7128+0+0 (secure 0 0 0) 0x7f762805e6c0 con 0x7f7638073f20 2026-03-09T15:54:30.683 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.679+0000 7f7637fff640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f760c077720 0x7f760c079be0 
secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f7630009fd0 tx=0x7f763003a040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:30.685 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.679+0000 7f7635ffb640 1 -- 192.168.123.109:0/3968800664 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f7628066620 con 0x7f7638073f20 2026-03-09T15:54:30.813 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.807+0000 7f763d80a640 1 -- 192.168.123.109:0/3968800664 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}) -- 0x7f76380630c0 con 0x7f760c077720 2026-03-09T15:54:30.823 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.819+0000 7f7635ffb640 1 -- 192.168.123.109:0/3968800664 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+31 (secure 0 0 0) 0x7f76380630c0 con 0x7f760c077720 2026-03-09T15:54:30.823 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled prometheus update... 2026-03-09T15:54:30.826 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.819+0000 7f763d80a640 1 -- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f760c077720 msgr2=0x7f760c079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:30.826 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.819+0000 7f763d80a640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f760c077720 0x7f760c079be0 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f7630009fd0 tx=0x7f763003a040 comp rx=0 tx=0).stop 2026-03-09T15:54:30.826 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.819+0000 7f763d80a640 1 -- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7638073f20 msgr2=0x7f76381a8fe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:30.827 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.819+0000 7f763d80a640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7638073f20 0x7f76381a8fe0 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7f762800d950 tx=0x7f762800de20 comp rx=0 tx=0).stop 2026-03-09T15:54:30.827 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.823+0000 7f763d80a640 1 -- 192.168.123.109:0/3968800664 shutdown_connections 2026-03-09T15:54:30.827 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.823+0000 7f763d80a640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763810ae10 0x7f76381b05a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:30.827 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.823+0000 7f763d80a640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f760c077720 0x7f760c079be0 unknown :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:30.827 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.823+0000 7f763d80a640 1 --2- 192.168.123.109:0/3968800664 >> 
[v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763810a470 0x7f76381a9520 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:30.827 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.823+0000 7f763d80a640 1 --2- 192.168.123.109:0/3968800664 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7638073f20 0x7f76381a8fe0 unknown :-1 s=CLOSED pgs=148 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:30.827 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.823+0000 7f763d80a640 1 -- 192.168.123.109:0/3968800664 >> 192.168.123.109:0/3968800664 conn(0x7f763806f820 msgr2=0x7f7638072300 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:30.827 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.823+0000 7f763d80a640 1 -- 192.168.123.109:0/3968800664 shutdown_connections 2026-03-09T15:54:30.827 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:30.823+0000 7f763d80a640 1 -- 192.168.123.109:0/3968800664 wait complete. 2026-03-09T15:54:30.887 DEBUG:teuthology.orchestra.run.vm09:prometheus.a> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@prometheus.a.service 2026-03-09T15:54:30.888 INFO:tasks.cephadm:Adding node-exporter.a on vm01 2026-03-09T15:54:30.888 INFO:tasks.cephadm:Adding node-exporter.b on vm09 2026-03-09T15:54:30.888 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch apply node-exporter '2;vm01=a;vm09=b' 2026-03-09T15:54:31.299 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:31 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.790449+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.790449+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.797787+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.797787+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.799410+0000 mon.a (mon.0) 730 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.799410+0000 mon.a (mon.0) 730 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.800457+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.800457+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.805663+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.805663+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.814866+0000 mgr.y (mgr.14150) 258 : audit [DBG] from='client.14505 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.814866+0000 mgr.y (mgr.14150) 258 : audit [DBG] from='client.14505 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: cephadm 2026-03-09T15:54:30.815796+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: cephadm 
2026-03-09T15:54:30.815796+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.820720+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.820720+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.824163+0000 mon.a (mon.0) 734 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.824163+0000 mon.a (mon.0) 734 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.824614+0000 mgr.y (mgr.14150) 260 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.824614+0000 mgr.y (mgr.14150) 260 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: cephadm 2026-03-09T15:54:30.825480+0000 mgr.y (mgr.14150) 261 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.109:5000 to Dashboard 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: cephadm 2026-03-09T15:54:30.825480+0000 mgr.y (mgr.14150) 261 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.109:5000 to Dashboard 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.825641+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.825641+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.825862+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.825862+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.831317+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.831317+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.837878+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.837878+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.838153+0000 mgr.y (mgr.14150) 263 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.838153+0000 mgr.y (mgr.14150) 263 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.841620+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.841620+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.843005+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.843005+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.843902+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.843902+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.844387+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.844387+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.851654+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: audit 2026-03-09T15:54:30.851654+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: cephadm 2026-03-09T15:54:31.015131+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm09 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: cephadm 2026-03-09T15:54:31.015131+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm09 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: cluster 2026-03-09T15:54:31.099765+0000 mgr.y (mgr.14150) 265 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 85 op/s 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:31 vm01 bash[20728]: cluster 2026-03-09T15:54:31.099765+0000 mgr.y (mgr.14150) 265 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 85 op/s 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.790449+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.790449+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.797787+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.797787+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.799410+0000 mon.a (mon.0) 730 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.799410+0000 mon.a (mon.0) 730 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.800457+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.800457+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.805663+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.805663+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.814866+0000 mgr.y (mgr.14150) 258 : audit [DBG] from='client.14505 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.814866+0000 mgr.y (mgr.14150) 258 : audit [DBG] from='client.14505 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: cephadm 2026-03-09T15:54:30.815796+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: cephadm 2026-03-09T15:54:30.815796+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.820720+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.820720+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.824163+0000 mon.a (mon.0) 734 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.824163+0000 mon.a (mon.0) 734 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.824614+0000 mgr.y (mgr.14150) 260 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.824614+0000 mgr.y (mgr.14150) 260 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: cephadm 2026-03-09T15:54:30.825480+0000 mgr.y (mgr.14150) 261 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.109:5000 to Dashboard 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: cephadm 2026-03-09T15:54:30.825480+0000 mgr.y (mgr.14150) 261 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.109:5000 to Dashboard 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.825641+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.825641+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.825862+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.825862+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.831317+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.831317+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.837878+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.837878+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.838153+0000 mgr.y (mgr.14150) 263 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.838153+0000 mgr.y (mgr.14150) 263 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.841620+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.841620+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.843005+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.843005+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.843902+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.843902+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.844387+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.844387+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.851654+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: audit 2026-03-09T15:54:30.851654+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: cephadm 2026-03-09T15:54:31.015131+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm09 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: cephadm 2026-03-09T15:54:31.015131+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm09 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: cluster 2026-03-09T15:54:31.099765+0000 mgr.y (mgr.14150) 265 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 85 op/s 2026-03-09T15:54:32.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:31 vm01 bash[28152]: cluster 
2026-03-09T15:54:31.099765+0000 mgr.y (mgr.14150) 265 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 85 op/s 2026-03-09T15:54:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.790449+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.790449+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.797787+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.797787+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.799410+0000 mon.a (mon.0) 730 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.799410+0000 mon.a (mon.0) 730 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.800457+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.800457+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.805663+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.805663+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.814866+0000 mgr.y (mgr.14150) 258 : audit [DBG] from='client.14505 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.814866+0000 mgr.y (mgr.14150) 258 : audit [DBG] from='client.14505 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: cephadm 2026-03-09T15:54:30.815796+0000 mgr.y (mgr.14150) 259 : 
cephadm [INF] Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: cephadm 2026-03-09T15:54:30.815796+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.820720+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.820720+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.824163+0000 mon.a (mon.0) 734 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.824163+0000 mon.a (mon.0) 734 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.824614+0000 mgr.y (mgr.14150) 260 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.824614+0000 mgr.y (mgr.14150) 260 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: cephadm 2026-03-09T15:54:30.825480+0000 mgr.y (mgr.14150) 261 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.109:5000 to Dashboard 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: cephadm 2026-03-09T15:54:30.825480+0000 mgr.y (mgr.14150) 261 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.109:5000 to Dashboard 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.825641+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.825641+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.825862+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.825862+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.831317+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.831317+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.837878+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.837878+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.838153+0000 mgr.y (mgr.14150) 263 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.838153+0000 mgr.y (mgr.14150) 263 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.841620+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.841620+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.843005+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.843005+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.843902+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.843902+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.844387+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.844387+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.851654+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: audit 2026-03-09T15:54:30.851654+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: cephadm 2026-03-09T15:54:31.015131+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm09 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: cephadm 2026-03-09T15:54:31.015131+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm09 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: cluster 2026-03-09T15:54:31.099765+0000 mgr.y (mgr.14150) 265 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 85 op/s 2026-03-09T15:54:32.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:31 vm09 bash[22983]: cluster 2026-03-09T15:54:31.099765+0000 mgr.y (mgr.14150) 265 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 85 op/s 2026-03-09T15:54:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:34 vm09 bash[22983]: cluster 2026-03-09T15:54:33.100350+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 9.1 KiB/s rd, 102 B/s wr, 19 op/s 2026-03-09T15:54:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:34 vm09 bash[22983]: cluster 2026-03-09T15:54:33.100350+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 9.1 KiB/s rd, 102 B/s wr, 19 op/s 2026-03-09T15:54:34.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:34 vm01 bash[28152]: cluster 2026-03-09T15:54:33.100350+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 9.1 KiB/s rd, 102 B/s wr, 19 op/s 2026-03-09T15:54:34.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:34 vm01 bash[28152]: cluster 2026-03-09T15:54:33.100350+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 9.1 KiB/s rd, 102 B/s wr, 19 op/s 2026-03-09T15:54:34.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:34 vm01 bash[20728]: cluster 2026-03-09T15:54:33.100350+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 9.1 KiB/s rd, 102 B/s wr, 19 op/s 2026-03-09T15:54:34.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:34 vm01 bash[20728]: cluster 2026-03-09T15:54:33.100350+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB 
used, 160 GiB / 160 GiB avail; 9.1 KiB/s rd, 102 B/s wr, 19 op/s 2026-03-09T15:54:35.578 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:54:36.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:54:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:36 vm09 bash[22983]: cluster 2026-03-09T15:54:35.100820+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:36 vm09 bash[22983]: cluster 2026-03-09T15:54:35.100820+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:36.441 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 -- 192.168.123.109:0/2175058148 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa0dc0a4b70 msgr2=0x7fa0dc0a4f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:36.441 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 --2- 192.168.123.109:0/2175058148 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa0dc0a4b70 0x7fa0dc0a4f70 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7fa0e4068d10 tx=0x7fa0e40a4bd0 comp rx=0 tx=0).stop 2026-03-09T15:54:36.441 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 -- 192.168.123.109:0/2175058148 shutdown_connections 2026-03-09T15:54:36.441 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 --2- 192.168.123.109:0/2175058148 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa0dc0a6600 0x7fa0dc0aaec0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:36.441 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 --2- 192.168.123.109:0/2175058148 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa0dc0a5c60 0x7fa0dc0a60c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:36.441 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 --2- 192.168.123.109:0/2175058148 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa0dc0a4b70 0x7fa0dc0a4f70 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:36.441 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 -- 192.168.123.109:0/2175058148 >> 192.168.123.109:0/2175058148 conn(0x7fa0dc0a01c0 msgr2=0x7fa0dc0a2620 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:36.442 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 -- 192.168.123.109:0/2175058148 shutdown_connections 2026-03-09T15:54:36.443 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 -- 192.168.123.109:0/2175058148 wait complete. 
2026-03-09T15:54:36.443 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 Processor -- start 2026-03-09T15:54:36.443 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 -- start start 2026-03-09T15:54:36.443 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa0dc0a4b70 0x7fa0dc0b6c70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:36.443 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa0dc0a5c60 0x7fa0dc0b52c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:36.443 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa0dc0a6600 0x7fa0dc0b5820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fa0dc0ad9f0 con 0x7fa0dc0a4b70 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7fa0dc0ad870 con 0x7fa0dc0a6600 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e9a4d640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7fa0dc0adb70 con 0x7fa0dc0a5c60 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e924c640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa0dc0a6600 0x7fa0dc0b5820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e924c640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa0dc0a6600 0x7fa0dc0b5820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.109:48080/0 (socket says 192.168.123.109:48080) 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e924c640 1 -- 192.168.123.109:0/2710226968 learned_addr learned my addr 192.168.123.109:0/2710226968 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e3fff640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa0dc0a5c60 0x7fa0dc0b52c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.435+0000 7fa0e8a4b640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa0dc0a4b70 0x7fa0dc0b6c70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.439+0000 7fa0e924c640 1 -- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa0dc0a5c60 msgr2=0x7fa0dc0b52c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.439+0000 7fa0e924c640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa0dc0a5c60 0x7fa0dc0b52c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.439+0000 7fa0e924c640 1 -- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa0dc0a4b70 msgr2=0x7fa0dc0b6c70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.439+0000 7fa0e924c640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa0dc0a4b70 0x7fa0dc0b6c70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.439+0000 7fa0e924c640 1 -- 192.168.123.109:0/2710226968 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa0dc0b60e0 con 0x7fa0dc0a6600 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.439+0000 7fa0e924c640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa0dc0a6600 0x7fa0dc0b5820 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7fa0d800c970 tx=0x7fa0d800ce40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:36.444 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.439+0000 7fa0e1ffb640 1 -- 192.168.123.109:0/2710226968 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa0d8007bf0 con 0x7fa0dc0a6600 2026-03-09T15:54:36.451 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.439+0000 7fa0e9a4d640 1 -- 192.168.123.109:0/2710226968 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa0dc00baf0 con 0x7fa0dc0a6600 2026-03-09T15:54:36.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.439+0000 7fa0e9a4d640 1 -- 192.168.123.109:0/2710226968 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fa0dc00c0b0 con 0x7fa0dc0a6600 2026-03-09T15:54:36.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.439+0000 7fa0b77fe640 1 -- 192.168.123.109:0/2710226968 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa0a8005180 con 0x7fa0dc0a6600 2026-03-09T15:54:36.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.443+0000 7fa0e1ffb640 1 -- 192.168.123.109:0/2710226968 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fa0d8007d90 con 0x7fa0dc0a6600 2026-03-09T15:54:36.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.443+0000 7fa0e1ffb640 1 -- 192.168.123.109:0/2710226968 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa0d8005730 con 
0x7fa0dc0a6600 2026-03-09T15:54:36.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.443+0000 7fa0e1ffb640 1 -- 192.168.123.109:0/2710226968 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 15) ==== 100086+0+0 (secure 0 0 0) 0x7fa0d8020020 con 0x7fa0dc0a6600 2026-03-09T15:54:36.467 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.447+0000 7fa0e1ffb640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fa0b80777f0 0x7fa0b8079cb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:36.467 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.447+0000 7fa0e1ffb640 1 -- 192.168.123.109:0/2710226968 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(63..63 src has 1..63) ==== 7128+0+0 (secure 0 0 0) 0x7fa0d809ac00 con 0x7fa0dc0a6600 2026-03-09T15:54:36.467 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.447+0000 7fa0e8a4b640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fa0b80777f0 0x7fa0b8079cb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:36.467 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.447+0000 7fa0e1ffb640 1 -- 192.168.123.109:0/2710226968 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa0d8014030 con 0x7fa0dc0a6600 2026-03-09T15:54:36.467 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.447+0000 7fa0e8a4b640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fa0b80777f0 0x7fa0b8079cb0 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7fa0e4068b80 tx=0x7fa0e4050ff0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:36.608 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.603+0000 7fa0b77fe640 1 -- 192.168.123.109:0/2710226968 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm01=a;vm09=b", "target": ["mon-mgr", ""]}) -- 0x7fa0a8002bf0 con 0x7fa0b80777f0 2026-03-09T15:54:36.616 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.611+0000 7fa0e1ffb640 1 -- 192.168.123.109:0/2710226968 <== mgr.14150 v2:192.168.123.101:6800/1421049061 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+34 (secure 0 0 0) 0x7fa0a8002bf0 con 0x7fa0b80777f0 2026-03-09T15:54:36.616 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled node-exporter update... 
2026-03-09T15:54:36.619 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 -- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fa0b80777f0 msgr2=0x7fa0b8079cb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:36.619 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fa0b80777f0 0x7fa0b8079cb0 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7fa0e4068b80 tx=0x7fa0e4050ff0 comp rx=0 tx=0).stop 2026-03-09T15:54:36.619 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 -- 192.168.123.109:0/2710226968 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa0dc0a6600 msgr2=0x7fa0dc0b5820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:36.619 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa0dc0a6600 0x7fa0dc0b5820 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7fa0d800c970 tx=0x7fa0d800ce40 comp rx=0 tx=0).stop 2026-03-09T15:54:36.620 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 -- 192.168.123.109:0/2710226968 shutdown_connections 2026-03-09T15:54:36.620 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa0dc0a6600 0x7fa0dc0b5820 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:36.620 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7fa0b80777f0 0x7fa0b8079cb0 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:36.620 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa0dc0a5c60 0x7fa0dc0b52c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:36.620 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 --2- 192.168.123.109:0/2710226968 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa0dc0a4b70 0x7fa0dc0b6c70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:36.620 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 -- 192.168.123.109:0/2710226968 >> 192.168.123.109:0/2710226968 conn(0x7fa0dc0a01c0 msgr2=0x7fa0dc0a19c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:36.620 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 -- 192.168.123.109:0/2710226968 shutdown_connections 2026-03-09T15:54:36.620 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:36.615+0000 7fa0b77fe640 1 -- 192.168.123.109:0/2710226968 wait complete. 
2026-03-09T15:54:36.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:36 vm01 bash[28152]: cluster 2026-03-09T15:54:35.100820+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:36.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:36 vm01 bash[28152]: cluster 2026-03-09T15:54:35.100820+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:36.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:36 vm01 bash[20728]: cluster 2026-03-09T15:54:35.100820+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:36.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:36 vm01 bash[20728]: cluster 2026-03-09T15:54:35.100820+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:36.834 DEBUG:teuthology.orchestra.run.vm01:node-exporter.a> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@node-exporter.a.service 2026-03-09T15:54:36.836 DEBUG:teuthology.orchestra.run.vm09:node-exporter.b> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@node-exporter.b.service 2026-03-09T15:54:36.837 INFO:tasks.cephadm:Adding alertmanager.a on vm01 2026-03-09T15:54:36.837 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch apply alertmanager '1;vm01=a' 2026-03-09T15:54:37.326 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:37 vm09 bash[22983]: audit 2026-03-09T15:54:36.126416+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:37.327 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:37 vm09 bash[22983]: audit 2026-03-09T15:54:36.126416+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:37.327 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:37 vm09 bash[22983]: audit 2026-03-09T15:54:36.616908+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:37.327 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:37 vm09 bash[22983]: audit 2026-03-09T15:54:36.616908+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:37.624 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:54:37.624 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.624 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.624 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.625 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.625 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.625 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.625 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:54:37.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:37 vm01 bash[28152]: audit 2026-03-09T15:54:36.126416+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:37.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:37 vm01 bash[28152]: audit 2026-03-09T15:54:36.126416+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:37.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:37 vm01 bash[28152]: audit 2026-03-09T15:54:36.616908+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:37.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:37 vm01 bash[28152]: audit 2026-03-09T15:54:36.616908+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:37.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:37 vm01 bash[20728]: audit 2026-03-09T15:54:36.126416+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:37.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:37 vm01 bash[20728]: audit 2026-03-09T15:54:36.126416+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:37.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:37 vm01 bash[20728]: audit 2026-03-09T15:54:36.616908+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:37.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:37 vm01 bash[20728]: audit 2026-03-09T15:54:36.616908+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:37.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.884 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.884 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:54:37.884 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.884 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.884 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.884 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.884 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.884 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.884 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.884 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:37.884 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 systemd[1]: Started Ceph prometheus.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.987Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.987Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.987Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm09 (none))" 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.987Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.987Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.990Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.990Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.992Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.992Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.082µs 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.992Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.993Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.993Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.993Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.993Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=41.667µs wal_replay_duration=996.276µs wbl_replay_duration=261ns total_replay_duration=1.053242ms 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.996Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.996Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:37 vm09 bash[49361]: ts=2026-03-09T15:54:37.996Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T15:54:38.281 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:38 vm09 bash[49361]: ts=2026-03-09T15:54:38.011Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=15.454701ms db_storage=1.233µs remote_storage=1.282µs web_handler=430ns query_engine=1.112µs scrape=1.583635ms scrape_sd=138.69µs notify=951ns notify_sd=811ns rules=13.110152ms tracing=6.283µs 2026-03-09T15:54:38.282 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:38 vm09 bash[49361]: ts=2026-03-09T15:54:38.011Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T15:54:38.282 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:38 vm09 bash[49361]: ts=2026-03-09T15:54:38.011Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
2026-03-09T15:54:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: audit 2026-03-09T15:54:36.610508+0000 mgr.y (mgr.14150) 269 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm01=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: audit 2026-03-09T15:54:36.610508+0000 mgr.y (mgr.14150) 269 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm01=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: cephadm 2026-03-09T15:54:36.611547+0000 mgr.y (mgr.14150) 270 : cephadm [INF] Saving service node-exporter spec with placement vm01=a;vm09=b;count:2 2026-03-09T15:54:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: cephadm 2026-03-09T15:54:36.611547+0000 mgr.y (mgr.14150) 270 : cephadm [INF] Saving service node-exporter spec with placement vm01=a;vm09=b;count:2 2026-03-09T15:54:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: cluster 2026-03-09T15:54:37.101326+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: cluster 2026-03-09T15:54:37.101326+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: audit 2026-03-09T15:54:37.895460+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: audit 2026-03-09T15:54:37.895460+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: audit 2026-03-09T15:54:37.902909+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: audit 2026-03-09T15:54:37.902909+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: audit 2026-03-09T15:54:37.911880+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: audit 2026-03-09T15:54:37.911880+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: audit 2026-03-09T15:54:37.914498+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T15:54:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:38 vm09 bash[22983]: audit 
2026-03-09T15:54:37.914498+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: audit 2026-03-09T15:54:36.610508+0000 mgr.y (mgr.14150) 269 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm01=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: audit 2026-03-09T15:54:36.610508+0000 mgr.y (mgr.14150) 269 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm01=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: cephadm 2026-03-09T15:54:36.611547+0000 mgr.y (mgr.14150) 270 : cephadm [INF] Saving service node-exporter spec with placement vm01=a;vm09=b;count:2 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: cephadm 2026-03-09T15:54:36.611547+0000 mgr.y (mgr.14150) 270 : cephadm [INF] Saving service node-exporter spec with placement vm01=a;vm09=b;count:2 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: cluster 2026-03-09T15:54:37.101326+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: cluster 2026-03-09T15:54:37.101326+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: audit 2026-03-09T15:54:37.895460+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: audit 2026-03-09T15:54:37.895460+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: audit 2026-03-09T15:54:37.902909+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: audit 2026-03-09T15:54:37.902909+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: audit 2026-03-09T15:54:37.911880+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: audit 2026-03-09T15:54:37.911880+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: audit 2026-03-09T15:54:37.914498+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' 
entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:38 vm01 bash[28152]: audit 2026-03-09T15:54:37.914498+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: audit 2026-03-09T15:54:36.610508+0000 mgr.y (mgr.14150) 269 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm01=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: audit 2026-03-09T15:54:36.610508+0000 mgr.y (mgr.14150) 269 : audit [DBG] from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm01=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: cephadm 2026-03-09T15:54:36.611547+0000 mgr.y (mgr.14150) 270 : cephadm [INF] Saving service node-exporter spec with placement vm01=a;vm09=b;count:2 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: cephadm 2026-03-09T15:54:36.611547+0000 mgr.y (mgr.14150) 270 : cephadm [INF] Saving service node-exporter spec with placement vm01=a;vm09=b;count:2 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: cluster 2026-03-09T15:54:37.101326+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: cluster 2026-03-09T15:54:37.101326+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: audit 2026-03-09T15:54:37.895460+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: audit 2026-03-09T15:54:37.895460+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: audit 2026-03-09T15:54:37.902909+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: audit 2026-03-09T15:54:37.902909+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: audit 2026-03-09T15:54:37.911880+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: audit 2026-03-09T15:54:37.911880+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' 
2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: audit 2026-03-09T15:54:37.914498+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T15:54:38.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:38 vm01 bash[20728]: audit 2026-03-09T15:54:37.914498+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T15:54:39.216 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:38 vm01 bash[21002]: ignoring --setuser ceph since I am not root 2026-03-09T15:54:39.217 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:38 vm01 bash[21002]: ignoring --setgroup ceph since I am not root 2026-03-09T15:54:39.217 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:39 vm01 bash[21002]: debug 2026-03-09T15:54:39.055+0000 7f5fd5f4b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T15:54:39.217 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:39 vm01 bash[21002]: debug 2026-03-09T15:54:39.091+0000 7f5fd5f4b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T15:54:39.230 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:38 vm09 bash[23804]: ignoring --setuser ceph since I am not root 2026-03-09T15:54:39.231 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:38 vm09 bash[23804]: ignoring --setgroup ceph since I am not root 2026-03-09T15:54:39.231 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:39 vm09 bash[23804]: debug 2026-03-09T15:54:39.051+0000 7f5ae6540140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T15:54:39.231 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:39 vm09 bash[23804]: debug 2026-03-09T15:54:39.091+0000 7f5ae6540140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T15:54:39.521 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:39 vm01 bash[21002]: debug 2026-03-09T15:54:39.215+0000 7f5fd5f4b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T15:54:39.596 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:39 vm09 bash[23804]: debug 2026-03-09T15:54:39.223+0000 7f5ae6540140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T15:54:39.883 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:39 vm09 bash[23804]: debug 2026-03-09T15:54:39.591+0000 7f5ae6540140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T15:54:39.929 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:39 vm01 bash[21002]: debug 2026-03-09T15:54:39.519+0000 7f5fd5f4b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T15:54:40.231 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:39 vm09 bash[22983]: audit 2026-03-09T15:54:38.916732+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T15:54:40.231 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:39 vm09 bash[22983]: audit 2026-03-09T15:54:38.916732+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T15:54:40.231 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:39 vm09 bash[22983]: cluster 2026-03-09T15:54:38.930873+0000 mon.a (mon.0) 749 : cluster [DBG] 
mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T15:54:40.231 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:39 vm09 bash[22983]: cluster 2026-03-09T15:54:38.930873+0000 mon.a (mon.0) 749 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T15:54:40.231 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:40 vm09 bash[23804]: debug 2026-03-09T15:54:40.131+0000 7f5ae6540140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T15:54:40.270 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:39 vm01 bash[28152]: audit 2026-03-09T15:54:38.916732+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T15:54:40.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:39 vm01 bash[28152]: audit 2026-03-09T15:54:38.916732+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T15:54:40.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:39 vm01 bash[28152]: cluster 2026-03-09T15:54:38.930873+0000 mon.a (mon.0) 749 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T15:54:40.271 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:39 vm01 bash[28152]: cluster 2026-03-09T15:54:38.930873+0000 mon.a (mon.0) 749 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T15:54:40.271 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:40 vm01 bash[21002]: debug 2026-03-09T15:54:40.031+0000 7f5fd5f4b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T15:54:40.271 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:40 vm01 bash[21002]: debug 2026-03-09T15:54:40.127+0000 7f5fd5f4b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T15:54:40.271 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:39 vm01 bash[20728]: audit 2026-03-09T15:54:38.916732+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T15:54:40.271 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:39 vm01 bash[20728]: audit 2026-03-09T15:54:38.916732+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.101:0/518875925' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T15:54:40.271 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:39 vm01 bash[20728]: cluster 2026-03-09T15:54:38.930873+0000 mon.a (mon.0) 749 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T15:54:40.271 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:39 vm01 bash[20728]: cluster 2026-03-09T15:54:38.930873+0000 mon.a (mon.0) 749 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T15:54:40.534 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:40 vm09 bash[23804]: debug 2026-03-09T15:54:40.223+0000 7f5ae6540140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T15:54:40.534 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:40 vm09 bash[23804]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. 
A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T15:54:40.534 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:40 vm09 bash[23804]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T15:54:40.535 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:40 vm09 bash[23804]: from numpy import show_config as show_numpy_config 2026-03-09T15:54:40.535 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:40 vm09 bash[23804]: debug 2026-03-09T15:54:40.375+0000 7f5ae6540140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T15:54:40.544 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:40 vm01 bash[21002]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T15:54:40.544 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:40 vm01 bash[21002]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T15:54:40.544 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:40 vm01 bash[21002]: from numpy import show_config as show_numpy_config 2026-03-09T15:54:40.544 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:40 vm01 bash[21002]: debug 2026-03-09T15:54:40.275+0000 7f5fd5f4b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T15:54:40.544 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:40 vm01 bash[21002]: debug 2026-03-09T15:54:40.459+0000 7f5fd5f4b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T15:54:40.544 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:40 vm01 bash[21002]: debug 2026-03-09T15:54:40.499+0000 7f5fd5f4b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T15:54:40.883 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:40 vm09 bash[23804]: debug 2026-03-09T15:54:40.527+0000 7f5ae6540140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T15:54:40.883 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:40 vm09 bash[23804]: debug 2026-03-09T15:54:40.575+0000 7f5ae6540140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T15:54:40.883 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:40 vm09 bash[23804]: debug 2026-03-09T15:54:40.615+0000 7f5ae6540140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T15:54:40.883 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:40 vm09 bash[23804]: debug 2026-03-09T15:54:40.671+0000 7f5ae6540140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T15:54:40.883 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:40 vm09 bash[23804]: debug 2026-03-09T15:54:40.727+0000 7f5ae6540140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T15:54:40.929 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:40 vm01 bash[21002]: debug 2026-03-09T15:54:40.543+0000 7f5fd5f4b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T15:54:40.929 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:40 vm01 bash[21002]: debug 2026-03-09T15:54:40.595+0000 7f5fd5f4b140 -1 mgr[py] 
Module alerts has missing NOTIFY_TYPES member 2026-03-09T15:54:40.929 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:40 vm01 bash[21002]: debug 2026-03-09T15:54:40.655+0000 7f5fd5f4b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T15:54:41.447 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:41 vm01 bash[21002]: debug 2026-03-09T15:54:41.179+0000 7f5fd5f4b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T15:54:41.447 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:41 vm01 bash[21002]: debug 2026-03-09T15:54:41.223+0000 7f5fd5f4b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T15:54:41.447 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:41 vm01 bash[21002]: debug 2026-03-09T15:54:41.267+0000 7f5fd5f4b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T15:54:41.495 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:41 vm09 bash[23804]: debug 2026-03-09T15:54:41.231+0000 7f5ae6540140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T15:54:41.495 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:41 vm09 bash[23804]: debug 2026-03-09T15:54:41.275+0000 7f5ae6540140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T15:54:41.495 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:41 vm09 bash[23804]: debug 2026-03-09T15:54:41.315+0000 7f5ae6540140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T15:54:41.526 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:54:41.723 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.715+0000 7f08e3376640 1 -- 192.168.123.109:0/143536891 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f08d40a5c60 msgr2=0x7f08d40a60c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:41.723 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.715+0000 7f08e3376640 1 --2- 192.168.123.109:0/143536891 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f08d40a5c60 0x7f08d40a60c0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f08dc068d10 tx=0x7f08dc0a4e40 comp rx=0 tx=0).stop 2026-03-09T15:54:41.723 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 -- 192.168.123.109:0/143536891 shutdown_connections 2026-03-09T15:54:41.723 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 --2- 192.168.123.109:0/143536891 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f08d40a6600 0x7f08d40aaec0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:41.723 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 --2- 192.168.123.109:0/143536891 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f08d40a5c60 0x7f08d40a60c0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:41.723 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 --2- 192.168.123.109:0/143536891 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f08d40a4b70 0x7f08d40a4f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:41.723 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 -- 192.168.123.109:0/143536891 >> 192.168.123.109:0/143536891 
conn(0x7f08d40a01c0 msgr2=0x7f08d40a2620 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:41.723 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 -- 192.168.123.109:0/143536891 shutdown_connections 2026-03-09T15:54:41.724 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 -- 192.168.123.109:0/143536891 wait complete. 2026-03-09T15:54:41.724 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 Processor -- start 2026-03-09T15:54:41.724 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 -- start start 2026-03-09T15:54:41.725 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f08d40a4b70 0x7f08d413c060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:41.725 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f08d40a6600 0x7f08d413c5a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:41.725 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f08d4141610 0x7f08d4143a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:41.725 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f08d40adb60 con 0x7f08d40a6600 2026-03-09T15:54:41.726 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f08d40ad9e0 con 0x7f08d4141610 2026-03-09T15:54:41.726 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e3376640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f08d40adce0 con 0x7f08d40a4b70 2026-03-09T15:54:41.726 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e2b75640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f08d4141610 0x7f08d4143a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:41.726 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e1b73640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f08d40a6600 0x7f08d413c5a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:41.726 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e2374640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f08d40a4b70 0x7f08d413c060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:41.726 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e2374640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f08d40a4b70 0x7f08d413c060 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello 
peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.109:42472/0 (socket says 192.168.123.109:42472) 2026-03-09T15:54:41.727 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e2374640 1 -- 192.168.123.109:0/413291514 learned_addr learned my addr 192.168.123.109:0/413291514 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:54:41.727 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e1b73640 1 -- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f08d40a4b70 msgr2=0x7f08d413c060 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:41.727 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e1b73640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f08d40a4b70 0x7f08d413c060 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:41.727 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e1b73640 1 -- 192.168.123.109:0/413291514 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f08d4141610 msgr2=0x7f08d4143a00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:41.727 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e1b73640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f08d4141610 0x7f08d4143a00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:41.727 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.719+0000 7f08e1b73640 1 -- 192.168.123.109:0/413291514 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f08d4144100 con 0x7f08d40a6600 2026-03-09T15:54:41.727 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.723+0000 7f08e2b75640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f08d4141610 0x7f08d4143a00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T15:54:41.728 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.723+0000 7f08e1b73640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f08d40a6600 0x7f08d413c5a0 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7f08dc0521f0 tx=0x7f08dc069b60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:41.728 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.723+0000 7f08cb7fe640 1 -- 192.168.123.109:0/413291514 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f08dc0a58a0 con 0x7f08d40a6600 2026-03-09T15:54:41.728 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.723+0000 7f08cb7fe640 1 -- 192.168.123.109:0/413291514 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f08dc0a9070 con 0x7f08d40a6600 2026-03-09T15:54:41.729 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.723+0000 7f08cb7fe640 1 -- 192.168.123.109:0/413291514 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f08dc0ac5b0 con 0x7f08d40a6600 2026-03-09T15:54:41.730 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.723+0000 7f08e3376640 1 -- 192.168.123.109:0/413291514 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f08d4144390 con 0x7f08d40a6600 2026-03-09T15:54:41.730 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.723+0000 7f08e3376640 1 -- 192.168.123.109:0/413291514 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f08d41448d0 con 0x7f08d40a6600 2026-03-09T15:54:41.730 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.723+0000 7f08e3376640 1 -- 192.168.123.109:0/413291514 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f08d40a5c60 con 0x7f08d40a6600 2026-03-09T15:54:41.735 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.731+0000 7f08cb7fe640 1 -- 192.168.123.109:0/413291514 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 16) ==== 100100+0+0 (secure 0 0 0) 0x7f08dc0ad410 con 0x7f08d40a6600 2026-03-09T15:54:41.735 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.731+0000 7f08cb7fe640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f08b8077660 0x7f08b8079b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:41.735 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.731+0000 7f08cb7fe640 1 -- 192.168.123.109:0/413291514 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(63..63 src has 1..63) ==== 7128+0+0 (secure 0 0 0) 0x7f08dc132cd0 con 0x7f08d40a6600 2026-03-09T15:54:41.735 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.731+0000 7f08cb7fe640 1 -- 192.168.123.109:0/413291514 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f08dc0fb970 con 0x7f08d40a6600 2026-03-09T15:54:41.739 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.735+0000 7f08e2374640 1 -- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f08b8077660 msgr2=0x7f08b8079b20 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to 
v2:192.168.123.101:6800/1421049061 2026-03-09T15:54:41.739 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.735+0000 7f08e2374640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f08b8077660 0x7f08b8079b20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T15:54:41.780 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:41 vm09 bash[23804]: debug 2026-03-09T15:54:41.487+0000 7f5ae6540140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T15:54:41.780 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:41 vm09 bash[23804]: debug 2026-03-09T15:54:41.543+0000 7f5ae6540140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T15:54:41.780 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:41 vm09 bash[23804]: debug 2026-03-09T15:54:41.623+0000 7f5ae6540140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T15:54:41.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.855+0000 7f08e3376640 1 -- 192.168.123.109:0/413291514 --> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm01=a", "target": ["mon-mgr", ""]}) -- 0x7f08d40a9ec0 con 0x7f08b8077660 2026-03-09T15:54:41.864 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:41 vm01 bash[21002]: debug 2026-03-09T15:54:41.443+0000 7f5fd5f4b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T15:54:41.864 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:41 vm01 bash[21002]: debug 2026-03-09T15:54:41.491+0000 7f5fd5f4b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T15:54:41.864 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:41 vm01 bash[21002]: debug 2026-03-09T15:54:41.543+0000 7f5fd5f4b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T15:54:41.864 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:41 vm01 bash[21002]: debug 2026-03-09T15:54:41.671+0000 7f5fd5f4b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:54:41.939 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.935+0000 7f08e2374640 1 -- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f08b8077660 msgr2=0x7f08b8079b20 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.101:6800/1421049061 2026-03-09T15:54:41.939 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:41.935+0000 7f08e2374640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f08b8077660 0x7f08b8079b20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-09T15:54:42.133 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:41 vm09 bash[23804]: debug 2026-03-09T15:54:41.775+0000 7f5ae6540140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:54:42.133 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:41 vm09 bash[23804]: debug 2026-03-09T15:54:41.963+0000 7f5ae6540140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T15:54:42.155 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:41 vm01 bash[21002]: debug 2026-03-09T15:54:41.863+0000 7f5fd5f4b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 
2026-03-09T15:54:42.155 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:42 vm01 bash[21002]: debug 2026-03-09T15:54:42.063+0000 7f5fd5f4b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T15:54:42.155 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:42 vm01 bash[21002]: debug 2026-03-09T15:54:42.103+0000 7f5fd5f4b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T15:54:42.340 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:42.335+0000 7f08e2374640 1 -- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f08b8077660 msgr2=0x7f08b8079b20 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.101:6800/1421049061 2026-03-09T15:54:42.340 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:42.335+0000 7f08e2374640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f08b8077660 0x7f08b8079b20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-09T15:54:42.429 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:42 vm01 bash[21002]: debug 2026-03-09T15:54:42.151+0000 7f5fd5f4b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T15:54:42.429 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:42 vm01 bash[21002]: debug 2026-03-09T15:54:42.327+0000 7f5fd5f4b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:54:42.442 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:42 vm09 bash[23804]: debug 2026-03-09T15:54:42.163+0000 7f5ae6540140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T15:54:42.443 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:42 vm09 bash[23804]: debug 2026-03-09T15:54:42.207+0000 7f5ae6540140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T15:54:42.443 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:42 vm09 bash[23804]: debug 2026-03-09T15:54:42.259+0000 7f5ae6540140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T15:54:42.620 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:42.615+0000 7f08cb7fe640 1 -- 192.168.123.109:0/413291514 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mgrmap(e 17) ==== 99714+0+0 (secure 0 0 0) 0x7f08dc0f72b0 con 0x7f08d40a6600 2026-03-09T15:54:42.620 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:42.615+0000 7f08cb7fe640 1 -- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f08b8077660 msgr2=0x7f08b8079b20 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:54:42.620 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:42.615+0000 7f08cb7fe640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/1421049061,v1:192.168.123.101:6801/1421049061] conn(0x7f08b8077660 0x7f08b8079b20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: cluster 2026-03-09T15:54:42.595496+0000 mon.a (mon.0) 750 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: cluster 2026-03-09T15:54:42.595496+0000 mon.a (mon.0) 750 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 
bash[22983]: cluster 2026-03-09T15:54:42.596112+0000 mon.a (mon.0) 751 : cluster [INF] Activating manager daemon y 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: cluster 2026-03-09T15:54:42.596112+0000 mon.a (mon.0) 751 : cluster [INF] Activating manager daemon y 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: cluster 2026-03-09T15:54:42.625040+0000 mon.a (mon.0) 752 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: cluster 2026-03-09T15:54:42.625040+0000 mon.a (mon.0) 752 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: cluster 2026-03-09T15:54:42.625693+0000 mon.a (mon.0) 753 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0297389s), standbys: x 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: cluster 2026-03-09T15:54:42.625693+0000 mon.a (mon.0) 753 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0297389s), standbys: x 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.628612+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.628612+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.628659+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.628659+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.628691+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.628691+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.629023+0000 mon.a (mon.0) 757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.629023+0000 mon.a (mon.0) 757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 
bash[22983]: audit 2026-03-09T15:54:42.629111+0000 mon.a (mon.0) 758 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.629111+0000 mon.a (mon.0) 758 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:54:42.708 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.630379+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.630379+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.630626+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.630626+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.631104+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.631104+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.631426+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.631426+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.631768+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.631768+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.632461+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:54:42.709 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.632461+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.632818+0000 mon.a (mon.0) 765 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.632818+0000 mon.a (mon.0) 765 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.633174+0000 mon.a (mon.0) 766 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.633174+0000 mon.a (mon.0) 766 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.633827+0000 mon.a (mon.0) 767 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.633827+0000 mon.a (mon.0) 767 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.634069+0000 mon.a (mon.0) 768 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.634069+0000 mon.a (mon.0) 768 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.634449+0000 mon.a (mon.0) 769 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: audit 2026-03-09T15:54:42.634449+0000 mon.a (mon.0) 769 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: cluster 2026-03-09T15:54:42.640679+0000 mon.a (mon.0) 770 : cluster [INF] Manager daemon y is now available 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:42 vm09 bash[22983]: cluster 2026-03-09T15:54:42.640679+0000 mon.a (mon.0) 770 : cluster [INF] Manager daemon y is now available 2026-03-09T15:54:42.709 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:42 vm09 
bash[23804]: debug 2026-03-09T15:54:42.435+0000 7f5ae6540140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: cluster 2026-03-09T15:54:42.595496+0000 mon.a (mon.0) 750 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: cluster 2026-03-09T15:54:42.595496+0000 mon.a (mon.0) 750 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: cluster 2026-03-09T15:54:42.596112+0000 mon.a (mon.0) 751 : cluster [INF] Activating manager daemon y 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: cluster 2026-03-09T15:54:42.596112+0000 mon.a (mon.0) 751 : cluster [INF] Activating manager daemon y 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: cluster 2026-03-09T15:54:42.625040+0000 mon.a (mon.0) 752 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: cluster 2026-03-09T15:54:42.625040+0000 mon.a (mon.0) 752 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: cluster 2026-03-09T15:54:42.625693+0000 mon.a (mon.0) 753 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0297389s), standbys: x 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: cluster 2026-03-09T15:54:42.625693+0000 mon.a (mon.0) 753 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0297389s), standbys: x 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.628612+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.628612+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.628659+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.628659+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.628691+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.628691+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:54:42.930 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.629023+0000 mon.a (mon.0) 757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.629023+0000 mon.a (mon.0) 757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.629111+0000 mon.a (mon.0) 758 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.629111+0000 mon.a (mon.0) 758 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.630379+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.630379+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.630626+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.630626+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.631104+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.631104+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.631426+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.631426+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.631768+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.631768+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.632461+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.632461+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.632818+0000 mon.a (mon.0) 765 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.632818+0000 mon.a (mon.0) 765 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.633174+0000 mon.a (mon.0) 766 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.633174+0000 mon.a (mon.0) 766 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.633827+0000 mon.a (mon.0) 767 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:54:42.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.633827+0000 mon.a (mon.0) 767 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.634069+0000 mon.a (mon.0) 768 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.634069+0000 mon.a (mon.0) 768 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 2026-03-09T15:54:42.634449+0000 mon.a (mon.0) 769 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: audit 
2026-03-09T15:54:42.634449+0000 mon.a (mon.0) 769 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: cluster 2026-03-09T15:54:42.640679+0000 mon.a (mon.0) 770 : cluster [INF] Manager daemon y is now available 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:42 vm01 bash[20728]: cluster 2026-03-09T15:54:42.640679+0000 mon.a (mon.0) 770 : cluster [INF] Manager daemon y is now available 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:42 vm01 bash[21002]: debug 2026-03-09T15:54:42.587+0000 7f5fd5f4b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:42 vm01 bash[21002]: [09/Mar/2026:15:54:42] ENGINE Bus STARTING 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:42 vm01 bash[21002]: CherryPy Checker: 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:42 vm01 bash[21002]: The Application mounted at '' has an empty config. 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: cluster 2026-03-09T15:54:42.595496+0000 mon.a (mon.0) 750 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: cluster 2026-03-09T15:54:42.595496+0000 mon.a (mon.0) 750 : cluster [INF] Active manager daemon y restarted 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: cluster 2026-03-09T15:54:42.596112+0000 mon.a (mon.0) 751 : cluster [INF] Activating manager daemon y 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: cluster 2026-03-09T15:54:42.596112+0000 mon.a (mon.0) 751 : cluster [INF] Activating manager daemon y 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: cluster 2026-03-09T15:54:42.625040+0000 mon.a (mon.0) 752 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: cluster 2026-03-09T15:54:42.625040+0000 mon.a (mon.0) 752 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: cluster 2026-03-09T15:54:42.625693+0000 mon.a (mon.0) 753 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0297389s), standbys: x 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: cluster 2026-03-09T15:54:42.625693+0000 mon.a (mon.0) 753 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0297389s), standbys: x 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.628612+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.628612+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 
2026-03-09T15:54:42.628659+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.628659+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.628691+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.628691+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.629023+0000 mon.a (mon.0) 757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.629023+0000 mon.a (mon.0) 757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T15:54:42.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.629111+0000 mon.a (mon.0) 758 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.629111+0000 mon.a (mon.0) 758 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.630379+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.630379+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.630626+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.630626+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.631104+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: 
dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.631104+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.631426+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.631426+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.631768+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.631768+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.632461+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.632461+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.632818+0000 mon.a (mon.0) 765 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.632818+0000 mon.a (mon.0) 765 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.633174+0000 mon.a (mon.0) 766 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.633174+0000 mon.a (mon.0) 766 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.633827+0000 mon.a (mon.0) 767 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.633827+0000 mon.a (mon.0) 767 : audit [DBG] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.634069+0000 mon.a (mon.0) 768 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.634069+0000 mon.a (mon.0) 768 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.634449+0000 mon.a (mon.0) 769 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: audit 2026-03-09T15:54:42.634449+0000 mon.a (mon.0) 769 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: cluster 2026-03-09T15:54:42.640679+0000 mon.a (mon.0) 770 : cluster [INF] Manager daemon y is now available 2026-03-09T15:54:42.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:42 vm01 bash[28152]: cluster 2026-03-09T15:54:42.640679+0000 mon.a (mon.0) 770 : cluster [INF] Manager daemon y is now available 2026-03-09T15:54:43.133 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:42 vm09 bash[23804]: debug 2026-03-09T15:54:42.703+0000 7f5ae6540140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T15:54:43.134 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:42 vm09 bash[23804]: [09/Mar/2026:15:54:42] ENGINE Bus STARTING 2026-03-09T15:54:43.134 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:42 vm09 bash[23804]: CherryPy Checker: 2026-03-09T15:54:43.134 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:42 vm09 bash[23804]: The Application mounted at '' has an empty config. 
2026-03-09T15:54:43.134 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:42 vm09 bash[23804]: [09/Mar/2026:15:54:42] ENGINE Serving on http://:::9283 2026-03-09T15:54:43.134 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:42 vm09 bash[23804]: [09/Mar/2026:15:54:42] ENGINE Bus STARTED 2026-03-09T15:54:43.429 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:42 vm01 bash[21002]: [09/Mar/2026:15:54:42] ENGINE Serving on http://:::9283 2026-03-09T15:54:43.429 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:42 vm01 bash[21002]: [09/Mar/2026:15:54:42] ENGINE Bus STARTED 2026-03-09T15:54:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.635+0000 7f08cb7fe640 1 -- 192.168.123.109:0/413291514 <== mon.0 v2:192.168.123.101:3300/0 8 ==== mgrmap(e 18) ==== 99841+0+0 (secure 0 0 0) 0x7f08dc0fab30 con 0x7f08d40a6600 2026-03-09T15:54:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.639+0000 7f08cb7fe640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f08b8080e50 0x7f08b8083240 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.639+0000 7f08cb7fe640 1 -- 192.168.123.109:0/413291514 --> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm01=a", "target": ["mon-mgr", ""]}) -- 0x7f08dc0afec0 con 0x7f08b8080e50 2026-03-09T15:54:43.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.639+0000 7f08e2374640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f08b8080e50 0x7f08b8083240 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:43.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.643+0000 7f08e2374640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f08b8080e50 0x7f08b8083240 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f08d8009520 tx=0x7f08d8009290 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:43.665 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled alertmanager update... 
2026-03-09T15:54:43.665 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.655+0000 7f08cb7fe640 1 -- 192.168.123.109:0/413291514 <== mgr.14520 v2:192.168.123.101:6800/123914266 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+33 (secure 0 0 0) 0x7f08dc0afec0 con 0x7f08b8080e50 2026-03-09T15:54:43.665 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.659+0000 7f08c97fa640 1 -- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f08b8080e50 msgr2=0x7f08b8083240 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:43.665 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.659+0000 7f08c97fa640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f08b8080e50 0x7f08b8083240 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f08d8009520 tx=0x7f08d8009290 comp rx=0 tx=0).stop 2026-03-09T15:54:43.665 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.659+0000 7f08c97fa640 1 -- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f08d40a6600 msgr2=0x7f08d413c5a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:43.665 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.659+0000 7f08c97fa640 1 --2- 192.168.123.109:0/413291514 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f08d40a6600 0x7f08d413c5a0 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7f08dc0521f0 tx=0x7f08dc069b60 comp rx=0 tx=0).stop 2026-03-09T15:54:43.665 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.659+0000 7f08e2b75640 1 -- 192.168.123.109:0/413291514 reap_dead start 2026-03-09T15:54:43.665 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.659+0000 7f08c97fa640 1 -- 192.168.123.109:0/413291514 shutdown_connections 2026-03-09T15:54:43.665 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.659+0000 7f08c97fa640 1 -- 192.168.123.109:0/413291514 >> 192.168.123.109:0/413291514 conn(0x7f08d40a01c0 msgr2=0x7f08d40a1660 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:43.665 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.659+0000 7f08c97fa640 1 -- 192.168.123.109:0/413291514 shutdown_connections 2026-03-09T15:54:43.666 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:43.659+0000 7f08c97fa640 1 -- 192.168.123.109:0/413291514 wait complete. 
2026-03-09T15:54:43.752 DEBUG:teuthology.orchestra.run.vm01:alertmanager.a> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@alertmanager.a.service 2026-03-09T15:54:43.753 INFO:tasks.cephadm:Adding grafana.a on vm09 2026-03-09T15:54:43.753 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph orch apply grafana '1;vm09=a' 2026-03-09T15:54:43.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.677033+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:43.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.677033+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:43.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.677813+0000 mon.a (mon.0) 772 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:43.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.677813+0000 mon.a (mon.0) 772 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:43.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.705266+0000 mon.a (mon.0) 773 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:43.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.705266+0000 mon.a (mon.0) 773 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:43.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.708690+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.708690+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: cluster 2026-03-09T15:54:42.715647+0000 mon.a (mon.0) 775 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: cluster 2026-03-09T15:54:42.715647+0000 mon.a (mon.0) 775 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: cluster 2026-03-09T15:54:42.715737+0000 mon.a (mon.0) 776 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: cluster 
2026-03-09T15:54:42.715737+0000 mon.a (mon.0) 776 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.716316+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.716316+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.717046+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.717046+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.717721+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.717721+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.718121+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.718121+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.? 
192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.750748+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:42.750748+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: cluster 2026-03-09T15:54:43.652141+0000 mon.a (mon.0) 778 : cluster [DBG] mgrmap e18: y(active, since 1.05619s), standbys: x 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: cluster 2026-03-09T15:54:43.652141+0000 mon.a (mon.0) 778 : cluster [DBG] mgrmap e18: y(active, since 1.05619s), standbys: x 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:43.659841+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:43 vm01 bash[28152]: audit 2026-03-09T15:54:43.659841+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.677033+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.677033+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.677813+0000 mon.a (mon.0) 772 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.677813+0000 mon.a (mon.0) 772 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.705266+0000 mon.a (mon.0) 773 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.705266+0000 mon.a (mon.0) 773 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.708690+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.708690+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: cluster 2026-03-09T15:54:42.715647+0000 mon.a (mon.0) 775 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: cluster 2026-03-09T15:54:42.715647+0000 mon.a (mon.0) 775 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T15:54:43.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: cluster 2026-03-09T15:54:42.715737+0000 mon.a (mon.0) 776 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: cluster 2026-03-09T15:54:42.715737+0000 mon.a (mon.0) 776 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.716316+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.716316+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.717046+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.717046+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.717721+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.717721+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.718121+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.718121+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.? 
192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.750748+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:42.750748+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: cluster 2026-03-09T15:54:43.652141+0000 mon.a (mon.0) 778 : cluster [DBG] mgrmap e18: y(active, since 1.05619s), standbys: x 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: cluster 2026-03-09T15:54:43.652141+0000 mon.a (mon.0) 778 : cluster [DBG] mgrmap e18: y(active, since 1.05619s), standbys: x 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:43.659841+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:43.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:43 vm01 bash[20728]: audit 2026-03-09T15:54:43.659841+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.677033+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.677033+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.677813+0000 mon.a (mon.0) 772 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.677813+0000 mon.a (mon.0) 772 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.705266+0000 mon.a (mon.0) 773 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.705266+0000 mon.a (mon.0) 773 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.708690+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.708690+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: cluster 2026-03-09T15:54:42.715647+0000 mon.a (mon.0) 775 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: cluster 2026-03-09T15:54:42.715647+0000 mon.a (mon.0) 775 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: cluster 2026-03-09T15:54:42.715737+0000 mon.a (mon.0) 776 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: cluster 2026-03-09T15:54:42.715737+0000 mon.a (mon.0) 776 : cluster [DBG] Standby manager daemon x started 2026-03-09T15:54:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.716316+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.716316+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.717046+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.717046+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.717721+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.717721+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.718121+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.? 192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.718121+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.? 
192.168.123.109:0/639431979' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.750748+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:42.750748+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: cluster 2026-03-09T15:54:43.652141+0000 mon.a (mon.0) 778 : cluster [DBG] mgrmap e18: y(active, since 1.05619s), standbys: x 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: cluster 2026-03-09T15:54:43.652141+0000 mon.a (mon.0) 778 : cluster [DBG] mgrmap e18: y(active, since 1.05619s), standbys: x 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:43.659841+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:44.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:43 vm09 bash[22983]: audit 2026-03-09T15:54:43.659841+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:44 vm09 bash[22983]: cephadm 2026-03-09T15:54:43.653621+0000 mgr.y (mgr.14520) 2 : cephadm [INF] Saving service alertmanager spec with placement vm01=a;count:1 2026-03-09T15:54:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:44 vm09 bash[22983]: cephadm 2026-03-09T15:54:43.653621+0000 mgr.y (mgr.14520) 2 : cephadm [INF] Saving service alertmanager spec with placement vm01=a;count:1 2026-03-09T15:54:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:44 vm09 bash[22983]: cluster 2026-03-09T15:54:43.661168+0000 mgr.y (mgr.14520) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:44 vm09 bash[22983]: cluster 2026-03-09T15:54:43.661168+0000 mgr.y (mgr.14520) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:45.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:44 vm01 bash[28152]: cephadm 2026-03-09T15:54:43.653621+0000 mgr.y (mgr.14520) 2 : cephadm [INF] Saving service alertmanager spec with placement vm01=a;count:1 2026-03-09T15:54:45.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:44 vm01 bash[28152]: cephadm 2026-03-09T15:54:43.653621+0000 mgr.y (mgr.14520) 2 : cephadm [INF] Saving service alertmanager spec with placement vm01=a;count:1 2026-03-09T15:54:45.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:44 vm01 bash[28152]: cluster 2026-03-09T15:54:43.661168+0000 mgr.y (mgr.14520) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:45.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:44 vm01 bash[28152]: cluster 2026-03-09T15:54:43.661168+0000 mgr.y 
(mgr.14520) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:45.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:44 vm01 bash[20728]: cephadm 2026-03-09T15:54:43.653621+0000 mgr.y (mgr.14520) 2 : cephadm [INF] Saving service alertmanager spec with placement vm01=a;count:1 2026-03-09T15:54:45.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:44 vm01 bash[20728]: cephadm 2026-03-09T15:54:43.653621+0000 mgr.y (mgr.14520) 2 : cephadm [INF] Saving service alertmanager spec with placement vm01=a;count:1 2026-03-09T15:54:45.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:44 vm01 bash[20728]: cluster 2026-03-09T15:54:43.661168+0000 mgr.y (mgr.14520) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:45.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:44 vm01 bash[20728]: cluster 2026-03-09T15:54:43.661168+0000 mgr.y (mgr.14520) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cephadm 2026-03-09T15:54:44.023495+0000 mgr.y (mgr.14520) 4 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTING 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cephadm 2026-03-09T15:54:44.023495+0000 mgr.y (mgr.14520) 4 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTING 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cephadm 2026-03-09T15:54:44.133318+0000 mgr.y (mgr.14520) 5 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cephadm 2026-03-09T15:54:44.133318+0000 mgr.y (mgr.14520) 5 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cephadm 2026-03-09T15:54:44.134174+0000 mgr.y (mgr.14520) 6 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Client ('192.168.123.101', 56440) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cephadm 2026-03-09T15:54:44.134174+0000 mgr.y (mgr.14520) 6 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Client ('192.168.123.101', 56440) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cephadm 2026-03-09T15:54:44.235051+0000 mgr.y (mgr.14520) 7 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cephadm 2026-03-09T15:54:44.235051+0000 mgr.y (mgr.14520) 7 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cephadm 2026-03-09T15:54:44.235096+0000 mgr.y (mgr.14520) 8 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTED 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 
bash[22983]: cephadm 2026-03-09T15:54:44.235096+0000 mgr.y (mgr.14520) 8 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTED 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cluster 2026-03-09T15:54:44.629790+0000 mgr.y (mgr.14520) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cluster 2026-03-09T15:54:44.629790+0000 mgr.y (mgr.14520) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cluster 2026-03-09T15:54:44.711516+0000 mon.a (mon.0) 780 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-09T15:54:46.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:45 vm09 bash[22983]: cluster 2026-03-09T15:54:44.711516+0000 mon.a (mon.0) 780 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-09T15:54:46.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cephadm 2026-03-09T15:54:44.023495+0000 mgr.y (mgr.14520) 4 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTING 2026-03-09T15:54:46.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cephadm 2026-03-09T15:54:44.023495+0000 mgr.y (mgr.14520) 4 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTING 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cephadm 2026-03-09T15:54:44.133318+0000 mgr.y (mgr.14520) 5 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cephadm 2026-03-09T15:54:44.133318+0000 mgr.y (mgr.14520) 5 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cephadm 2026-03-09T15:54:44.134174+0000 mgr.y (mgr.14520) 6 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Client ('192.168.123.101', 56440) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cephadm 2026-03-09T15:54:44.134174+0000 mgr.y (mgr.14520) 6 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Client ('192.168.123.101', 56440) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cephadm 2026-03-09T15:54:44.235051+0000 mgr.y (mgr.14520) 7 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cephadm 2026-03-09T15:54:44.235051+0000 mgr.y (mgr.14520) 7 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cephadm 2026-03-09T15:54:44.235096+0000 mgr.y (mgr.14520) 8 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTED 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cephadm 
2026-03-09T15:54:44.235096+0000 mgr.y (mgr.14520) 8 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTED 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cluster 2026-03-09T15:54:44.629790+0000 mgr.y (mgr.14520) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cluster 2026-03-09T15:54:44.629790+0000 mgr.y (mgr.14520) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cluster 2026-03-09T15:54:44.711516+0000 mon.a (mon.0) 780 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-09T15:54:46.188 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:45 vm01 bash[28152]: cluster 2026-03-09T15:54:44.711516+0000 mon.a (mon.0) 780 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cephadm 2026-03-09T15:54:44.023495+0000 mgr.y (mgr.14520) 4 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTING 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cephadm 2026-03-09T15:54:44.023495+0000 mgr.y (mgr.14520) 4 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTING 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cephadm 2026-03-09T15:54:44.133318+0000 mgr.y (mgr.14520) 5 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cephadm 2026-03-09T15:54:44.133318+0000 mgr.y (mgr.14520) 5 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cephadm 2026-03-09T15:54:44.134174+0000 mgr.y (mgr.14520) 6 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Client ('192.168.123.101', 56440) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cephadm 2026-03-09T15:54:44.134174+0000 mgr.y (mgr.14520) 6 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Client ('192.168.123.101', 56440) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cephadm 2026-03-09T15:54:44.235051+0000 mgr.y (mgr.14520) 7 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cephadm 2026-03-09T15:54:44.235051+0000 mgr.y (mgr.14520) 7 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cephadm 2026-03-09T15:54:44.235096+0000 mgr.y (mgr.14520) 8 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTED 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cephadm 2026-03-09T15:54:44.235096+0000 
mgr.y (mgr.14520) 8 : cephadm [INF] [09/Mar/2026:15:54:44] ENGINE Bus STARTED 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cluster 2026-03-09T15:54:44.629790+0000 mgr.y (mgr.14520) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cluster 2026-03-09T15:54:44.629790+0000 mgr.y (mgr.14520) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cluster 2026-03-09T15:54:44.711516+0000 mon.a (mon.0) 780 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-09T15:54:46.189 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:45 vm01 bash[20728]: cluster 2026-03-09T15:54:44.711516+0000 mon.a (mon.0) 780 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-09T15:54:46.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:54:47.914 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:54:47.976 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:47 vm09 bash[22983]: audit 2026-03-09T15:54:46.134648+0000 mgr.y (mgr.14520) 10 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:47.976 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:47 vm09 bash[22983]: audit 2026-03-09T15:54:46.134648+0000 mgr.y (mgr.14520) 10 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:47.976 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:47 vm09 bash[22983]: cluster 2026-03-09T15:54:46.630172+0000 mgr.y (mgr.14520) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:47.977 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:47 vm09 bash[22983]: cluster 2026-03-09T15:54:46.630172+0000 mgr.y (mgr.14520) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:47.977 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:47 vm09 bash[22983]: cluster 2026-03-09T15:54:46.728965+0000 mon.a (mon.0) 781 : cluster [DBG] mgrmap e20: y(active, since 4s), standbys: x 2026-03-09T15:54:47.977 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:47 vm09 bash[22983]: cluster 2026-03-09T15:54:46.728965+0000 mon.a (mon.0) 781 : cluster [DBG] mgrmap e20: y(active, since 4s), standbys: x 2026-03-09T15:54:48.102 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.095+0000 7f9caac33640 1 -- 192.168.123.109:0/2874682951 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ca4106390 msgr2=0x7f9ca41112a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:48.102 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.095+0000 7f9caac33640 1 --2- 192.168.123.109:0/2874682951 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ca4106390 0x7f9ca41112a0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f9ca0009f90 tx=0x7f9ca002f3d0 comp rx=0 tx=0).stop 2026-03-09T15:54:48.102 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.095+0000 7f9caac33640 1 -- 192.168.123.109:0/2874682951 shutdown_connections 2026-03-09T15:54:48.102 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.095+0000 7f9caac33640 1 --2- 192.168.123.109:0/2874682951 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ca4106390 0x7f9ca41112a0 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:48.102 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.095+0000 7f9caac33640 1 --2- 192.168.123.109:0/2874682951 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ca4075a40 0x7f9ca4075ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:48.102 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.095+0000 7f9caac33640 1 --2- 192.168.123.109:0/2874682951 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ca40770a0 0x7f9ca4075500 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:48.102 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.095+0000 7f9caac33640 1 -- 192.168.123.109:0/2874682951 >> 192.168.123.109:0/2874682951 conn(0x7f9ca40fe140 msgr2=0x7f9ca4100580 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:48.103 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9caac33640 1 -- 192.168.123.109:0/2874682951 shutdown_connections 2026-03-09T15:54:48.103 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9caac33640 1 -- 192.168.123.109:0/2874682951 wait complete. 2026-03-09T15:54:48.103 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9caac33640 1 Processor -- start 2026-03-09T15:54:48.104 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9caac33640 1 -- start start 2026-03-09T15:54:48.105 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9caac33640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ca4075a40 0x7f9ca41a0560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:48.105 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9caac33640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ca40770a0 0x7f9ca41a0aa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:48.105 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9caac33640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ca4106390 0x7f9ca41a7b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:48.105 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9caac33640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f9ca41140d0 con 0x7f9ca40770a0 2026-03-09T15:54:48.105 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9caac33640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f9ca4113f50 con 0x7f9ca4075a40 2026-03-09T15:54:48.105 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9caac33640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f9ca4114250 con 0x7f9ca4106390 2026-03-09T15:54:48.105 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9caa432640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ca4106390 0x7f9ca41a7b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:48.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9ca9430640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ca40770a0 0x7f9ca41a0aa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:48.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9ca9c31640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ca4075a40 0x7f9ca41a0560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:48.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9ca9c31640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ca4075a40 0x7f9ca41a0560 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.109:38980/0 (socket says 192.168.123.109:38980) 2026-03-09T15:54:48.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9ca9c31640 1 -- 192.168.123.109:0/1858200391 learned_addr learned my addr 192.168.123.109:0/1858200391 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:54:48.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9ca9c31640 1 -- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ca4106390 msgr2=0x7f9ca41a7b20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:48.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9ca9c31640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ca4106390 0x7f9ca41a7b20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:48.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9ca9c31640 1 -- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ca40770a0 msgr2=0x7f9ca41a0aa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:48.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9ca9c31640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ca40770a0 0x7f9ca41a0aa0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:48.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.099+0000 7f9ca9c31640 1 -- 192.168.123.109:0/1858200391 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9ca41a8220 con 0x7f9ca4075a40 2026-03-09T15:54:48.107 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.103+0000 7f9ca9c31640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ca4075a40 0x7f9ca41a0560 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f9c9400ea10 tx=0x7f9c9400eee0 comp rx=0 tx=0).ready entity=mon.1 
client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:48.107 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.103+0000 7f9c92ffd640 1 -- 192.168.123.109:0/1858200391 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9c9400ce50 con 0x7f9ca4075a40 2026-03-09T15:54:48.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.103+0000 7f9c92ffd640 1 -- 192.168.123.109:0/1858200391 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f9c94004510 con 0x7f9ca4075a40 2026-03-09T15:54:48.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.103+0000 7f9c92ffd640 1 -- 192.168.123.109:0/1858200391 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9c94010690 con 0x7f9ca4075a40 2026-03-09T15:54:48.109 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.103+0000 7f9caac33640 1 -- 192.168.123.109:0/1858200391 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f9ca41a8510 con 0x7f9ca4075a40 2026-03-09T15:54:48.109 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.103+0000 7f9caac33640 1 -- 192.168.123.109:0/1858200391 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f9ca41a8a00 con 0x7f9ca4075a40 2026-03-09T15:54:48.112 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.103+0000 7f9caac33640 1 -- 192.168.123.109:0/1858200391 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9c6c005180 con 0x7f9ca4075a40 2026-03-09T15:54:48.112 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.103+0000 7f9c92ffd640 1 -- 192.168.123.109:0/1858200391 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f9c940040d0 con 0x7f9ca4075a40 2026-03-09T15:54:48.112 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.103+0000 7f9c92ffd640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9c800778d0 0x7f9c80079d90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:48.112 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.103+0000 7f9c92ffd640 1 -- 192.168.123.109:0/1858200391 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f9c94099ff0 con 0x7f9ca4075a40 2026-03-09T15:54:48.112 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.103+0000 7f9ca9430640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9c800778d0 0x7f9c80079d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:48.112 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.107+0000 7f9ca9430640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9c800778d0 0x7f9c80079d90 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f9ca41a1a80 tx=0x7f9c98009290 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:48.113 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.107+0000 7f9c92ffd640 1 -- 192.168.123.109:0/1858200391 <== mon.1 v2:192.168.123.109:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f9c94002e20 con 0x7f9ca4075a40 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:47 vm01 bash[28152]: audit 2026-03-09T15:54:46.134648+0000 mgr.y (mgr.14520) 10 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:47 vm01 bash[28152]: audit 2026-03-09T15:54:46.134648+0000 mgr.y (mgr.14520) 10 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:47 vm01 bash[28152]: cluster 2026-03-09T15:54:46.630172+0000 mgr.y (mgr.14520) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:47 vm01 bash[28152]: cluster 2026-03-09T15:54:46.630172+0000 mgr.y (mgr.14520) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:47 vm01 bash[28152]: cluster 2026-03-09T15:54:46.728965+0000 mon.a (mon.0) 781 : cluster [DBG] mgrmap e20: y(active, since 4s), standbys: x 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:47 vm01 bash[28152]: cluster 2026-03-09T15:54:46.728965+0000 mon.a (mon.0) 781 : cluster [DBG] mgrmap e20: y(active, since 4s), standbys: x 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:47 vm01 bash[20728]: audit 2026-03-09T15:54:46.134648+0000 mgr.y (mgr.14520) 10 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:47 vm01 bash[20728]: audit 2026-03-09T15:54:46.134648+0000 mgr.y (mgr.14520) 10 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:47 vm01 bash[20728]: cluster 2026-03-09T15:54:46.630172+0000 mgr.y (mgr.14520) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:47 vm01 bash[20728]: cluster 2026-03-09T15:54:46.630172+0000 mgr.y (mgr.14520) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:47 vm01 bash[20728]: cluster 2026-03-09T15:54:46.728965+0000 mon.a (mon.0) 781 : cluster [DBG] mgrmap e20: y(active, since 4s), standbys: x 2026-03-09T15:54:48.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:47 vm01 bash[20728]: cluster 2026-03-09T15:54:46.728965+0000 mon.a (mon.0) 781 : cluster [DBG] mgrmap e20: y(active, since 4s), standbys: x 2026-03-09T15:54:48.239 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.235+0000 7f9caac33640 1 -- 192.168.123.109:0/1858200391 --> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}) -- 0x7f9c6c002bf0 con 
0x7f9c800778d0 2026-03-09T15:54:48.254 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled grafana update... 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c92ffd640 1 -- 192.168.123.109:0/1858200391 <== mgr.14520 v2:192.168.123.101:6800/123914266 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+28 (secure 0 0 0) 0x7f9c6c002bf0 con 0x7f9c800778d0 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 -- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9c800778d0 msgr2=0x7f9c80079d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9c800778d0 0x7f9c80079d90 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f9ca41a1a80 tx=0x7f9c98009290 comp rx=0 tx=0).stop 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 -- 192.168.123.109:0/1858200391 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ca4075a40 msgr2=0x7f9ca41a0560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ca4075a40 0x7f9ca41a0560 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f9c9400ea10 tx=0x7f9c9400eee0 comp rx=0 tx=0).stop 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 -- 192.168.123.109:0/1858200391 shutdown_connections 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9c800778d0 0x7f9c80079d90 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ca4106390 0x7f9ca41a7b20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ca40770a0 0x7f9ca41a0aa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 --2- 192.168.123.109:0/1858200391 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ca4075a40 0x7f9ca41a0560 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 -- 192.168.123.109:0/1858200391 >> 192.168.123.109:0/1858200391 conn(0x7f9ca40fe140 msgr2=0x7f9ca41021f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 -- 192.168.123.109:0/1858200391 
shutdown_connections 2026-03-09T15:54:48.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:48.247+0000 7f9c90ff9640 1 -- 192.168.123.109:0/1858200391 wait complete. 2026-03-09T15:54:48.333 DEBUG:teuthology.orchestra.run.vm09:grafana.a> sudo journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@grafana.a.service 2026-03-09T15:54:48.334 INFO:tasks.cephadm:Setting up client nodes... 2026-03-09T15:54:48.335 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T15:54:49.585 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.191535+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.191535+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.199057+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.199057+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.242263+0000 mgr.y (mgr.14520) 12 : audit [DBG] from='client.24430 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.242263+0000 mgr.y (mgr.14520) 12 : audit [DBG] from='client.24430 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: cephadm 2026-03-09T15:54:48.243579+0000 mgr.y (mgr.14520) 13 : cephadm [INF] Saving service grafana spec with placement vm09=a;count:1 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: cephadm 2026-03-09T15:54:48.243579+0000 mgr.y (mgr.14520) 13 : cephadm [INF] Saving service grafana spec with placement vm09=a;count:1 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.248864+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.248864+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.404544+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.404544+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.412150+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.412150+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: cluster 2026-03-09T15:54:48.630558+0000 mgr.y (mgr.14520) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: cluster 2026-03-09T15:54:48.630558+0000 mgr.y (mgr.14520) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.887259+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.887259+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.894039+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.894039+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.896936+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:48.896936+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:49.054908+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:49.054908+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:49.061280+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:49.061280+0000 mon.a (mon.0) 791 : audit [INF] 
from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:49.062440+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:49.062440+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:49.063831+0000 mon.a (mon.0) 793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:49.063831+0000 mon.a (mon.0) 793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:49.064315+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 bash[20728]: audit 2026-03-09T15:54:49.064315+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.191535+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.191535+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.199057+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.199057+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.242263+0000 mgr.y (mgr.14520) 12 : audit [DBG] from='client.24430 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.242263+0000 mgr.y (mgr.14520) 12 : audit [DBG] from='client.24430 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:49.586 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: cephadm 
2026-03-09T15:54:48.243579+0000 mgr.y (mgr.14520) 13 : cephadm [INF] Saving service grafana spec with placement vm09=a;count:1 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: cephadm 2026-03-09T15:54:48.243579+0000 mgr.y (mgr.14520) 13 : cephadm [INF] Saving service grafana spec with placement vm09=a;count:1 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.248864+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.248864+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.404544+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.404544+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.412150+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.412150+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: cluster 2026-03-09T15:54:48.630558+0000 mgr.y (mgr.14520) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: cluster 2026-03-09T15:54:48.630558+0000 mgr.y (mgr.14520) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.887259+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.887259+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.894039+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.894039+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.896936+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 
09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:48.896936+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:49.054908+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:49.054908+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:49.061280+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:49.061280+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:49.062440+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:49.062440+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:49.063831+0000 mon.a (mon.0) 793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:49.063831+0000 mon.a (mon.0) 793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:49.064315+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:49.587 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 bash[28152]: audit 2026-03-09T15:54:49.064315+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:49.587 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.191535+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.191535+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.199057+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.199057+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.242263+0000 mgr.y (mgr.14520) 12 : audit [DBG] from='client.24430 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.242263+0000 mgr.y (mgr.14520) 12 : audit [DBG] from='client.24430 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: cephadm 2026-03-09T15:54:48.243579+0000 mgr.y (mgr.14520) 13 : cephadm [INF] Saving service grafana spec with placement vm09=a;count:1 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: cephadm 2026-03-09T15:54:48.243579+0000 mgr.y (mgr.14520) 13 : cephadm [INF] Saving service grafana spec with placement vm09=a;count:1 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.248864+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.248864+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.404544+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.404544+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.412150+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.412150+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: cluster 2026-03-09T15:54:48.630558+0000 mgr.y 
(mgr.14520) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: cluster 2026-03-09T15:54:48.630558+0000 mgr.y (mgr.14520) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.887259+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.887259+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.894039+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.894039+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.896936+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:48.896936+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:49.054908+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:49.054908+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:49.061280+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:49.061280+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:49.062440+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:49.062440+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:54:49.634 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:49.063831+0000 mon.a (mon.0) 793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:49.063831+0000 mon.a (mon.0) 793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:49.064315+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:49.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:49 vm09 bash[22983]: audit 2026-03-09T15:54:49.064315+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:54:49.845 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:49.845 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:49.845 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:49.845 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:49.845 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:54:49.845 INFO:journalctl@ceph.rgw.foo.a.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:49.845 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:49.845 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:49.845 INFO:journalctl@ceph.osd.3.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.118 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.118 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:50 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.118 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.118 INFO:journalctl@ceph.osd.3.vm01.stdout:Mar 09 15:54:50 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.118 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:54:50 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.118 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 15:54:50 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.118 INFO:journalctl@ceph.rgw.foo.a.vm01.stdout:Mar 09 15:54:50 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.118 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.118 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:49 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.118 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:50 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.118 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:50 vm01 systemd[1]: Started Ceph node-exporter.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T15:54:50.119 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:54:50 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.065082+0000 mgr.y (mgr.14520) 15 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:54:50.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.065082+0000 mgr.y (mgr.14520) 15 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:54:50.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.065221+0000 mgr.y (mgr.14520) 16 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:54:50.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.065221+0000 mgr.y (mgr.14520) 16 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:54:50.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.111009+0000 mgr.y (mgr.14520) 17 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.111009+0000 mgr.y (mgr.14520) 17 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.113664+0000 mgr.y (mgr.14520) 18 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.113664+0000 mgr.y (mgr.14520) 18 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.150511+0000 mgr.y (mgr.14520) 19 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.150511+0000 mgr.y (mgr.14520) 19 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.162938+0000 mgr.y (mgr.14520) 20 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.162938+0000 mgr.y (mgr.14520) 20 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.181379+0000 mgr.y (mgr.14520) 21 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.181379+0000 mgr.y (mgr.14520) 21 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.215498+0000 mgr.y (mgr.14520) 22 : cephadm [INF] Updating 
vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.215498+0000 mgr.y (mgr.14520) 22 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:49.232721+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:49.232721+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:49.238874+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:49.238874+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:49.263355+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:49.263355+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:49.270391+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:49.270391+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:49.276157+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:49.276157+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.279676+0000 mgr.y (mgr.14520) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm01 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: cephadm 2026-03-09T15:54:49.279676+0000 mgr.y (mgr.14520) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm01 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:50.128728+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:50.128728+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:50.137804+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:50.137804+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:50.143809+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:50 vm01 bash[28152]: audit 2026-03-09T15:54:50.143809+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.065082+0000 mgr.y (mgr.14520) 15 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.065082+0000 mgr.y (mgr.14520) 15 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.065221+0000 mgr.y (mgr.14520) 16 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.065221+0000 mgr.y (mgr.14520) 16 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.111009+0000 mgr.y (mgr.14520) 17 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.111009+0000 mgr.y (mgr.14520) 17 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.113664+0000 mgr.y (mgr.14520) 18 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.113664+0000 mgr.y (mgr.14520) 18 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.150511+0000 mgr.y (mgr.14520) 19 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.150511+0000 mgr.y (mgr.14520) 19 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.162938+0000 mgr.y (mgr.14520) 20 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.162938+0000 
mgr.y (mgr.14520) 20 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.181379+0000 mgr.y (mgr.14520) 21 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.181379+0000 mgr.y (mgr.14520) 21 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.215498+0000 mgr.y (mgr.14520) 22 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.215498+0000 mgr.y (mgr.14520) 22 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:49.232721+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:49.232721+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:49.238874+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:49.238874+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:49.263355+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:49.263355+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:49.270391+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:49.270391+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:49.276157+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:49.276157+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: 
cephadm 2026-03-09T15:54:49.279676+0000 mgr.y (mgr.14520) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm01 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: cephadm 2026-03-09T15:54:49.279676+0000 mgr.y (mgr.14520) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm01 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:50.128728+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:50.128728+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:50.137804+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:50.137804+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:50.143809+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[20728]: audit 2026-03-09T15:54:50.143809+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.431 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:50 vm01 bash[55412]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally 2026-03-09T15:54:50.599 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.065082+0000 mgr.y (mgr.14520) 15 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:54:50.599 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.065082+0000 mgr.y (mgr.14520) 15 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.065221+0000 mgr.y (mgr.14520) 16 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.065221+0000 mgr.y (mgr.14520) 16 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.111009+0000 mgr.y (mgr.14520) 17 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.111009+0000 mgr.y (mgr.14520) 17 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.113664+0000 mgr.y (mgr.14520) 18 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.113664+0000 mgr.y (mgr.14520) 18 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.conf 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.150511+0000 mgr.y (mgr.14520) 19 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.150511+0000 mgr.y (mgr.14520) 19 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.162938+0000 mgr.y (mgr.14520) 20 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.162938+0000 mgr.y (mgr.14520) 20 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.181379+0000 mgr.y (mgr.14520) 21 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.181379+0000 mgr.y (mgr.14520) 21 : cephadm [INF] Updating vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.215498+0000 mgr.y (mgr.14520) 22 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.215498+0000 mgr.y (mgr.14520) 22 : cephadm [INF] Updating vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/config/ceph.client.admin.keyring 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:49.232721+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:49.232721+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:49.238874+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:49.238874+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:49.263355+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:49.263355+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:49.270391+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:49.270391+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:49.276157+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:49.276157+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.279676+0000 mgr.y (mgr.14520) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm01 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: cephadm 2026-03-09T15:54:49.279676+0000 mgr.y (mgr.14520) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm01 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:50.128728+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:50.128728+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:50.137804+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:50.137804+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:50.143809+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.600 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[22983]: audit 2026-03-09T15:54:50.143809+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:50.849 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.850 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.850 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.850 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.850 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.850 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.850 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:50.850 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:51.133 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:51.133 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:51.133 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:51.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:51.133 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:51.133 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:51.133 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:51.133 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:51.134 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:50 vm09 systemd[1]: Started Ceph node-exporter.b for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 
2026-03-09T15:54:51.134 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:50 vm09 bash[50127]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally 2026-03-09T15:54:51.679 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[55412]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-09T15:54:51.679 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:51 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: cephadm 2026-03-09T15:54:50.144735+0000 mgr.y (mgr.14520) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm09 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: cephadm 2026-03-09T15:54:50.144735+0000 mgr.y (mgr.14520) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm09 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: cluster 2026-03-09T15:54:50.631094+0000 mgr.y (mgr.14520) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: cluster 2026-03-09T15:54:50.631094+0000 mgr.y (mgr.14520) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: audit 2026-03-09T15:54:50.976627+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: audit 2026-03-09T15:54:50.976627+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: audit 2026-03-09T15:54:50.985610+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: audit 2026-03-09T15:54:50.985610+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: audit 2026-03-09T15:54:50.992434+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: audit 2026-03-09T15:54:50.992434+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: audit 2026-03-09T15:54:50.997584+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: audit 2026-03-09T15:54:50.997584+0000 mon.a 
(mon.0) 806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: audit 2026-03-09T15:54:51.008808+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:51 vm01 bash[28152]: audit 2026-03-09T15:54:51.008808+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: cephadm 2026-03-09T15:54:50.144735+0000 mgr.y (mgr.14520) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm09 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: cephadm 2026-03-09T15:54:50.144735+0000 mgr.y (mgr.14520) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm09 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: cluster 2026-03-09T15:54:50.631094+0000 mgr.y (mgr.14520) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: cluster 2026-03-09T15:54:50.631094+0000 mgr.y (mgr.14520) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: audit 2026-03-09T15:54:50.976627+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: audit 2026-03-09T15:54:50.976627+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: audit 2026-03-09T15:54:50.985610+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: audit 2026-03-09T15:54:50.985610+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: audit 2026-03-09T15:54:50.992434+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: audit 2026-03-09T15:54:50.992434+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: audit 2026-03-09T15:54:50.997584+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: audit 2026-03-09T15:54:50.997584+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: audit 2026-03-09T15:54:51.008808+0000 mon.a (mon.0) 807 : audit [INF] 
from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[20728]: audit 2026-03-09T15:54:51.008808+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.180 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[55412]: 2abcce694348: Pulling fs layer 2026-03-09T15:54:52.180 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[55412]: 455fd88e5221: Pulling fs layer 2026-03-09T15:54:52.180 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:51 vm01 bash[55412]: 324153f2810a: Pulling fs layer 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: cephadm 2026-03-09T15:54:50.144735+0000 mgr.y (mgr.14520) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm09 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: cephadm 2026-03-09T15:54:50.144735+0000 mgr.y (mgr.14520) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm09 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: cluster 2026-03-09T15:54:50.631094+0000 mgr.y (mgr.14520) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: cluster 2026-03-09T15:54:50.631094+0000 mgr.y (mgr.14520) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: audit 2026-03-09T15:54:50.976627+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: audit 2026-03-09T15:54:50.976627+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: audit 2026-03-09T15:54:50.985610+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: audit 2026-03-09T15:54:50.985610+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: audit 2026-03-09T15:54:50.992434+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: audit 2026-03-09T15:54:50.992434+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: audit 2026-03-09T15:54:50.997584+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: audit 2026-03-09T15:54:50.997584+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: audit 2026-03-09T15:54:51.008808+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:51 vm09 bash[22983]: audit 2026-03-09T15:54:51.008808+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:52.679 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: 455fd88e5221: Verifying Checksum 2026-03-09T15:54:52.679 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: 455fd88e5221: Download complete 2026-03-09T15:54:52.679 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: 2abcce694348: Verifying Checksum 2026-03-09T15:54:52.679 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: 2abcce694348: Download complete 2026-03-09T15:54:52.679 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: 2abcce694348: Pull complete 2026-03-09T15:54:52.679 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: 324153f2810a: Verifying Checksum 2026-03-09T15:54:52.679 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: 324153f2810a: Download complete 2026-03-09T15:54:52.942 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:54:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:54:52.943 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: 455fd88e5221: Pull complete 2026-03-09T15:54:52.943 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: 324153f2810a: Pull complete 2026-03-09T15:54:52.943 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-09T15:54:52.943 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T15:54:53.029 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:54:53.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 -- 192.168.123.101:0/1418036982 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8be8108a40 msgr2=0x7f8be8108e20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:53.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 --2- 192.168.123.101:0/1418036982 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8be8108a40 0x7f8be8108e20 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f8bdc009a60 tx=0x7f8bdc02f290 comp rx=0 tx=0).stop 2026-03-09T15:54:53.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 -- 192.168.123.101:0/1418036982 shutdown_connections 2026-03-09T15:54:53.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 --2- 192.168.123.101:0/1418036982 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8be81033e0 0x7f8be810f910 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:54:53.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 --2- 192.168.123.101:0/1418036982 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8be8102a40 0x7f8be8102ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:53.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 --2- 192.168.123.101:0/1418036982 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8be8108a40 0x7f8be8108e20 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:53.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 -- 192.168.123.101:0/1418036982 >> 192.168.123.101:0/1418036982 conn(0x7f8be80fe740 msgr2=0x7f8be8100b60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:53.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 -- 192.168.123.101:0/1418036982 shutdown_connections 2026-03-09T15:54:53.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 -- 192.168.123.101:0/1418036982 wait complete. 2026-03-09T15:54:53.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 Processor -- start 2026-03-09T15:54:53.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 -- start start 2026-03-09T15:54:53.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8be8102a40 0x7f8be81165c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:53.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8be81033e0 0x7f8be8116b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8be8108a40 0x7f8be81116e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f8be8076a60 con 0x7f8be8102a40 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f8be80768e0 con 0x7f8be81033e0 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8becdbb640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f8be8076be0 con 0x7f8be8108a40 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8be5d74640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8be81033e0 0x7f8be8116b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8be5d74640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 
conn(0x7f8be81033e0 0x7f8be8116b00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:43964/0 (socket says 192.168.123.101:43964) 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8be5d74640 1 -- 192.168.123.101:0/2913972082 learned_addr learned my addr 192.168.123.101:0/2913972082 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8be6d76640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8be8108a40 0x7f8be81116e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8be6d76640 1 -- 192.168.123.101:0/2913972082 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8be81033e0 msgr2=0x7f8be8116b00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8be6d76640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8be81033e0 0x7f8be8116b00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8be6d76640 1 -- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8be8102a40 msgr2=0x7f8be81165c0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:54:53.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8be6d76640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8be8102a40 0x7f8be81165c0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:53.207 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8be6d76640 1 -- 192.168.123.101:0/2913972082 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8be8111fa0 con 0x7f8be8108a40 2026-03-09T15:54:53.207 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8be5d74640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8be81033e0 0x7f8be8116b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T15:54:53.207 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.203+0000 7f8be6d76640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8be8108a40 0x7f8be81116e0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f8bd800bf70 tx=0x7f8bd800c4e0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:53.207 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.207+0000 7f8bcf7fe640 1 -- 192.168.123.101:0/2913972082 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8bd8006e60 con 0x7f8be8108a40 2026-03-09T15:54:53.207 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.207+0000 7f8bcf7fe640 1 -- 192.168.123.101:0/2913972082 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f8bd8010070 con 0x7f8be8108a40 2026-03-09T15:54:53.207 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.207+0000 7f8bcf7fe640 1 -- 192.168.123.101:0/2913972082 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8bd8015470 con 0x7f8be8108a40 2026-03-09T15:54:53.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.207+0000 7f8becdbb640 1 -- 192.168.123.101:0/2913972082 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f8be8112290 con 0x7f8be8108a40 2026-03-09T15:54:53.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.207+0000 7f8becdbb640 1 -- 192.168.123.101:0/2913972082 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f8be811c4b0 con 0x7f8be8108a40 2026-03-09T15:54:53.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.207+0000 7f8becdbb640 1 -- 192.168.123.101:0/2913972082 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8bb4005180 con 0x7f8be8108a40 2026-03-09T15:54:53.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.211+0000 7f8bcf7fe640 1 -- 192.168.123.101:0/2913972082 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f8bd8002b00 con 0x7f8be8108a40 2026-03-09T15:54:53.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.211+0000 7f8bcf7fe640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8bb8077750 0x7f8bb8079c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:53.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.211+0000 7f8bcf7fe640 1 -- 192.168.123.101:0/2913972082 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f8bd8099d70 con 0x7f8be8108a40 2026-03-09T15:54:53.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.211+0000 7f8bcf7fe640 1 -- 192.168.123.101:0/2913972082 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f8bd809a170 con 0x7f8be8108a40 2026-03-09T15:54:53.213 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.211+0000 7f8be6575640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8bb8077750 0x7f8bb8079c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:53.214 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.211+0000 7f8be6575640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8bb8077750 0x7f8bb8079c10 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f8bdc02f7a0 tx=0x7f8bdc0023d0 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:53.359 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.355+0000 7f8becdbb640 1 -- 192.168.123.101:0/2913972082 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7f8bb4005470 con 0x7f8be8108a40 2026-03-09T15:54:53.369 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.367+0000 7f8bcf7fe640 1 -- 192.168.123.101:0/2913972082 <== mon.2 v2:192.168.123.101:3301/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v16) ==== 170+0+59 (secure 0 0 0) 0x7f8bd8002e20 con 0x7f8be8108a40 2026-03-09T15:54:53.370 INFO:teuthology.orchestra.run.vm01.stdout:[client.0] 2026-03-09T15:54:53.370 INFO:teuthology.orchestra.run.vm01.stdout: key = AQBN7a5pf1iOFRAAsqeNNH7wPtXSXiaMMit9Ig== 2026-03-09T15:54:53.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 -- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8bb8077750 msgr2=0x7f8bb8079c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:53.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8bb8077750 0x7f8bb8079c10 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f8bdc02f7a0 tx=0x7f8bdc0023d0 comp rx=0 tx=0).stop 2026-03-09T15:54:53.374 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 -- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8be8108a40 msgr2=0x7f8be81116e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:53.374 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8be8108a40 0x7f8be81116e0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f8bd800bf70 tx=0x7f8bd800c4e0 comp rx=0 tx=0).stop 2026-03-09T15:54:53.374 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 -- 192.168.123.101:0/2913972082 shutdown_connections 2026-03-09T15:54:53.374 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8bb8077750 0x7f8bb8079c10 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:53.375 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8be8108a40 0x7f8be81116e0 unknown :-1 s=CLOSED pgs=56 cs=0 
l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:53.375 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8be81033e0 0x7f8be8116b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:53.375 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 --2- 192.168.123.101:0/2913972082 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8be8102a40 0x7f8be81165c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:53.375 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 -- 192.168.123.101:0/2913972082 >> 192.168.123.101:0/2913972082 conn(0x7f8be80fe740 msgr2=0x7f8be80feb20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:53.375 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 -- 192.168.123.101:0/2913972082 shutdown_connections 2026-03-09T15:54:53.375 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:54:53.371+0000 7f8becdbb640 1 -- 192.168.123.101:0/2913972082 wait complete. 2026-03-09T15:54:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:52 vm09 bash[22983]: cephadm 2026-03-09T15:54:51.024330+0000 mgr.y (mgr.14520) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm01 2026-03-09T15:54:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:52 vm09 bash[22983]: cephadm 2026-03-09T15:54:51.024330+0000 mgr.y (mgr.14520) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm01 2026-03-09T15:54:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:52 vm09 bash[22983]: audit 2026-03-09T15:54:52.681233+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:52 vm09 bash[22983]: audit 2026-03-09T15:54:52.681233+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:53.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[20728]: cephadm 2026-03-09T15:54:51.024330+0000 mgr.y (mgr.14520) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm01 2026-03-09T15:54:53.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[20728]: cephadm 2026-03-09T15:54:51.024330+0000 mgr.y (mgr.14520) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm01 2026-03-09T15:54:53.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[20728]: audit 2026-03-09T15:54:52.681233+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:53.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[20728]: audit 2026-03-09T15:54:52.681233+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:53.386 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:52 vm01 bash[28152]: cephadm 2026-03-09T15:54:51.024330+0000 mgr.y (mgr.14520) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm01 2026-03-09T15:54:53.386 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:52 vm01 bash[28152]: cephadm 2026-03-09T15:54:51.024330+0000 mgr.y (mgr.14520) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm01 2026-03-09T15:54:53.386 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:52 vm01 bash[28152]: 
audit 2026-03-09T15:54:52.681233+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:53.386 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:52 vm01 bash[28152]: audit 2026-03-09T15:54:52.681233+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:53.386 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.944Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T15:54:53.386 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.944Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T15:54:53.386 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.945Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T15:54:53.386 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.945Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.945Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.946Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.946Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.946Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.946Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.947Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.947Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.947Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T15:54:53.387 
INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.947Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.947Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.947Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.947Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.947Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.947Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.948Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.948Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.948Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.948Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.948Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.948Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.948Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.948Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.949Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.949Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.949Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.949Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: 
ts=2026-03-09T15:54:52.949Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.949Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.949Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.949Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.949Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.949Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.949Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.950Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.950Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T15:54:53.387 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.950Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.950Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.950Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.950Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.950Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.950Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.950Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.951Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.951Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.951Z caller=node_exporter.go:117 level=info collector=vmstat 
2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.951Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.951Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.952Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T15:54:53.388 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:52 vm01 bash[55412]: ts=2026-03-09T15:54:52.952Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T15:54:53.455 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:54:53.455 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T15:54:53.455 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T15:54:53.469 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T15:54:54.355 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:53 vm09 bash[22983]: cluster 2026-03-09T15:54:52.631440+0000 mgr.y (mgr.14520) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T15:54:54.355 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:53 vm09 bash[22983]: cluster 2026-03-09T15:54:52.631440+0000 mgr.y (mgr.14520) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T15:54:54.355 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:53 vm09 bash[22983]: audit 2026-03-09T15:54:53.361060+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.101:0/2913972082' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.355 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:53 vm09 bash[22983]: audit 2026-03-09T15:54:53.361060+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.101:0/2913972082' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.355 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:53 vm09 bash[22983]: audit 2026-03-09T15:54:53.361535+0000 mon.a (mon.0) 809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.355 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:53 vm09 bash[22983]: audit 2026-03-09T15:54:53.361535+0000 mon.a (mon.0) 809 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.355 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:53 vm09 bash[22983]: audit 2026-03-09T15:54:53.365679+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:54:54.355 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:53 vm09 bash[22983]: audit 2026-03-09T15:54:53.365679+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:54:54.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:53 vm01 bash[28152]: cluster 2026-03-09T15:54:52.631440+0000 mgr.y (mgr.14520) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T15:54:54.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:53 vm01 bash[28152]: cluster 2026-03-09T15:54:52.631440+0000 mgr.y (mgr.14520) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T15:54:54.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:53 vm01 bash[28152]: audit 2026-03-09T15:54:53.361060+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.101:0/2913972082' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:53 vm01 bash[28152]: audit 2026-03-09T15:54:53.361060+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.101:0/2913972082' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:53 vm01 bash[28152]: audit 2026-03-09T15:54:53.361535+0000 mon.a (mon.0) 809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:53 vm01 bash[28152]: audit 2026-03-09T15:54:53.361535+0000 mon.a (mon.0) 809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:53 vm01 bash[28152]: audit 2026-03-09T15:54:53.365679+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:54:54.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:53 vm01 bash[28152]: audit 2026-03-09T15:54:53.365679+0000 mon.a (mon.0) 810 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:54:54.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:53 vm01 bash[20728]: cluster 2026-03-09T15:54:52.631440+0000 mgr.y (mgr.14520) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T15:54:54.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:53 vm01 bash[20728]: cluster 2026-03-09T15:54:52.631440+0000 mgr.y (mgr.14520) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T15:54:54.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:53 vm01 bash[20728]: audit 2026-03-09T15:54:53.361060+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.101:0/2913972082' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:53 vm01 bash[20728]: audit 2026-03-09T15:54:53.361060+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.101:0/2913972082' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:53 vm01 bash[20728]: audit 2026-03-09T15:54:53.361535+0000 mon.a (mon.0) 809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:53 vm01 bash[20728]: audit 2026-03-09T15:54:53.361535+0000 mon.a (mon.0) 809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:54.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:53 vm01 bash[20728]: audit 2026-03-09T15:54:53.365679+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:54:54.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:53 vm01 bash[20728]: audit 2026-03-09T15:54:53.365679+0000 mon.a (mon.0) 810 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:54:54.633 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:54 vm09 bash[50127]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-09T15:54:55.133 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:54 vm09 bash[50127]: 2abcce694348: Pulling fs layer 2026-03-09T15:54:55.133 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:54 vm09 bash[50127]: 455fd88e5221: Pulling fs layer 2026-03-09T15:54:55.133 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:54 vm09 bash[50127]: 324153f2810a: Pulling fs layer 2026-03-09T15:54:55.238 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:55 vm01 bash[20728]: cluster 2026-03-09T15:54:54.632025+0000 mgr.y (mgr.14520) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:54:55.238 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:55 vm01 bash[20728]: cluster 2026-03-09T15:54:54.632025+0000 mgr.y (mgr.14520) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:54:55.238 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:55 vm01 bash[28152]: cluster 2026-03-09T15:54:54.632025+0000 mgr.y (mgr.14520) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:54:55.238 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:55 vm01 bash[28152]: cluster 2026-03-09T15:54:54.632025+0000 mgr.y (mgr.14520) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:54:55.523 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.523 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.523 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.523 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.523 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.523 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.523 INFO:journalctl@ceph.osd.3.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.523 INFO:journalctl@ceph.rgw.foo.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.523 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.523 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:54:55.524 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[22983]: cluster 2026-03-09T15:54:54.632025+0000 mgr.y (mgr.14520) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:54:55.524 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[22983]: cluster 2026-03-09T15:54:54.632025+0000 mgr.y (mgr.14520) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:54:55.524 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: 455fd88e5221: Verifying Checksum 2026-03-09T15:54:55.524 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: 455fd88e5221: Download complete 2026-03-09T15:54:55.524 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: 2abcce694348: Download complete 2026-03-09T15:54:55.524 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: 2abcce694348: Pull complete 2026-03-09T15:54:55.774 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.774 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.774 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.774 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.774 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:54:55.774 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.774 INFO:journalctl@ceph.osd.3.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.775 INFO:journalctl@ceph.rgw.foo.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.775 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.775 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.775 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.775 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.775 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:55.775 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:55 vm01 systemd[1]: Started Ceph alertmanager.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T15:54:55.796 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: 455fd88e5221: Pull complete 2026-03-09T15:54:55.796 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: 324153f2810a: Download complete 2026-03-09T15:54:55.796 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: 324153f2810a: Pull complete 2026-03-09T15:54:55.796 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-09T15:54:55.796 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T15:54:56.133 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.843Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.843Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.844Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.844Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.845Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.845Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T15:54:56.139 
INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T15:54:56.139 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: 
ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.846Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=thermal_zone 
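
Editor's note: the filesystem collector enabled above logs the two exclude patterns it was started with (--collector.filesystem.mount-points-exclude and --collector.filesystem.fs-types-exclude). A minimal Python sketch, using the mount-point pattern exactly as logged and a few hypothetical sample paths, shows which mounts that pattern drops from scraping:

import re

# Exclude pattern exactly as logged by node_exporter above.
MOUNT_EXCLUDE = re.compile(
    r"^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)"
)

# Hypothetical mount points, only to exercise the pattern.
samples = [
    "/",
    "/proc",
    "/sys/fs/cgroup",
    "/var/lib/docker/overlay2/abc",
    "/var/lib/ceph",
    "/run/credentials/foo",
]

for mp in samples:
    status = "excluded" if MOUNT_EXCLUDE.search(mp) else "collected"
    print(f"{mp}: {status}")

Only "/" and "/var/lib/ceph" survive the filter in this sample set; pseudo-filesystems and container storage mounts are skipped, which matches the intent of the logged flags.
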
2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.847Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.849Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T15:54:56.140 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:54:55 vm09 bash[50127]: ts=2026-03-09T15:54:55.849Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T15:54:56.179 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:55 vm01 bash[21002]: [09/Mar/2026:15:54:55] ENGINE Bus STOPPING 2026-03-09T15:54:56.179 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:55 vm01 bash[55869]: ts=2026-03-09T15:54:55.958Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T15:54:56.179 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:55 vm01 bash[55869]: ts=2026-03-09T15:54:55.958Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T15:54:56.179 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:55 vm01 bash[55869]: ts=2026-03-09T15:54:55.961Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.101 port=9094 2026-03-09T15:54:56.179 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:55 vm01 bash[55869]: ts=2026-03-09T15:54:55.962Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." 
interval=2s 2026-03-09T15:54:56.179 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[55869]: ts=2026-03-09T15:54:56.000Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T15:54:56.179 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[55869]: ts=2026-03-09T15:54:56.001Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T15:54:56.179 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[55869]: ts=2026-03-09T15:54:56.002Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T15:54:56.179 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[55869]: ts=2026-03-09T15:54:56.002Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093 2026-03-09T15:54:56.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:54:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:54:56.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:54:56 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:54:56.679 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:56 vm01 bash[21002]: [09/Mar/2026:15:54:56] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T15:54:56.679 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:56 vm01 bash[21002]: [09/Mar/2026:15:54:56] ENGINE Bus STOPPED 2026-03-09T15:54:56.679 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:56 vm01 bash[21002]: [09/Mar/2026:15:54:56] ENGINE Bus STARTING 2026-03-09T15:54:56.679 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:56 vm01 bash[21002]: [09/Mar/2026:15:54:56] ENGINE Serving on http://:::9283 2026-03-09T15:54:56.679 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:54:56 vm01 bash[21002]: [09/Mar/2026:15:54:56] ENGINE Bus STARTED 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.822441+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.822441+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.833281+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.833281+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.843512+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.133 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.843512+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.854823+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.854823+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: cephadm 2026-03-09T15:54:55.862088+0000 mgr.y (mgr.14520) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: cephadm 2026-03-09T15:54:55.862088+0000 mgr.y (mgr.14520) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.912001+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.912001+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.922262+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.922262+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.931343+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.931343+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.932721+0000 mgr.y (mgr.14520) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.932721+0000 mgr.y (mgr.14520) 30 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.939243+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: audit 2026-03-09T15:54:55.939243+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: cephadm 2026-03-09T15:54:55.954370+0000 mgr.y (mgr.14520) 31 : cephadm [INF] Deploying daemon grafana.a on vm09 2026-03-09T15:54:57.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:56 vm09 bash[22983]: cephadm 2026-03-09T15:54:55.954370+0000 mgr.y (mgr.14520) 31 : cephadm [INF] Deploying daemon grafana.a on vm09 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.822441+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.822441+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.833281+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.833281+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.843512+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.843512+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.854823+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.854823+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: cephadm 2026-03-09T15:54:55.862088+0000 mgr.y (mgr.14520) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: cephadm 2026-03-09T15:54:55.862088+0000 mgr.y (mgr.14520) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.912001+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 
vm01 bash[28152]: audit 2026-03-09T15:54:55.912001+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.922262+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.922262+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.931343+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.931343+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.932721+0000 mgr.y (mgr.14520) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.932721+0000 mgr.y (mgr.14520) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.939243+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: audit 2026-03-09T15:54:55.939243+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: cephadm 2026-03-09T15:54:55.954370+0000 mgr.y (mgr.14520) 31 : cephadm [INF] Deploying daemon grafana.a on vm09 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:56 vm01 bash[28152]: cephadm 2026-03-09T15:54:55.954370+0000 mgr.y (mgr.14520) 31 : cephadm [INF] Deploying daemon grafana.a on vm09 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.822441+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.822441+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.833281+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.833281+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.843512+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.843512+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.854823+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.854823+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: cephadm 2026-03-09T15:54:55.862088+0000 mgr.y (mgr.14520) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: cephadm 2026-03-09T15:54:55.862088+0000 mgr.y (mgr.14520) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.912001+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.912001+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.922262+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.922262+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.931343+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.931343+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.932721+0000 mgr.y (mgr.14520) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.932721+0000 mgr.y (mgr.14520) 30 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.939243+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: audit 2026-03-09T15:54:55.939243+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: cephadm 2026-03-09T15:54:55.954370+0000 mgr.y (mgr.14520) 31 : cephadm [INF] Deploying daemon grafana.a on vm09 2026-03-09T15:54:57.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:56 vm01 bash[20728]: cephadm 2026-03-09T15:54:55.954370+0000 mgr.y (mgr.14520) 31 : cephadm [INF] Deploying daemon grafana.a on vm09 2026-03-09T15:54:58.105 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.b/config 2026-03-09T15:54:58.193 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:57 vm09 bash[22983]: audit 2026-03-09T15:54:56.144917+0000 mgr.y (mgr.14520) 32 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:58.193 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:57 vm09 bash[22983]: audit 2026-03-09T15:54:56.144917+0000 mgr.y (mgr.14520) 32 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:58.193 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:57 vm09 bash[22983]: cluster 2026-03-09T15:54:56.632461+0000 mgr.y (mgr.14520) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:54:58.193 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:57 vm09 bash[22983]: cluster 2026-03-09T15:54:56.632461+0000 mgr.y (mgr.14520) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:54:58.193 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:57 vm09 bash[22983]: audit 2026-03-09T15:54:57.688536+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:58.193 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:57 vm09 bash[22983]: audit 2026-03-09T15:54:57.688536+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:58.193 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:57 vm09 bash[22983]: audit 2026-03-09T15:54:57.723675+0000 mon.a (mon.0) 820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:58.193 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:57 vm09 bash[22983]: audit 2026-03-09T15:54:57.723675+0000 mon.a (mon.0) 820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:58.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.259+0000 7f9380a99640 1 -- 192.168.123.109:0/1061937928 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f937c107910 msgr2=0x7f937c107cf0 secure 
:-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:58.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.259+0000 7f9380a99640 1 --2- 192.168.123.109:0/1061937928 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f937c107910 0x7f937c107cf0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f9364009a30 tx=0x7f936402f240 comp rx=0 tx=0).stop 2026-03-09T15:54:58.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.259+0000 7f9380a99640 1 -- 192.168.123.109:0/1061937928 shutdown_connections 2026-03-09T15:54:58.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.259+0000 7f9380a99640 1 --2- 192.168.123.109:0/1061937928 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f937c1022b0 0x7f937c10e7e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:58.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.259+0000 7f9380a99640 1 --2- 192.168.123.109:0/1061937928 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f937c101910 0x7f937c101d70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:58.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.259+0000 7f9380a99640 1 --2- 192.168.123.109:0/1061937928 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f937c107910 0x7f937c107cf0 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:58.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.259+0000 7f9380a99640 1 -- 192.168.123.109:0/1061937928 >> 192.168.123.109:0/1061937928 conn(0x7f937c0fd630 msgr2=0x7f937c0ffa50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:58.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.259+0000 7f9380a99640 1 -- 192.168.123.109:0/1061937928 shutdown_connections 2026-03-09T15:54:58.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.259+0000 7f9380a99640 1 -- 192.168.123.109:0/1061937928 wait complete. 
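
Editor's note: the "--2- ... mark_down" and "shutdown_connections" lines above are messenger-level debug output that the ceph CLI prints on its stderr when client-side message debugging is enabled; the command's actual result still arrives on stdout, which is why teuthology records the two streams as separate .stdout/.stderr log channels. A small Python sketch of that separation (the "ceph health" invocation is an illustrative assumption; any ceph CLI command behaves the same way):

import subprocess

# Run a ceph CLI command and keep stdout (the machine-readable result)
# apart from stderr (where the verbose messenger/debug lines land).
proc = subprocess.run(
    ["ceph", "health", "--format", "json"],
    capture_output=True,
    text=True,
    check=False,
)

print("result (stdout):", proc.stdout.strip())
# Each stderr line corresponds to entries like the "--2- ... mark_down" ones above.
for line in proc.stderr.splitlines():
    print("debug (stderr):", line)
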
2026-03-09T15:54:58.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9380a99640 1 Processor -- start 2026-03-09T15:54:58.266 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9380a99640 1 -- start start 2026-03-09T15:54:58.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9380a99640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f937c101910 0x7f937c104f50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:58.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9380a99640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f937c1022b0 0x7f937c1034e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:58.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9380a99640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f937c107910 0x7f937c103a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:58.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9380a99640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f937c110ff0 con 0x7f937c107910 2026-03-09T15:54:58.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9380a99640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f937c110e70 con 0x7f937c101910 2026-03-09T15:54:58.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9380a99640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f937c111170 con 0x7f937c1022b0 2026-03-09T15:54:58.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f937a575640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f937c101910 0x7f937c104f50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:58.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f937a575640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f937c101910 0x7f937c104f50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.109:40528/0 (socket says 192.168.123.109:40528) 2026-03-09T15:54:58.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f937a575640 1 -- 192.168.123.109:0/1415388191 learned_addr learned my addr 192.168.123.109:0/1415388191 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T15:54:58.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f937ad76640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f937c107910 0x7f937c103a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:58.267 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f937a575640 1 -- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f937c1022b0 msgr2=0x7f937c1034e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:58.268 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9379d74640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f937c1022b0 0x7f937c1034e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:58.268 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f937a575640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f937c1022b0 0x7f937c1034e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:58.268 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f937a575640 1 -- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f937c107910 msgr2=0x7f937c103a60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:58.268 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f937a575640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f937c107910 0x7f937c103a60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:58.268 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f937a575640 1 -- 192.168.123.109:0/1415388191 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f937c104320 con 0x7f937c101910 2026-03-09T15:54:58.268 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f937ad76640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f937c107910 0x7f937c103a60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T15:54:58.268 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9379d74640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f937c1022b0 0x7f937c1034e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
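
Editor's note: after the banner/hello handshake the client sends mon_getmap to each monitor endpoint (v2 on 3300, v1 on 6789) and then keeps a single session, marking the other connections down. The same endpoints can be listed from the monmap; a hedged Python sketch, assuming the usual "ceph mon dump --format json" output layout with a top-level "mons" array (field names may differ between releases):

import json
import subprocess

# Fetch the monmap as JSON; field names below assume the usual
# "ceph mon dump" layout and are not taken from this log.
raw = subprocess.run(
    ["ceph", "mon", "dump", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

monmap = json.loads(raw)
for mon in monmap.get("mons", []):
    # "public_addrs" carries the v2/v1 address vector seen in the log above.
    addrs = [a.get("addr") for a in mon.get("public_addrs", {}).get("addrvec", [])]
    print(mon.get("name"), addrs)
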
2026-03-09T15:54:58.268 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f937a575640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f937c101910 0x7f937c104f50 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f9364009b60 tx=0x7f936402fe10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:58.268 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f93637fe640 1 -- 192.168.123.109:0/1415388191 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9364004340 con 0x7f937c101910 2026-03-09T15:54:58.269 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f93637fe640 1 -- 192.168.123.109:0/1415388191 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f93640044e0 con 0x7f937c101910 2026-03-09T15:54:58.269 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f93637fe640 1 -- 192.168.123.109:0/1415388191 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9364004db0 con 0x7f937c101910 2026-03-09T15:54:58.269 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9380a99640 1 -- 192.168.123.109:0/1415388191 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f937c1ad580 con 0x7f937c101910 2026-03-09T15:54:58.271 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.263+0000 7f9380a99640 1 -- 192.168.123.109:0/1415388191 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f937c1ad9e0 con 0x7f937c101910 2026-03-09T15:54:58.271 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.267+0000 7f93637fe640 1 -- 192.168.123.109:0/1415388191 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f9364005030 con 0x7f937c101910 2026-03-09T15:54:58.271 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.267+0000 7f9380a99640 1 -- 192.168.123.109:0/1415388191 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9340005180 con 0x7f937c101910 2026-03-09T15:54:58.271 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.267+0000 7f93637fe640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9354077670 0x7f9354079b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:54:58.271 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.267+0000 7f93637fe640 1 -- 192.168.123.109:0/1415388191 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f93640c2050 con 0x7f937c101910 2026-03-09T15:54:58.274 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.271+0000 7f9379d74640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9354077670 0x7f9354079b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:54:58.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.271+0000 7f9379d74640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9354077670 0x7f9354079b30 secure :-1 
s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f9370006fd0 tx=0x7f9370008040 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:54:58.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.271+0000 7f93637fe640 1 -- 192.168.123.109:0/1415388191 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f936408a290 con 0x7f937c101910 2026-03-09T15:54:58.425 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.419+0000 7f9380a99640 1 -- 192.168.123.109:0/1415388191 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7f9340005740 con 0x7f937c101910 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:57 vm01 bash[28152]: audit 2026-03-09T15:54:56.144917+0000 mgr.y (mgr.14520) 32 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:57 vm01 bash[28152]: audit 2026-03-09T15:54:56.144917+0000 mgr.y (mgr.14520) 32 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:57 vm01 bash[28152]: cluster 2026-03-09T15:54:56.632461+0000 mgr.y (mgr.14520) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:57 vm01 bash[28152]: cluster 2026-03-09T15:54:56.632461+0000 mgr.y (mgr.14520) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:57 vm01 bash[28152]: audit 2026-03-09T15:54:57.688536+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:57 vm01 bash[28152]: audit 2026-03-09T15:54:57.688536+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:57 vm01 bash[28152]: audit 2026-03-09T15:54:57.723675+0000 mon.a (mon.0) 820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:57 vm01 bash[28152]: audit 2026-03-09T15:54:57.723675+0000 mon.a (mon.0) 820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:57 vm01 bash[20728]: audit 2026-03-09T15:54:56.144917+0000 mgr.y (mgr.14520) 32 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:57 vm01 bash[20728]: audit 2026-03-09T15:54:56.144917+0000 mgr.y (mgr.14520) 32 : audit [DBG] from='client.14496 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:57 vm01 bash[20728]: cluster 2026-03-09T15:54:56.632461+0000 mgr.y (mgr.14520) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:57 vm01 bash[20728]: cluster 2026-03-09T15:54:56.632461+0000 mgr.y (mgr.14520) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:57 vm01 bash[20728]: audit 2026-03-09T15:54:57.688536+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:57 vm01 bash[20728]: audit 2026-03-09T15:54:57.688536+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:57 vm01 bash[20728]: audit 2026-03-09T15:54:57.723675+0000 mon.a (mon.0) 820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:58.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:57 vm01 bash[20728]: audit 2026-03-09T15:54:57.723675+0000 mon.a (mon.0) 820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:54:58.429 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:54:57 vm01 bash[55869]: ts=2026-03-09T15:54:57.963Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000368419s 2026-03-09T15:54:58.432 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.427+0000 7f93637fe640 1 -- 192.168.123.109:0/1415388191 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v17) ==== 170+0+59 (secure 0 0 0) 0x7f936408f140 con 0x7f937c101910 2026-03-09T15:54:58.433 INFO:teuthology.orchestra.run.vm09.stdout:[client.1] 2026-03-09T15:54:58.433 INFO:teuthology.orchestra.run.vm09.stdout: key = AQBS7a5p6KuHGRAAsxpZzHYjzkz1HB+qCXSpiQ== 2026-03-09T15:54:58.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 -- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9354077670 msgr2=0x7f9354079b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:54:58.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9354077670 0x7f9354079b30 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f9370006fd0 tx=0x7f9370008040 comp rx=0 tx=0).stop 2026-03-09T15:54:58.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 -- 192.168.123.109:0/1415388191 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f937c101910 msgr2=0x7f937c104f50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
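
Editor's note: the mon_command_ack above completes the "auth get-or-create" for client.1 with the caps shown in the audit entries, and the lines that follow show the run writing the returned keyring to /etc/ceph/ceph.client.1.keyring and setting its mode to 0644. A minimal Python sketch of the same two steps (running locally with sufficient privileges instead of over SSH is an assumption of the sketch):

import os
import subprocess

# Ask the monitors for (or create) the client.1 key with the same caps
# requested in the audit entries above.
keyring = subprocess.run(
    ["ceph", "auth", "get-or-create", "client.1",
     "mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"],
    capture_output=True, text=True, check=True,
).stdout

# Install it where the ceph CLI looks for client keyrings and make it
# world-readable, mirroring the chmod 0644 in the log below.
path = "/etc/ceph/ceph.client.1.keyring"
with open(path, "w") as f:
    f.write(keyring)
os.chmod(path, 0o644)
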
2026-03-09T15:54:58.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f937c101910 0x7f937c104f50 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f9364009b60 tx=0x7f936402fe10 comp rx=0 tx=0).stop 2026-03-09T15:54:58.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 -- 192.168.123.109:0/1415388191 shutdown_connections 2026-03-09T15:54:58.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9354077670 0x7f9354079b30 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:58.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f937c107910 0x7f937c103a60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:58.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f937c1022b0 0x7f937c1034e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:58.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 --2- 192.168.123.109:0/1415388191 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f937c101910 0x7f937c104f50 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:54:58.436 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 -- 192.168.123.109:0/1415388191 >> 192.168.123.109:0/1415388191 conn(0x7f937c0fd630 msgr2=0x7f937c1056a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:54:58.436 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 -- 192.168.123.109:0/1415388191 shutdown_connections 2026-03-09T15:54:58.436 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T15:54:58.431+0000 7f9380a99640 1 -- 192.168.123.109:0/1415388191 wait complete. 2026-03-09T15:54:58.523 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T15:54:58.523 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-09T15:54:58.523 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-09T15:54:58.545 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-09T15:54:58.545 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T15:54:58.545 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph mgr dump --format=json 2026-03-09T15:54:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:58 vm09 bash[22983]: audit 2026-03-09T15:54:58.425878+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 
192.168.123.109:0/1415388191' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:58 vm09 bash[22983]: audit 2026-03-09T15:54:58.425878+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.109:0/1415388191' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:58 vm09 bash[22983]: audit 2026-03-09T15:54:58.428072+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:58 vm09 bash[22983]: audit 2026-03-09T15:54:58.428072+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:58 vm09 bash[22983]: audit 2026-03-09T15:54:58.431687+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:54:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:58 vm09 bash[22983]: audit 2026-03-09T15:54:58.431687+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:54:59.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:58 vm01 bash[28152]: audit 2026-03-09T15:54:58.425878+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.109:0/1415388191' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:58 vm01 bash[28152]: audit 2026-03-09T15:54:58.425878+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.109:0/1415388191' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:58 vm01 bash[28152]: audit 2026-03-09T15:54:58.428072+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:58 vm01 bash[28152]: audit 2026-03-09T15:54:58.428072+0000 mon.a (mon.0) 821 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:58 vm01 bash[28152]: audit 2026-03-09T15:54:58.431687+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:54:59.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:58 vm01 bash[28152]: audit 2026-03-09T15:54:58.431687+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:54:59.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:58 vm01 bash[20728]: audit 2026-03-09T15:54:58.425878+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.109:0/1415388191' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:58 vm01 bash[20728]: audit 2026-03-09T15:54:58.425878+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.109:0/1415388191' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:58 vm01 bash[20728]: audit 2026-03-09T15:54:58.428072+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:58 vm01 bash[20728]: audit 2026-03-09T15:54:58.428072+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T15:54:59.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:58 vm01 bash[20728]: audit 2026-03-09T15:54:58.431687+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:54:59.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:58 vm01 bash[20728]: audit 2026-03-09T15:54:58.431687+0000 mon.a (mon.0) 822 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T15:55:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:59 vm09 bash[22983]: cluster 2026-03-09T15:54:58.632839+0000 mgr.y (mgr.14520) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:55:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:54:59 vm09 bash[22983]: cluster 2026-03-09T15:54:58.632839+0000 mgr.y (mgr.14520) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:55:00.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:59 vm01 bash[28152]: cluster 2026-03-09T15:54:58.632839+0000 mgr.y (mgr.14520) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:55:00.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:54:59 vm01 bash[28152]: cluster 2026-03-09T15:54:58.632839+0000 mgr.y (mgr.14520) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:55:00.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:59 vm01 bash[20728]: cluster 2026-03-09T15:54:58.632839+0000 mgr.y (mgr.14520) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:55:00.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:54:59 vm01 bash[20728]: cluster 2026-03-09T15:54:58.632839+0000 mgr.y (mgr.14520) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T15:55:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:01 vm09 bash[22983]: cluster 2026-03-09T15:55:00.633459+0000 mgr.y (mgr.14520) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:55:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:01 vm09 bash[22983]: cluster 2026-03-09T15:55:00.633459+0000 mgr.y (mgr.14520) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:55:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:01 vm01 bash[20728]: cluster 2026-03-09T15:55:00.633459+0000 mgr.y (mgr.14520) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:55:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:01 vm01 bash[20728]: cluster 2026-03-09T15:55:00.633459+0000 mgr.y (mgr.14520) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:55:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:01 vm01 bash[28152]: cluster 2026-03-09T15:55:00.633459+0000 mgr.y (mgr.14520) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:55:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:01 vm01 bash[28152]: cluster 
2026-03-09T15:55:00.633459+0000 mgr.y (mgr.14520) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T15:55:03.179 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:55:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:55:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:03 vm01 bash[28152]: cluster 2026-03-09T15:55:02.633900+0000 mgr.y (mgr.14520) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:03 vm01 bash[28152]: cluster 2026-03-09T15:55:02.633900+0000 mgr.y (mgr.14520) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:03.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:03 vm01 bash[20728]: cluster 2026-03-09T15:55:02.633900+0000 mgr.y (mgr.14520) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:03.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:03 vm01 bash[20728]: cluster 2026-03-09T15:55:02.633900+0000 mgr.y (mgr.14520) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:03.214 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:03 vm09 bash[22983]: cluster 2026-03-09T15:55:02.633900+0000 mgr.y (mgr.14520) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:03 vm09 bash[22983]: cluster 2026-03-09T15:55:02.633900+0000 mgr.y (mgr.14520) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:03.406 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.403+0000 7f7de9910640 1 -- 192.168.123.101:0/1192482383 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7de4100ce0 msgr2=0x7f7de41010c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:03.406 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.403+0000 7f7de9910640 1 --2- 192.168.123.101:0/1192482383 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7de4100ce0 0x7f7de41010c0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f7dcc009a30 tx=0x7f7dcc02f260 comp rx=0 tx=0).stop 2026-03-09T15:55:03.406 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.403+0000 7f7de9910640 1 -- 192.168.123.101:0/1192482383 shutdown_connections 2026-03-09T15:55:03.406 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.403+0000 7f7de9910640 1 --2- 192.168.123.101:0/1192482383 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7de410f2d0 0x7f7de4111750 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:03.406 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.403+0000 7f7de9910640 1 --2- 192.168.123.101:0/1192482383 >> 
[v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f7de4101690 0x7f7de410ec60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:03.406 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.403+0000 7f7de9910640 1 --2- 192.168.123.101:0/1192482383 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7de4100ce0 0x7f7de41010c0 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:03.406 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.403+0000 7f7de9910640 1 -- 192.168.123.101:0/1192482383 >> 192.168.123.101:0/1192482383 conn(0x7f7de40fc910 msgr2=0x7f7de40fed30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:03.406 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.403+0000 7f7de9910640 1 -- 192.168.123.101:0/1192482383 shutdown_connections 2026-03-09T15:55:03.406 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.403+0000 7f7de9910640 1 -- 192.168.123.101:0/1192482383 wait complete. 2026-03-09T15:55:03.407 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de9910640 1 Processor -- start 2026-03-09T15:55:03.407 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de9910640 1 -- start start 2026-03-09T15:55:03.408 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de9910640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7de4100ce0 0x7f7de41a2670 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:03.408 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de2ffd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7de4100ce0 0x7f7de41a2670 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:03.408 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de2ffd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7de4100ce0 0x7f7de41a2670 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:56810/0 (socket says 192.168.123.101:56810) 2026-03-09T15:55:03.408 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de2ffd640 1 -- 192.168.123.101:0/898009189 learned_addr learned my addr 192.168.123.101:0/898009189 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:03.408 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de9910640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7de4101690 0x7f7de41a2bb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de9910640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f7de410f2d0 0x7f7de419c740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f7de4114300 con 0x7f7de4100ce0 2026-03-09T15:55:03.409 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f7de4114180 con 0x7f7de4101690 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f7de4114480 con 0x7f7de410f2d0 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de37fe640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f7de410f2d0 0x7f7de419c740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de27fc640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7de4101690 0x7f7de41a2bb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de27fc640 1 -- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f7de410f2d0 msgr2=0x7f7de419c740 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de27fc640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f7de410f2d0 0x7f7de419c740 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de27fc640 1 -- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7de4100ce0 msgr2=0x7f7de41a2670 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de27fc640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7de4100ce0 0x7f7de41a2670 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de27fc640 1 -- 192.168.123.101:0/898009189 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7de419cfb0 con 0x7f7de4101690 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de27fc640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7de4101690 0x7f7de41a2bb0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f7de41046e0 tx=0x7f7dd800cde0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:03.409 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de890e640 1 -- 192.168.123.101:0/898009189 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7dd8013070 con 0x7f7de4101690 2026-03-09T15:55:03.410 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de890e640 1 -- 
192.168.123.101:0/898009189 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f7dd8004480 con 0x7f7de4101690 2026-03-09T15:55:03.410 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de890e640 1 -- 192.168.123.101:0/898009189 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7dd8002ca0 con 0x7f7de4101690 2026-03-09T15:55:03.410 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de2ffd640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7de4100ce0 0x7f7de41a2670 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:55:03.410 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f7de419d2a0 con 0x7f7de4101690 2026-03-09T15:55:03.410 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f7de41a9420 con 0x7f7de4101690 2026-03-09T15:55:03.411 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.407+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7da8005180 con 0x7f7de4101690 2026-03-09T15:55:03.412 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.411+0000 7f7de890e640 1 -- 192.168.123.101:0/898009189 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f7dd8020050 con 0x7f7de4101690 2026-03-09T15:55:03.412 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.411+0000 7f7de890e640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f7db8077700 0x7f7db8079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:03.412 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.411+0000 7f7de890e640 1 -- 192.168.123.101:0/898009189 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f7dd809a3f0 con 0x7f7de4101690 2026-03-09T15:55:03.412 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.411+0000 7f7de2ffd640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f7db8077700 0x7f7db8079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:03.413 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.411+0000 7f7de2ffd640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f7db8077700 0x7f7db8079bc0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f7dcc0097c0 tx=0x7f7dcc0057d0 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:03.414 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.411+0000 7f7de890e640 1 -- 192.168.123.101:0/898009189 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 
0x7f7dd8066d90 con 0x7f7de4101690 2026-03-09T15:55:03.547 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.543+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "mgr dump", "format": "json"} v 0) -- 0x7f7da8005740 con 0x7f7de4101690 2026-03-09T15:55:03.550 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.547+0000 7f7de890e640 1 -- 192.168.123.101:0/898009189 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "mgr dump", "format": "json"}]=0 v20) ==== 74+0+192098 (secure 0 0 0) 0x7f7dd806bc40 con 0x7f7de4101690 2026-03-09T15:55:03.551 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:55:03.555 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.551+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f7db8077700 msgr2=0x7f7db8079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:03.555 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.551+0000 7f7de9910640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f7db8077700 0x7f7db8079bc0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f7dcc0097c0 tx=0x7f7dcc0057d0 comp rx=0 tx=0).stop 2026-03-09T15:55:03.555 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.551+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7de4101690 msgr2=0x7f7de41a2bb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:03.555 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.551+0000 7f7de9910640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7de4101690 0x7f7de41a2bb0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f7de41046e0 tx=0x7f7dd800cde0 comp rx=0 tx=0).stop 2026-03-09T15:55:03.555 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.555+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 shutdown_connections 2026-03-09T15:55:03.555 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.555+0000 7f7de9910640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f7db8077700 0x7f7db8079bc0 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:03.555 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.555+0000 7f7de9910640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f7de410f2d0 0x7f7de419c740 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:03.555 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.555+0000 7f7de9910640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7de4101690 0x7f7de41a2bb0 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:03.555 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.555+0000 7f7de9910640 1 --2- 192.168.123.101:0/898009189 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7de4100ce0 0x7f7de41a2670 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:03.555 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.555+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 >> 192.168.123.101:0/898009189 conn(0x7f7de40fc910 msgr2=0x7f7de40fed00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:03.555 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.555+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 shutdown_connections 2026-03-09T15:55:03.555 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:03.555+0000 7f7de9910640 1 -- 192.168.123.101:0/898009189 wait complete. 2026-03-09T15:55:03.630 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":20,"flags":0,"active_gid":14520,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":123914266},{"type":"v1","addr":"192.168.123.101:6801","nonce":123914266}]},"active_addr":"192.168.123.101:6801/123914266","active_change":"2026-03-09T15:54:42.595938+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24407,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope 
sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.101:8443/","prometheus":"http://192.168.123.101:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":64,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":907533954}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":1168332968}]},{"na
me":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":2249911185}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":1175306703}]}]} 2026-03-09T15:55:03.632 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T15:55:03.632 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T15:55:03.632 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd dump --format=json 2026-03-09T15:55:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:04 vm09 bash[22983]: audit 2026-03-09T15:55:03.548449+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.101:0/898009189' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T15:55:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:04 vm09 bash[22983]: audit 2026-03-09T15:55:03.548449+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.101:0/898009189' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T15:55:04.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:04 vm01 bash[28152]: audit 2026-03-09T15:55:03.548449+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.101:0/898009189' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T15:55:04.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:04 vm01 bash[28152]: audit 2026-03-09T15:55:03.548449+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.101:0/898009189' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T15:55:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:04 vm01 bash[20728]: audit 2026-03-09T15:55:03.548449+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.101:0/898009189' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T15:55:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:04 vm01 bash[20728]: audit 2026-03-09T15:55:03.548449+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 
192.168.123.101:0/898009189' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T15:55:05.455 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:05 vm09 bash[22983]: cluster 2026-03-09T15:55:04.634518+0000 mgr.y (mgr.14520) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:05.455 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:05 vm09 bash[22983]: cluster 2026-03-09T15:55:04.634518+0000 mgr.y (mgr.14520) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:05.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:05 vm01 bash[28152]: cluster 2026-03-09T15:55:04.634518+0000 mgr.y (mgr.14520) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:05.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:05 vm01 bash[28152]: cluster 2026-03-09T15:55:04.634518+0000 mgr.y (mgr.14520) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:05.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:05 vm01 bash[20728]: cluster 2026-03-09T15:55:04.634518+0000 mgr.y (mgr.14520) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:05.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:05 vm01 bash[20728]: cluster 2026-03-09T15:55:04.634518+0000 mgr.y (mgr.14520) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:05.803 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:05.804 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:05.804 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:05.804 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:05.804 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:05.804 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:05.804 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:05.804 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:05.804 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:05.804 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:05.804 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T15:55:05.804 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:05.804 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:06.133 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:06.134 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:06.134 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:06.134 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:06.134 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:06.134 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:06.134 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:06.134 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:06.134 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T15:55:06.134 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:05 vm09 systemd[1]: Started Ceph grafana.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 
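The "waiting for all up" step above runs `ceph osd dump --format=json` through the cephadm shell, and the harness presumably keeps polling until every OSD in that dump reports both up and in. A minimal stand-alone sketch of such a check (not the harness's actual implementation; the file name osd_dump.json is illustrative, assuming the dump was captured to disk first):

#!/usr/bin/env python3
# Sketch: report whether every OSD listed in a `ceph osd dump --format=json`
# capture is both up and in. The input file name is a hypothetical example.
import json

with open("osd_dump.json") as f:
    dump = json.load(f)

# Each entry in "osds" carries integer "up" and "in" flags (1 = healthy).
osds = dump.get("osds", [])
up_in = [o for o in osds if o.get("up") == 1 and o.get("in") == 1]
print(f"{len(up_in)}/{len(osds)} OSDs up+in; all healthy: {len(up_in) == len(osds)}")

Once every OSD reports up and in, the harness proceeds past the "waiting for all up" stage, which is why the pgmap lines above already show all 132 PGs active+clean.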
2026-03-09T15:55:06.407 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179000917Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-09T15:55:06Z 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179383524Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179457351Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.1795207Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179573448Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179625687Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179673817Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179721727Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179766991Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179814351Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179860587Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179910239Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.179956646Z level=info msg=Target target=[all] 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.180007472Z level=info msg="Path Home" path=/usr/share/grafana 2026-03-09T15:55:06.407 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.180060351Z level=info msg="Path Data" path=/var/lib/grafana 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.180118609Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.180168082Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.180216082Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=settings t=2026-03-09T15:55:06.180264662Z level=info msg="App mode production" 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=sqlstore t=2026-03-09T15:55:06.180503661Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=sqlstore t=2026-03-09T15:55:06.180571157Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-09T15:55:06.407 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.181092492Z level=info msg="Starting DB migrations" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.181936704Z level=info msg="Executing migration" id="create migration_log table" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.18256573Z level=info msg="Migration successfully executed" id="create migration_log table" duration=631.03µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.184984508Z level=info msg="Executing migration" id="create user table" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.1855303Z level=info msg="Migration successfully executed" id="create user table" duration=545.651µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.187038684Z level=info msg="Executing migration" id="add unique index user.login" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.187685044Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=646.089µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.189219907Z level=info msg="Executing migration" id="add unique index user.email" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.189746022Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=526.225µs 2026-03-09T15:55:06.408 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.191489426Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.192066395Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=576.709µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.193421002Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.194041854Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=619.26µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.195174784Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.196490838Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.316164ms 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.198013609Z level=info msg="Executing migration" id="create user table v2" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.198661212Z level=info msg="Migration successfully executed" id="create user table v2" duration=648.804µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.200506094Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.201105106Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=599.192µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.202245631Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.202771345Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=525.645µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.204320194Z level=info msg="Executing migration" id="copy data_source v1 to v2" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.204697551Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=377.326µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.206188351Z level=info msg="Executing migration" id="Drop old table user_v1" 2026-03-09T15:55:06.408 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.206620982Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=432.63µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.208088027Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.20876273Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=674.412µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.209914286Z level=info msg="Executing migration" id="Update user table charset" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.210115954Z level=info msg="Migration successfully executed" id="Update user table charset" duration=202.008µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.21156678Z level=info msg="Executing migration" id="Add last_seen_at column to user" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.212218651Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=652.041µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.213542798Z level=info msg="Executing migration" id="Add missing user data" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.213850535Z level=info msg="Migration successfully executed" id="Add missing user data" duration=307.868µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.21502323Z level=info msg="Executing migration" id="Add is_disabled column to user" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.215666875Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=643.324µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.217412192Z level=info msg="Executing migration" id="Add index user.login/user.email" 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.218001225Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=589.424µs 2026-03-09T15:55:06.408 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.219726184Z level=info msg="Executing migration" id="Add is_service_account column to user" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.22047194Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" 
duration=745.636µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.221662478Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.224909187Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=3.249845ms 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.226347749Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.227275417Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=922.227µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.230030894Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.232983622Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=2.952678ms 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.236374309Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.236726468Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=352.188µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.238085201Z level=info msg="Executing migration" id="create temp user table v1-7" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.23844797Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=362.559µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.24019947Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.240548312Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=351.448µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.24187718Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.242205895Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=328.685µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.243581149Z 
level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.243930574Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=349.113µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.245250093Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.245588707Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=338.393µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.247244306Z level=info msg="Executing migration" id="Update temp_user table charset" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.247258232Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=14.507µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.248365846Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.248682318Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=316.201µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.249643308Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.249952968Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=312.505µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.251534337Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.251856791Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=322.173µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.252885458Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.25320736Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=321.731µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.254203305Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-09T15:55:06.409 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.255371761Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.168066ms 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.256931191Z level=info msg="Executing migration" id="create temp_user v2" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.257404486Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=469.649µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.258472806Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.258900126Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=427.21µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.26016808Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.260761501Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=593.501µs 2026-03-09T15:55:06.409 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.261961998Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.262569486Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=607.729µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.263938108Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.264467588Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=529.771µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.266014243Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.266388475Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=374.121µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.268043914Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.268490479Z level=info msg="Migration successfully executed" 
id="drop temp_user_tmp_qwerty" duration=446.736µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.269622899Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.26999756Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=376.224µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.271487079Z level=info msg="Executing migration" id="create star table" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.272029875Z level=info msg="Migration successfully executed" id="create star table" duration=540.221µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.273613008Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.274145054Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=529.671µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.275596511Z level=info msg="Executing migration" id="create org table v1" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.276137825Z level=info msg="Migration successfully executed" id="create org table v1" duration=541.284µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.277484326Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.278013565Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=529.17µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.279454133Z level=info msg="Executing migration" id="create org_user table v1" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.279960871Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=504.485µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.281798421Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.282443689Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=643.104µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator 
t=2026-03-09T15:55:06.283951702Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.284578205Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=626.773µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.285979457Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.286603366Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=624.058µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.288013125Z level=info msg="Executing migration" id="Update org table charset" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.288227666Z level=info msg="Migration successfully executed" id="Update org table charset" duration=214.953µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.289785643Z level=info msg="Executing migration" id="Update org_user table charset" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.289999002Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=213.78µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.291501474Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.291788712Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=287.348µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.293036278Z level=info msg="Executing migration" id="create dashboard table" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.29357687Z level=info msg="Migration successfully executed" id="create dashboard table" duration=540.683µs 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.295021315Z level=info msg="Executing migration" id="add index dashboard.account_id" 2026-03-09T15:55:06.410 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.29558032Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=559.046µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.297389768Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 2026-03-09T15:55:06.411 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.297968902Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=579.053µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.29939941Z level=info msg="Executing migration" id="create dashboard_tag table" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.299895198Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=495.728µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.301310377Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.301932741Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=622.184µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.30363066Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.304178516Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=547.806µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.305354157Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.307210722Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=1.856444ms 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.308640189Z level=info msg="Executing migration" id="create dashboard v2" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.309248988Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=608.771µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.31077277Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.311401687Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=628.957µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.312891075Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator 
t=2026-03-09T15:55:06.313508462Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=618.949µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.31492315Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.315360428Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=437.419µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.316709213Z level=info msg="Executing migration" id="drop table dashboard_v1" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.317292114Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=582.891µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.318785379Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.31902588Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=240.54µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.3204788Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.321325004Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=846.134µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.32278643Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.323641019Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=854.39µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.325391255Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.32616285Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=768.038µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.327395117Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.327922093Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=526.958µs 2026-03-09T15:55:06.411 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.329053611Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.329874167Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=820.366µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.331574511Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.332098581Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=520.434µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.333323816Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.333789817Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=461.893µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.335279305Z level=info msg="Executing migration" id="Update dashboard table charset" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.335291268Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=12.463µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.337092499Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.337103259Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=11.272µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.338315689Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.339152095Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=831.345µs 2026-03-09T15:55:06.411 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.340373172Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.341176505Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=803.124µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.342315567Z level=info msg="Executing 
migration" id="Add column has_acl in dashboard" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.343161291Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=845.392µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.344796892Z level=info msg="Executing migration" id="Add column uid in dashboard" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.345564839Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=768.178µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.346630004Z level=info msg="Executing migration" id="Update uid column values in dashboard" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.346862168Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=230.592µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.348234097Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.348696442Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=462.414µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.350364144Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.350866665Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=485.959µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.352223916Z level=info msg="Executing migration" id="Update dashboard title length" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.352237582Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=14.176µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.353496067Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.353996464Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=502.641µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.355669295Z level=info msg="Executing migration" id="create dashboard_provisioning" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: 
logger=migrator t=2026-03-09T15:55:06.356181254Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=511.588µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.357693405Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.359553256Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=1.85967ms 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.361039879Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.361515458Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=475.841µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.369108894Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.369667038Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=558.294µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.371138532Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.371637547Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=498.865µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.373043599Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.373320557Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=276.868µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.374364923Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.37473212Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=367.048µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.376369275Z level=info msg="Executing migration" id="Add check_sum column" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 
vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.377164994Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=795.49µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.378251307Z level=info msg="Executing migration" id="Add index for dashboard_title" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.378716238Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=464.851µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.380510176Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.380694732Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=180.307µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.381850646Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.382022999Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=177.191µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.383171949Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.383679429Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=510.026µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.385048663Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.385916757Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=867.864µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.387754989Z level=info msg="Executing migration" id="create data_source table" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.388352837Z level=info msg="Migration successfully executed" id="create data_source table" duration=597.77µs 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.389758329Z level=info msg="Executing migration" id="add index data_source.account_id" 2026-03-09T15:55:06.412 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.390316874Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=558.585µs 2026-03-09T15:55:06.412 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.39168219Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.392192295Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=510.085µs 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.393854507Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.394317743Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=460.231µs 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.395565459Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.39602541Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=459.66µs 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.397092718Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.399133398Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=2.040209ms 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.400814806Z level=info msg="Executing migration" id="create data_source table v2" 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.401449374Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=634.348µs 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.402597864Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.403246819Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=648.794µs 2026-03-09T15:55:06.413 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.404913329Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 2026-03-09T15:55:06.428 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:05 vm01 bash[55869]: ts=2026-03-09T15:55:05.965Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002265408s 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: 
logger=migrator t=2026-03-09T15:55:06.407482017Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=2.568558ms 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.408841342Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.409273691Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=432.479µs 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.41055542Z level=info msg="Executing migration" id="Add column with_credentials" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.412073152Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.52209ms 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.413938353Z level=info msg="Executing migration" id="Add secure json data column" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.414971557Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.033154ms 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.416573004Z level=info msg="Executing migration" id="Update data_source table charset" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.416734638Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=161.493µs 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.417972685Z level=info msg="Executing migration" id="Update initial version to 1" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.418205071Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=233.647µs 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.419925231Z level=info msg="Executing migration" id="Add read_only data column" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.420880739Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=955.739µs 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.422309714Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.422539634Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=229.761µs 2026-03-09T15:55:06.657 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.423879572Z level=info msg="Executing migration" id="Update json_data with nulls" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.424103713Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=221.646µs 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.4252889Z level=info msg="Executing migration" id="Add uid column" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.426319911Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.027093ms 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.427675067Z level=info msg="Executing migration" id="Update uid value" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.427968537Z level=info msg="Migration successfully executed" id="Update uid value" duration=293.5µs 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.429586506Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.430255388Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=669.111µs 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.431594584Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.432213263Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=618.829µs 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.433582477Z level=info msg="Executing migration" id="create api_key table" 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.434208509Z level=info msg="Migration successfully executed" id="create api_key table" duration=622.516µs 2026-03-09T15:55:06.657 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.435807462Z level=info msg="Executing migration" id="add index api_key.account_id" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.436390973Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=583.672µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.437693281Z level=info msg="Executing migration" id="add index api_key.key" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 
09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.438297893Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=604.723µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.439710567Z level=info msg="Executing migration" id="add index api_key.account_id_name" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.440303868Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=593.191µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.442058803Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.442638068Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=579.595µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.4440972Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.444765511Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=668.732µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.446611186Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.447139785Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=529.001µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.448502015Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.450765892Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.263737ms 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.452272143Z level=info msg="Executing migration" id="create api_key table v2" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.452739338Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=468.627µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.454270905Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.454782813Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - 
v2" duration=509.323µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.456173175Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.456724097Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=549.549µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.457914144Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.458406727Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=492.513µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.460202558Z level=info msg="Executing migration" id="copy api_key v1 to v2" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.460516617Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=317.094µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.461622887Z level=info msg="Executing migration" id="Drop old table api_key_v1" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.462018949Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=396.042µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.463147401Z level=info msg="Executing migration" id="Update api_key table charset" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.463318581Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=171.642µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.464973219Z level=info msg="Executing migration" id="Add expires to api_key table" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.465928396Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=954.887µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.467066687Z level=info msg="Executing migration" id="Add service account foreign key" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.468084062Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.016963ms 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.469333691Z level=info msg="Executing migration" id="set 
service account foreign key to nil if 0" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.469563902Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=230.06µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.470896708Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.471847176Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=950.278µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.473061379Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.474051122Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=989.653µs 2026-03-09T15:55:06.658 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.479061321Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.479439158Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=377.947µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.481370314Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.481786121Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=415.627µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.482881523Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.48341432Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=531.935µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.48469649Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.48516671Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=470.201µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.486914953Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-09T15:55:06.659 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.487480571Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=565.728µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.488716515Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.489223996Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=507.37µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.490507407Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.490669871Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=161.632µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.492067448Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.492234691Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=165.089µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.493691057Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.494689467Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=998.129µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.495959994Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.496977991Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.017605ms 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.498304664Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.49847356Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=168.746µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.500178392Z level=info msg="Executing migration" id="create quota table v1" 2026-03-09T15:55:06.659 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.500638853Z level=info msg="Migration successfully executed" id="create quota table v1" duration=461.614µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.501909282Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.502390973Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=481.632µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.503714662Z level=info msg="Executing migration" id="Update quota table charset" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.503743325Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=29.174µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.504940456Z level=info msg="Executing migration" id="create plugin_setting table" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.505421927Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=481.351µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.507035678Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.507534132Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=498.274µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.508750598Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.50982993Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=1.07921ms 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.510943664Z level=info msg="Executing migration" id="Update plugin_setting table charset" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.510975663Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=28.333µs 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.512584215Z level=info msg="Executing migration" id="create session table" 2026-03-09T15:55:06.659 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.513134365Z 
level=info msg="Migration successfully executed" id="create session table" duration=550.11µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.514436492Z level=info msg="Executing migration" id="Drop old table playlist table" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.51463275Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=196.168µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.515874734Z level=info msg="Executing migration" id="Drop old table playlist_item table" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.516102412Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=227.907µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.517467566Z level=info msg="Executing migration" id="create playlist table v2" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.517941955Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=476.663µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.519282072Z level=info msg="Executing migration" id="create playlist item table v2" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.519768574Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=488.315µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.521031478Z level=info msg="Executing migration" id="Update playlist table charset" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.521207397Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=176.321µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.522341119Z level=info msg="Executing migration" id="Update playlist_item table charset" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.522368691Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=29.275µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.524033077Z level=info msg="Executing migration" id="Add playlist column created_at" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.525128929Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=1.091834ms 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator 
t=2026-03-09T15:55:06.526484856Z level=info msg="Executing migration" id="Add playlist column updated_at" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.527606927Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.118856ms 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.528913754Z level=info msg="Executing migration" id="drop preferences table v2" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.529118537Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=204.532µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.530568331Z level=info msg="Executing migration" id="drop preferences table v3" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.530769697Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=201.087µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.532066565Z level=info msg="Executing migration" id="create preferences table v3" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.532577461Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=510.666µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.533941886Z level=info msg="Executing migration" id="Update preferences table charset" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.534116833Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=175.248µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.538806272Z level=info msg="Executing migration" id="Add column team_id in preferences" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.540043136Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.240251ms 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.541732059Z level=info msg="Executing migration" id="Update team_id column values in preferences" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.541962911Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=231.083µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.543110128Z level=info msg="Executing migration" id="Add column week_start in preferences" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 
bash[50619]: logger=migrator t=2026-03-09T15:55:06.544259209Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=1.14854ms 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.545749289Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.546976206Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.224792ms 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.548691056Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.548918151Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=227.617µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.550310237Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.551019044Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=706.383µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.552340367Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.552909253Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=569.006µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.554736462Z level=info msg="Executing migration" id="create alert table v1" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.555595521Z level=info msg="Migration successfully executed" id="create alert table v1" duration=858.918µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.557764511Z level=info msg="Executing migration" id="add index alert org_id & id " 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.558369924Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=605.654µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.559780725Z level=info msg="Executing migration" id="add index alert state" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.560382843Z level=info msg="Migration successfully executed" id="add index alert state" duration=601.777µs 2026-03-09T15:55:06.660 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.562044132Z level=info msg="Executing migration" id="add index alert dashboard_id" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.562592379Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=548.276µs 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.563916317Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 2026-03-09T15:55:06.660 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.564441981Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=525.455µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.565892125Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.5665888Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=692.247µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.568391174Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.568954669Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=563.615µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.570065257Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.572882972Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=2.821161ms 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.574214576Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.574698261Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=483.816µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.576413692Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.576953963Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 
duration=539.971µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.578236794Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.578529983Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=289.823µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.579697338Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.580101335Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=403.937µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.582259765Z level=info msg="Executing migration" id="create alert_notification table v1" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.582736918Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=481.511µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.584112724Z level=info msg="Executing migration" id="Add column is_default" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.58536601Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.253085ms 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.586492509Z level=info msg="Executing migration" id="Add column frequency" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.5877811Z level=info msg="Migration successfully executed" id="Add column frequency" duration=1.288411ms 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.589331242Z level=info msg="Executing migration" id="Add column send_reminder" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.590634032Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.306566ms 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.59200029Z level=info msg="Executing migration" id="Add column disable_resolve_message" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.593257834Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.259859ms 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.594463481Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 
2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.595046312Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=582.712µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.596570515Z level=info msg="Executing migration" id="Update alert table charset" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.596599239Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=29.074µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.597682907Z level=info msg="Executing migration" id="Update alert_notification table charset" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.597708625Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=26.179µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.598863847Z level=info msg="Executing migration" id="create notification_journal table v1" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.599335741Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=473.426µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.600642698Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.601128657Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=487.331µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.602410417Z level=info msg="Executing migration" id="drop alert_notification_journal" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.602903149Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=492.843µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.604876583Z level=info msg="Executing migration" id="create alert_notification_state table v1" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.605384174Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=507.199µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.606511514Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-09T15:55:06.661 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.607044773Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=533.049µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.608373579Z level=info msg="Executing migration" id="Add for to alert table" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.60982131Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=1.446608ms 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.611491656Z level=info msg="Executing migration" id="Add column uid in alert_notification" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.612986495Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.492614ms 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.614096202Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.614306997Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=210.724µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.615730241Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.616213565Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=483.444µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.617952932Z level=info msg="Executing migration" id="Remove unique index org_id_name" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.618443971Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=491.22µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.61966108Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.620917481Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=1.256131ms 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.622167912Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: 
logger=migrator t=2026-03-09T15:55:06.622293217Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=129.181µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.623741789Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.624355777Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=613.909µs 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.62593854Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 2026-03-09T15:55:06.661 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.626646435Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=707.805µs 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.628177171Z level=info msg="Executing migration" id="Drop old annotation table v4" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.628380021Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=202.549µs 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.630009781Z level=info msg="Executing migration" id="create annotation table v5" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.630574138Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=563.945µs 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.632166268Z level=info msg="Executing migration" id="add index annotation 0 v3" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.63265275Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=491.15µs 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.634516648Z level=info msg="Executing migration" id="add index annotation 1 v3" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.63503562Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=520.275µs 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.636639252Z level=info msg="Executing migration" id="add index annotation 2 v3" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.637221202Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=581.718µs 2026-03-09T15:55:06.662 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.639077477Z level=info msg="Executing migration" id="add index annotation 3 v3" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.639685865Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=607.958µs 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.641063273Z level=info msg="Executing migration" id="add index annotation 4 v3" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.64167569Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=612.166µs 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.643443229Z level=info msg="Executing migration" id="Update annotation table charset" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.643455272Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=12.533µs 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.644770924Z level=info msg="Executing migration" id="Add column region_id to annotation table" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.646570874Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=1.79972ms 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.64801686Z level=info msg="Executing migration" id="Drop category_id index" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.64857735Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=560.69µs 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.65026544Z level=info msg="Executing migration" id="Add column tags to annotation table" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.651661784Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.396053ms 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.653209251Z level=info msg="Executing migration" id="Create annotation_tag table v2" 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.653648622Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=435.685µs 2026-03-09T15:55:06.662 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.655015922Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 
2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.657531241Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=2.515128ms 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.659390331Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.659972901Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=582.06µs 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.661295548Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.665249339Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=3.934064ms 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.667469535Z level=info msg="Executing migration" id="Create annotation_tag table v3" 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.668031086Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=561.66µs 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.66987048Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.670429496Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=561.33µs 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.671732956Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.672032777Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=299.56µs 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.673103602Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.673509131Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=405.308µs 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.675014389Z level=info 
msg="Executing migration" id="Update alert annotations and set TEXT to empty" 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.675223621Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=209.071µs 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.676635524Z level=info msg="Executing migration" id="Add created time to annotation table" 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.678856862Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=2.220858ms 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.680336382Z level=info msg="Executing migration" id="Add updated time to annotation table" 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.682490303Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.153852ms 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.683866028Z level=info msg="Executing migration" id="Add index for created in annotation table" 2026-03-09T15:55:06.906 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.684504544Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=631.883µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.685900217Z level=info msg="Executing migration" id="Add index for updated in annotation table" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.686444826Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=540.782µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.68786238Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.688116847Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=253.526µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.689315209Z level=info msg="Executing migration" id="Add epoch_end column" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.690863618Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=1.547758ms 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.692701288Z level=info msg="Executing migration" id="Add index for epoch_end" 2026-03-09T15:55:06.907 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.69324164Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=537.917µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.694757337Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.694981176Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=223.558µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.696414369Z level=info msg="Executing migration" id="Move region to single row" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.69674014Z level=info msg="Migration successfully executed" id="Move region to single row" duration=325.63µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.698360172Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.699103814Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=743.913µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.700586138Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.701131509Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=545.342µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.702362724Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.702987955Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=624.96µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.7047113Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.705291106Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=574.516µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.706444716Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from 
annotation table" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.707040992Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=596.337µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.708431595Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.708986785Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=555.18µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.710425378Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.710566932Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=141.494µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.712269459Z level=info msg="Executing migration" id="create test_data table" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.712869533Z level=info msg="Migration successfully executed" id="create test_data table" duration=602.278µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.715409448Z level=info msg="Executing migration" id="create dashboard_version table v1" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.716019519Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=609.712µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.717836561Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.718468384Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=631.792µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.719853206Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.720472194Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=618.938µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.734811837Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 2026-03-09T15:55:06.907 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.735196248Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=386.815µs 2026-03-09T15:55:06.907 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.737045009Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.737416904Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=367.548µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.738547391Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.738672174Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=85.5µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.740078087Z level=info msg="Executing migration" id="create team table" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.740661989Z level=info msg="Migration successfully executed" id="create team table" duration=583.523µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.742076518Z level=info msg="Executing migration" id="add index team.org_id" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.742769544Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=692.907µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.744781942Z level=info msg="Executing migration" id="add unique index team_org_id_name" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.745363811Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=581.649µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.746750477Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.748821734Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=2.071378ms 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.750114715Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.750310271Z level=info msg="Migration successfully executed" 
id="Update uid column values in team" duration=195.475µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.751961332Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.752533864Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=572.512µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.753828246Z level=info msg="Executing migration" id="create team member table" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.754423861Z level=info msg="Migration successfully executed" id="create team member table" duration=598.169µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.755942474Z level=info msg="Executing migration" id="add index team_member.org_id" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.756494828Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=552.384µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.758041263Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.758642518Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=601.175µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.759950837Z level=info msg="Executing migration" id="add index team_member.team_id" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.760516747Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=565.879µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.761796692Z level=info msg="Executing migration" id="Add column email to team table" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.764011269Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=2.211431ms 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.765623066Z level=info msg="Executing migration" id="Add column external to team_member table" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.767764905Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=2.139696ms 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: 
logger=migrator t=2026-03-09T15:55:06.769122116Z level=info msg="Executing migration" id="Add column permission to team_member table" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.771839844Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=2.717618ms 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.773340663Z level=info msg="Executing migration" id="create dashboard acl table" 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.773961324Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=622.876µs 2026-03-09T15:55:06.908 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.77587138Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.776537498Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=662.39µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.778084794Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.778782519Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=697.564µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.780661547Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.78129312Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=629.81µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.783072992Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.783685338Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=612.557µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.785034243Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.785704787Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=670.816µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.787517952Z level=info msg="Executing migration" id="add index 
dashboard_acl_org_id_role" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.788149635Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=631.532µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.789475967Z level=info msg="Executing migration" id="add index dashboard_permission" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.790106478Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=630.621µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.791456474Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.791929761Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=473.176µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.7933017Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.793573137Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=271.068µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.794613395Z level=info msg="Executing migration" id="create tag table" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.795169075Z level=info msg="Migration successfully executed" id="create tag table" duration=559.698µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.796525243Z level=info msg="Executing migration" id="add index tag.key_value" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.79706761Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=543.728µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.798642176Z level=info msg="Executing migration" id="create login attempt table" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.799098741Z level=info msg="Migration successfully executed" id="create login attempt table" duration=456.325µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.800418271Z level=info msg="Executing migration" id="add index login_attempt.username" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.80091976Z level=info 
msg="Migration successfully executed" id="add index login_attempt.username" duration=501.388µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.802170531Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.802674335Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=503.794µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.804436193Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.809007991Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=4.570287ms 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.810320879Z level=info msg="Executing migration" id="create login_attempt v2" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.81075424Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=433.201µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.812055736Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.812574437Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=518.641µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.814261184Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.814509709Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=248.385µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.815725646Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.816096119Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=370.393µs 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.817110358Z level=info msg="Executing migration" id="create user auth table" 2026-03-09T15:55:06.909 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.817523431Z level=info msg="Migration successfully executed" id="create user auth table" duration=412.812µs 2026-03-09T15:55:06.910 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.818857709Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.819404313Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=546.463µs 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.82070126Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.820778104Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=77.145µs 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.822049134Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.823713189Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=1.663815ms 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.824965512Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.826584173Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.618501ms 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.828199176Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.829785805Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=1.585818ms 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.830817998Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.832494367Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=1.676207ms 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.833738786Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.834260522Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=521.716µs 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator 
t=2026-03-09T15:55:06.835799303Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.837419386Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=1.618189ms 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.838486324Z level=info msg="Executing migration" id="create server_lock table" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.838968435Z level=info msg="Migration successfully executed" id="create server_lock table" duration=482.993µs 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.840231611Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.840765751Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=533.71µs 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.842357851Z level=info msg="Executing migration" id="create user auth token table" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.842794638Z level=info msg="Migration successfully executed" id="create user auth token table" duration=436.578µs 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.844169823Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.844694324Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=524.421µs 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.845986944Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.846505926Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=518.782µs 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.848199276Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.84887441Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=674.874µs 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.850203357Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 2026-03-09T15:55:06.910 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.851970535Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=1.763902ms 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.85328162Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.853851908Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=570.297µs 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.855651135Z level=info msg="Executing migration" id="create cache_data table" 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.856157844Z level=info msg="Migration successfully executed" id="create cache_data table" duration=506.739µs 2026-03-09T15:55:06.910 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.857466153Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.858033916Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=567.784µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.859348177Z level=info msg="Executing migration" id="create short_url table v1" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.859864332Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=515.934µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.861419032Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.861933576Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=514.484µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.863220906Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.863299593Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=85.47µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.864500601Z level=info msg="Executing migration" id="delete alert_definition table" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.864616839Z level=info 
msg="Migration successfully executed" id="delete alert_definition table" duration=118.591µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.866018423Z level=info msg="Executing migration" id="recreate alert_definition table" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.866480578Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=461.956µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.86776404Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.868286028Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=521.918µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.869509779Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.870041022Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=531.244µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.87164242Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.871719624Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=77.445µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.872767337Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.873302147Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=534.751µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.874328098Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.874890941Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=559.397µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.883369273Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 2026-03-09T15:55:06.911 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.883953858Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=584.274µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.885256465Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.885902374Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=645.729µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.887088335Z level=info msg="Executing migration" id="Add column paused in alert_definition" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.889798448Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=2.709822ms 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.893498164Z level=info msg="Executing migration" id="drop alert_definition table" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.894049425Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=551.221µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.895583838Z level=info msg="Executing migration" id="delete alert_definition_version table" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.895721766Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=138.079µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.89710222Z level=info msg="Executing migration" id="recreate alert_definition_version table" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.89773776Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=635.299µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.899467538Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.900272185Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=804.456µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.901483322Z level=info msg="Executing migration" id="add index in 
alert_definition_version table on alert_definition_uid and version columns" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.90227326Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=791.952µs 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.904174289Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 2026-03-09T15:55:06.911 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.904371528Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=194.284µs 2026-03-09T15:55:06.912 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.90572386Z level=info msg="Executing migration" id="drop alert_definition_version table" 2026-03-09T15:55:06.912 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.907521535Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.797725ms 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:06 vm01 bash[28152]: audit 2026-03-09T15:55:05.911173+0000 mon.a (mon.0) 823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:06 vm01 bash[28152]: audit 2026-03-09T15:55:05.911173+0000 mon.a (mon.0) 823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:06 vm01 bash[28152]: audit 2026-03-09T15:55:05.919510+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:06 vm01 bash[28152]: audit 2026-03-09T15:55:05.919510+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:06 vm01 bash[28152]: audit 2026-03-09T15:55:05.924540+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:06 vm01 bash[28152]: audit 2026-03-09T15:55:05.924540+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:06 vm01 bash[28152]: audit 2026-03-09T15:55:05.929117+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:06 vm01 bash[28152]: audit 2026-03-09T15:55:05.929117+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:06 vm01 bash[28152]: audit 2026-03-09T15:55:05.943323+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:06 vm01 bash[28152]: audit 2026-03-09T15:55:05.943323+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:06 vm01 bash[20728]: audit 2026-03-09T15:55:05.911173+0000 mon.a (mon.0) 823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:06 vm01 bash[20728]: audit 2026-03-09T15:55:05.911173+0000 mon.a (mon.0) 823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:06 vm01 bash[20728]: audit 2026-03-09T15:55:05.919510+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:06 vm01 bash[20728]: audit 2026-03-09T15:55:05.919510+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:06 vm01 bash[20728]: audit 2026-03-09T15:55:05.924540+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:06 vm01 bash[20728]: audit 2026-03-09T15:55:05.924540+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:06 vm01 bash[20728]: audit 2026-03-09T15:55:05.929117+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:06 vm01 bash[20728]: audit 2026-03-09T15:55:05.929117+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:06 vm01 bash[20728]: audit 2026-03-09T15:55:05.943323+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:06 vm01 bash[20728]: audit 2026-03-09T15:55:05.943323+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:06 vm09 bash[22983]: audit 2026-03-09T15:55:05.911173+0000 mon.a (mon.0) 823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:06 vm09 bash[22983]: audit 2026-03-09T15:55:05.911173+0000 mon.a (mon.0) 823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:06 vm09 bash[22983]: audit 2026-03-09T15:55:05.919510+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:06 vm09 bash[22983]: audit 2026-03-09T15:55:05.919510+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:06 vm09 bash[22983]: audit 2026-03-09T15:55:05.924540+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:06 vm09 bash[22983]: audit 2026-03-09T15:55:05.924540+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.179 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:06 vm09 bash[22983]: audit 2026-03-09T15:55:05.929117+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.180 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:06 vm09 bash[22983]: audit 2026-03-09T15:55:05.929117+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:07.180 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:06 vm09 bash[22983]: audit 2026-03-09T15:55:05.943323+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:55:07.180 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:06 vm09 bash[22983]: audit 2026-03-09T15:55:05.943323+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.910161948Z level=info msg="Executing migration" id="create alert_instance table" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.911269781Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.107834ms 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.91369444Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.914246104Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=551.593µs 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.916126413Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.916809502Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=682.989µs 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.918107781Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.920363935Z level=info 
msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=2.256164ms 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.921606041Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.922132307Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=526.196µs 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.924060926Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.924731772Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=669.102µs 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.92595447Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.939364654Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=13.406607ms 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.941127314Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.950607111Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=9.478764ms 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.952336788Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.952870436Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=533.447µs 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.953978782Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.954494647Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=515.735µs 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.955851187Z level=info msg="Executing migration" id="add current_reason column related to current_state" 2026-03-09T15:55:07.180 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.957709746Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=1.859721ms 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.959147597Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.960904186Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=1.756458ms 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.962138446Z level=info msg="Executing migration" id="create alert_rule table" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.962642048Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=505.766µs 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.963925201Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.964453139Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=527.888µs 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.966158422Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.966665411Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=506.939µs 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.967907485Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.968496098Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=588.492µs 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.969714959Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.9697962Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=81.242µs 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.971132232Z level=info msg="Executing migration" id="add column for to 
alert_rule" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.972991303Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=1.8589ms 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.974254617Z level=info msg="Executing migration" id="add column annotations to alert_rule" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.976466398Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=2.209717ms 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.977821423Z level=info msg="Executing migration" id="add column labels to alert_rule" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.979811589Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=1.989895ms 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.981462931Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 2026-03-09T15:55:07.180 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.981978256Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=515.826µs 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.983029673Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.983566249Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=536.545µs 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.984860441Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.986976542Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=2.11576ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.988539708Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.991030261Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=2.490052ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.992330876Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id 
columns" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.992893339Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=562.603µs 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.994173975Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.996287191Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=2.113016ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:06 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.997898848Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:06.999995904Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=2.096794ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.001489901Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.001584067Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=94.809µs 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.002838315Z level=info msg="Executing migration" id="create alert_rule_version table" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.003695129Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=859.959µs 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.005535604Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.006095362Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=559.728µs 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.00731739Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.007898397Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=580.857µs 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 
bash[50619]: logger=migrator t=2026-03-09T15:55:07.00921368Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.009342871Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=129.271µs 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.010714559Z level=info msg="Executing migration" id="add column for to alert_rule_version" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.012650473Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=1.935562ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.0138721Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.015786724Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=1.914314ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.017004344Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.019255518Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=2.251005ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.024060983Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.026271522Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=2.210679ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.027891545Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.029844269Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=1.952535ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.03092383Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.031033786Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=110.226µs 2026-03-09T15:55:07.181 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.032364346Z level=info msg="Executing migration" id=create_alert_configuration_table 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.032806425Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=441.908µs 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.034411289Z level=info msg="Executing migration" id="Add column default in alert_configuration" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.037016054Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=2.604495ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.038416316Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.038534227Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=117.62µs 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.039912045Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.042054646Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=2.144695ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.043699886Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.044293368Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=592.54µs 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.045629209Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.04778331Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=2.15356ms 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.049159366Z level=info msg="Executing migration" id=create_ngalert_configuration_table 2026-03-09T15:55:07.181 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.049728422Z level=info msg="Migration successfully executed" 
id=create_ngalert_configuration_table duration=568.254µs 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.051191962Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.051872145Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=680.323µs 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.053577878Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.055561641Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=1.983462ms 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.056906429Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.057472208Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=565.548µs 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.058928644Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.059591746Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=663.001µs 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.060904403Z level=info msg="Executing migration" id="create alert_image table" 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.061629621Z level=info msg="Migration successfully executed" id="create alert_image table" duration=724.617µs 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.063020594Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.063826162Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=805.418µs 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.090257821Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.09047627Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image 
table" duration=219.981µs 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.091913131Z level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.09260742Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=693.949µs 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.093829918Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.094416206Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=586.689µs 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.104860928Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.105258122Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T15:55:07.182 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.179330833Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.179794811Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=496.079µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.181271255Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.182092092Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=821.919µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.183194255Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.185757965Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=2.564311ms 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.186990832Z level=info msg="Executing migration" id="create library_element table v1" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.187606515Z level=info 
msg="Migration successfully executed" id="create library_element table v1" duration=615.372µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.188904876Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.189565251Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=660.024µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.190741474Z level=info msg="Executing migration" id="create library_element_connection table v1" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.191296453Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=554.889µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.192504884Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.193113704Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=608.549µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.194299104Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.194894889Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=595.604µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.196181437Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.196334774Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=152.907µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.197322473Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.197495857Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=173.835µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.198463379Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.19877422Z 
level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=311.051µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.199951234Z level=info msg="Executing migration" id="create data_keys table" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.200518055Z level=info msg="Migration successfully executed" id="create data_keys table" duration=566.741µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.201672747Z level=info msg="Executing migration" id="create secrets table" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.202180066Z level=info msg="Migration successfully executed" id="create secrets table" duration=507.139µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.203387126Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.213350187Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=9.96266ms 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.214629501Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.217308856Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.677633ms 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.218617116Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.218833911Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=216.635µs 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.22017376Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.230621127Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=10.446766ms 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.231955295Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-09T15:55:07.433 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.24917554Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=17.217711ms 2026-03-09T15:55:07.434 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.250665269Z level=info msg="Executing migration" id="create kv_store table v1" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.251310747Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=645.369µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.252546391Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.253238075Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=689.409µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.254458319Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.254705011Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=246.562µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.25601309Z level=info msg="Executing migration" id="create permission table" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.256571075Z level=info msg="Migration successfully executed" id="create permission table" duration=558.095µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.257748779Z level=info msg="Executing migration" id="add unique index permission.role_id" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.258344454Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=594.383µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.259518061Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.260135197Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=616.985µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.261303244Z level=info msg="Executing migration" id="create role table" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.26181458Z level=info msg="Migration successfully executed" id="create role table" duration=511.056µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.262981254Z level=info msg="Executing migration" 
id="add column display_name" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.265397557Z level=info msg="Migration successfully executed" id="add column display_name" duration=2.415882ms 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.266496044Z level=info msg="Executing migration" id="add column group_name" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.268860299Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.363825ms 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.270032714Z level=info msg="Executing migration" id="add index role.org_id" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.270644489Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=612.075µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.271837352Z level=info msg="Executing migration" id="add unique index role_org_id_name" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.272472051Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=635.92µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.273606835Z level=info msg="Executing migration" id="add index role_org_id_uid" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.27432542Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=718.275µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.275601889Z level=info msg="Executing migration" id="create team role table" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.276286792Z level=info msg="Migration successfully executed" id="create team role table" duration=684.731µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.277500333Z level=info msg="Executing migration" id="add index team_role.org_id" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.278212366Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=712.363µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.279569678Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.28029219Z level=info msg="Migration successfully executed" id="add unique index 
team_role_org_id_team_id_role_id" duration=721.641µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.281634111Z level=info msg="Executing migration" id="add index team_role.team_id" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.282284389Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=650.358µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.283579604Z level=info msg="Executing migration" id="create user role table" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.284103233Z level=info msg="Migration successfully executed" id="create user role table" duration=523.289µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.285258948Z level=info msg="Executing migration" id="add index user_role.org_id" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.285861945Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=603.339µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.28703937Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.287633743Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=593.983µs 2026-03-09T15:55:07.434 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.28873243Z level=info msg="Executing migration" id="add index user_role.user_id" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.289349414Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=617.156µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.290524444Z level=info msg="Executing migration" id="create builtin role table" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.291057081Z level=info msg="Migration successfully executed" id="create builtin role table" duration=531.585µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.292230408Z level=info msg="Executing migration" id="add index builtin_role.role_id" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.292827665Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=595.524µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.293963151Z level=info 
msg="Executing migration" id="add index builtin_role.name" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.294569616Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=606.835µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.295790111Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.298230119Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.439647ms 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.299439854Z level=info msg="Executing migration" id="add index builtin_role.org_id" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.300201389Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=761.516µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.301411785Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.302016467Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=604.482µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.303163353Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.303883051Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=719.327µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.305059013Z level=info msg="Executing migration" id="add unique index role.uid" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.305722254Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=652.6µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.306750729Z level=info msg="Executing migration" id="create seed assignment table" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.30730153Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=550.47µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.308598408Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: 
logger=migrator t=2026-03-09T15:55:07.309281818Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=684.562µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.31038415Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.312914257Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.530116ms 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.313980463Z level=info msg="Executing migration" id="permission kind migration" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.31645754Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.476876ms 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.317586393Z level=info msg="Executing migration" id="permission attribute migration" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.320106841Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.520127ms 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.321257706Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.323697804Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.440188ms 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.324827868Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.325460263Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=632.464µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.326637606Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.32739795Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=759.842µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.328624396Z level=info msg="Executing migration" id="remove permission role_id action scope index" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.329313636Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=689.702µs 
2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.330296105Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.330909704Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=613.148µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.332119929Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.332778001Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=657.922µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.334066964Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.334272649Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=204.393µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.33558699Z level=info msg="Executing migration" id="rbac disabled migrator" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.3356085Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=21.992µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.336609964Z level=info msg="Executing migration" id="teams permissions migration" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.337004714Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=394.769µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.337915709Z level=info msg="Executing migration" id="dashboard permissions" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.338351644Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=437.249µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.339487602Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.339990552Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=503.122µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator 
t=2026-03-09T15:55:07.341286409Z level=info msg="Executing migration" id="drop managed folder create actions" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.341569228Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=282.789µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.342513946Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.343269882Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=755.745µs 2026-03-09T15:55:07.435 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.344775269Z level=info msg="Executing migration" id="create query_history_star table v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.345333164Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=557.504µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.346534794Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.347578989Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.046449ms 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.349076361Z level=info msg="Executing migration" id="add column org_id in query_history_star" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.352083019Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=3.005686ms 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.353686281Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.353719012Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=30.79µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.354768857Z level=info msg="Executing migration" id="create correlation table v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.355434684Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=665.517µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.357030361Z level=info msg="Executing migration" id="add index 
correlations.uid" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.357700757Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=670.055µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.359064519Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.360238166Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.174028ms 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.362204877Z level=info msg="Executing migration" id="add correlation config column" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.367064515Z level=info msg="Migration successfully executed" id="add correlation config column" duration=4.859327ms 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.368249523Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.368902045Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=652.461µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.369798663Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.370447307Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=649.296µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.371967843Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.380324396Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=8.356414ms 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.381539362Z level=info msg="Executing migration" id="create correlation v2" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.382277804Z level=info msg="Migration successfully executed" id="create correlation v2" duration=738.813µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.383498068Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator 
t=2026-03-09T15:55:07.38419898Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=701.834µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.385344745Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.386174708Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=828.4µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.387427524Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.388048837Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=621.334µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.389181487Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.389437125Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=255.738µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.390381454Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.390920021Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=536.804µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.392072348Z level=info msg="Executing migration" id="add provisioning column" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.394536211Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.46326ms 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.395624017Z level=info msg="Executing migration" id="create entity_events table" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.396141516Z level=info msg="Migration successfully executed" id="create entity_events table" duration=517.489µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.397091083Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.397803467Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=711.933µs 2026-03-09T15:55:07.436 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.399067774Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.399415705Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.400558534Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.400891558Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.401758391Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.4022656Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=506.868µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.403391067Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.403991661Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=600.344µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.405316832Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.405974763Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=658.042µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.407180401Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.407819646Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=639.296µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.409024242Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.409633473Z level=info msg="Migration successfully executed" 
id="drop index UQE_dashboard_public_config_uid - v2" duration=609.341µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.41063137Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.411301074Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=670.345µs 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.413137652Z level=info msg="Executing migration" id="Drop public config table" 2026-03-09T15:55:07.436 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.413699324Z level=info msg="Migration successfully executed" id="Drop public config table" duration=561.752µs 2026-03-09T15:55:07.437 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.414910531Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-09T15:55:07.437 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.415542273Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=631.551µs 2026-03-09T15:55:07.437 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.417069402Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-09T15:55:07.437 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.417775183Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=705.591µs 2026-03-09T15:55:07.437 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.419042565Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T15:55:07.437 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.419927823Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=885.448µs 2026-03-09T15:55:07.437 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.421505206Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 2026-03-09T15:55:07.437 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.422212761Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=709.217µs 2026-03-09T15:55:07.437 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.423613502Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-09T15:55:07.437 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator 
t=2026-03-09T15:55:07.432438833Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=8.82487ms 2026-03-09T15:55:07.697 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.434019241Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.436770201Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.7507ms 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.438271282Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.441488313Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=3.216551ms 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.443274457Z level=info msg="Executing migration" id="delete orphaned public dashboards" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.443556555Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=281.787µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.444642538Z level=info msg="Executing migration" id="add share column" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.447194455Z level=info msg="Migration successfully executed" id="add share column" duration=2.552066ms 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.44853734Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.448782488Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=244.818µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.450004255Z level=info msg="Executing migration" id="create file table" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.450638442Z level=info msg="Migration successfully executed" id="create file table" duration=633.836µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.452943457Z level=info msg="Executing migration" id="file table idx: path natural pk" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.453881122Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=939.368µs 2026-03-09T15:55:07.698 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.45544518Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.45621449Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=769.631µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.457712304Z level=info msg="Executing migration" id="create file_meta table" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.458269327Z level=info msg="Migration successfully executed" id="create file_meta table" duration=554.739µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.460134819Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.461011791Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=879.858µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.462568465Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.462831717Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=263.624µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.464546889Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.464648448Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=38.162µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.465886817Z level=info msg="Executing migration" id="managed permissions migration" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.466375552Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=488.525µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.468507022Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.468692298Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=186.729µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator 
t=2026-03-09T15:55:07.470102389Z level=info msg="Executing migration" id="RBAC action name migrator" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.470885423Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=783.165µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.472436087Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.476393615Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=3.957208ms 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.478128403Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.478222679Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=94.566µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.479954761Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.480940857Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=989.342µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.482382677Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.482806911Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=424.796µs 2026-03-09T15:55:07.698 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.484471457Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.484868069Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=397.012µs 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.486106929Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.486532073Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=421.339µs 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.487817139Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-09T15:55:07.699 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.491052897Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=3.233524ms 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.492808754Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.496181808Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=3.369728ms 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.498368431Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.49946293Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.093206ms 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.501141582Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.527508821Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=26.361088ms 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.529679424Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.530728007Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.051118ms 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.532475939Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.533255859Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=781.063µs 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.534672371Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.543123571Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=8.445939ms 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.545292261Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-09T15:55:07.699 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.547962729Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.668204ms 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.549495389Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.549752781Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=254.958µs 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.550973476Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.551164904Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=240.921µs 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.552371874Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.552599109Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=228.928µs 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.55403124Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.55426093Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=230.101µs 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.555530166Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-09T15:55:07.699 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.555752482Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=222.356µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.556854906Z level=info msg="Executing migration" id="create folder table" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.557506134Z level=info msg="Migration successfully executed" id="create folder table" duration=650.988µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.558749393Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 
bash[50619]: logger=migrator t=2026-03-09T15:55:07.559526717Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=777.586µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.560935174Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.561546208Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=611.705µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.562739362Z level=info msg="Executing migration" id="Update folder title length" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.562821014Z level=info msg="Migration successfully executed" id="Update folder title length" duration=85.019µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.564746989Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.56539361Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=646.811µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.566608434Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.567213257Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=604.381µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.568439091Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.569056127Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=617.045µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.570636635Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.571006407Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=369.661µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.572160287Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator 
t=2026-03-09T15:55:07.572394215Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=233.707µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.573382365Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.573970586Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=588.723µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.575579899Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.576533664Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=955.358µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.577847183Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.578625429Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=778.587µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.580089529Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.580903634Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=815.598µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.582519139Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.583363419Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=846.335µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.584793114Z level=info msg="Executing migration" id="create anon_device table" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.585490771Z level=info msg="Migration successfully executed" id="create anon_device table" duration=697.636µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.587093331Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.587939745Z level=info 
msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=846.756µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.589526125Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.5902981Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=771.625µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.591797878Z level=info msg="Executing migration" id="create signing_key table" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.59250933Z level=info msg="Migration successfully executed" id="create signing_key table" duration=710.831µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.594350597Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.595282732Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=931.092µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.596807496Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.597801377Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=994.671µs 2026-03-09T15:55:07.700 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.598901977Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.599233988Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=335.318µs 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.600592791Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.604675956Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=4.077904ms 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.608012501Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.608670784Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=661.287µs 
2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.609908721Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.610556614Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=648.063µs 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.611960262Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.612580723Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=621.222µs 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.614038592Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.614799316Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=761.125µs 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.616220917Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.616953839Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=732.14µs 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.618487049Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.61916061Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=673.931µs 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.620452017Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.621006315Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=554.047µs 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.622332428Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.622912673Z level=info msg="Migration successfully executed" id="copy 
kvstore migration status to each org" duration=581.398µs 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.624649004Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.624926243Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=278.1µs 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.626367Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.626498847Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=131.715µs 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.627803069Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.630684562Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.880532ms 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.632011647Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.634702334Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.688232ms 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.636883938Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.637272104Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=389.368µs 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=migrator t=2026-03-09T15:55:07.638406869Z level=info msg="migrations completed" performed=547 skipped=0 duration=1.456502907s 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=sqlstore t=2026-03-09T15:55:07.63930003Z level=info msg="Created default organization" 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=secrets t=2026-03-09T15:55:07.641061919Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-09T15:55:07.701 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=plugin.store t=2026-03-09T15:55:07.650872473Z level=info msg="Loading plugins..." 
2026-03-09T15:55:08.105 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:07 vm09 bash[22983]: audit 2026-03-09T15:55:06.155031+0000 mgr.y (mgr.14520) 38 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:08.105 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:07 vm09 bash[22983]: audit 2026-03-09T15:55:06.155031+0000 mgr.y (mgr.14520) 38 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:08.105 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:07 vm09 bash[22983]: cluster 2026-03-09T15:55:06.634874+0000 mgr.y (mgr.14520) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:08.105 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:07 vm09 bash[22983]: cluster 2026-03-09T15:55:06.634874+0000 mgr.y (mgr.14520) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:08.105 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:07 vm09 bash[22983]: audit 2026-03-09T15:55:07.701952+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:08.105 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:07 vm09 bash[22983]: audit 2026-03-09T15:55:07.701952+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=local.finder t=2026-03-09T15:55:07.696702546Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=plugin.store t=2026-03-09T15:55:07.69673168Z level=info msg="Plugins loaded" count=55 duration=45.860009ms 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=query_data t=2026-03-09T15:55:07.701118733Z level=info msg="Query Service initialization" 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=live.push_http t=2026-03-09T15:55:07.70375082Z level=info msg="Live Push Gateway initialization" 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=ngalert.migration t=2026-03-09T15:55:07.706418093Z level=info msg=Starting 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=ngalert.migration t=2026-03-09T15:55:07.706814295Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=ngalert.migration orgID=1 t=2026-03-09T15:55:07.707258877Z level=info msg="Migrating alerts for organisation" 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=ngalert.migration orgID=1 t=2026-03-09T15:55:07.707751339Z level=info msg="Alerts found to migrate" alerts=0 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=ngalert.migration t=2026-03-09T15:55:07.708872077Z level=info 
msg="Completed alerting migration" 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=ngalert.state.manager t=2026-03-09T15:55:07.725702162Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=infra.usagestats.collector t=2026-03-09T15:55:07.727069763Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-09T15:55:08.105 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=provisioning.datasources t=2026-03-09T15:55:07.728369465Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=provisioning.datasources t=2026-03-09T15:55:07.73466454Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=provisioning.alerting t=2026-03-09T15:55:07.740806838Z level=info msg="starting to provision alerting" 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=provisioning.alerting t=2026-03-09T15:55:07.740818329Z level=info msg="finished to provision alerting" 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=grafanaStorageLogger t=2026-03-09T15:55:07.743048334Z level=info msg="Storage starting" 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=http.server t=2026-03-09T15:55:07.743410822Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=http.server t=2026-03-09T15:55:07.743711556Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=ngalert.state.manager t=2026-03-09T15:55:07.743882495Z level=info msg="Warming state cache for startup" 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=ngalert.state.manager t=2026-03-09T15:55:07.744273758Z level=info msg="State cache has been initialized" states=0 duration=391.474µs 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=provisioning.dashboard t=2026-03-09T15:55:07.751146273Z level=info msg="starting to provision dashboards" 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=ngalert.multiorg.alertmanager t=2026-03-09T15:55:07.765097098Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=ngalert.scheduler t=2026-03-09T15:55:07.765121655Z 
level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=ticker t=2026-03-09T15:55:07.765150137Z level=info msg=starting first_tick=2026-03-09T15:55:10Z 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=plugins.update.checker t=2026-03-09T15:55:07.850396736Z level=info msg="Update check succeeded" duration=89.562989ms 2026-03-09T15:55:08.106 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:07 vm09 bash[50619]: logger=provisioning.dashboard t=2026-03-09T15:55:07.927529887Z level=info msg="finished to provision dashboards" 2026-03-09T15:55:08.293 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:07 vm01 bash[28152]: audit 2026-03-09T15:55:06.155031+0000 mgr.y (mgr.14520) 38 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:07 vm01 bash[28152]: audit 2026-03-09T15:55:06.155031+0000 mgr.y (mgr.14520) 38 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:07 vm01 bash[28152]: cluster 2026-03-09T15:55:06.634874+0000 mgr.y (mgr.14520) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:07 vm01 bash[28152]: cluster 2026-03-09T15:55:06.634874+0000 mgr.y (mgr.14520) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:07 vm01 bash[28152]: audit 2026-03-09T15:55:07.701952+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:07 vm01 bash[28152]: audit 2026-03-09T15:55:07.701952+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:07 vm01 bash[20728]: audit 2026-03-09T15:55:06.155031+0000 mgr.y (mgr.14520) 38 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:07 vm01 bash[20728]: audit 2026-03-09T15:55:06.155031+0000 mgr.y (mgr.14520) 38 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:07 vm01 bash[20728]: cluster 2026-03-09T15:55:06.634874+0000 mgr.y (mgr.14520) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:07 vm01 bash[20728]: cluster 2026-03-09T15:55:06.634874+0000 mgr.y (mgr.14520) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:07 vm01 bash[20728]: audit 2026-03-09T15:55:07.701952+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:08.319 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:07 vm01 bash[20728]: audit 2026-03-09T15:55:07.701952+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:08.383 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:08 vm09 bash[50619]: logger=grafana-apiserver t=2026-03-09T15:55:08.104652034Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-09T15:55:08.383 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:08 vm09 bash[50619]: logger=grafana-apiserver t=2026-03-09T15:55:08.105153572Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-09T15:55:08.465 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.463+0000 7f460d2f0640 1 -- 192.168.123.101:0/3210459101 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f46081058f0 msgr2=0x7f4608109940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:08.466 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.463+0000 7f460d2f0640 1 --2- 192.168.123.101:0/3210459101 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f46081058f0 0x7f4608109940 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f45fc009a30 tx=0x7f45fc02f220 comp rx=0 tx=0).stop 2026-03-09T15:55:08.466 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.463+0000 7f460d2f0640 1 -- 192.168.123.101:0/3210459101 shutdown_connections 2026-03-09T15:55:08.466 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.463+0000 7f460d2f0640 1 --2- 192.168.123.101:0/3210459101 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f460810a070 0x7f4608111bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:08.466 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.463+0000 7f460d2f0640 1 --2- 192.168.123.101:0/3210459101 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f46081058f0 0x7f4608109940 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:08.466 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.463+0000 7f460d2f0640 1 --2- 192.168.123.101:0/3210459101 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4608104f40 0x7f4608105320 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:08.466 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.463+0000 7f460d2f0640 1 -- 192.168.123.101:0/3210459101 >> 192.168.123.101:0/3210459101 conn(0x7f46081009e0 msgr2=0x7f4608102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:08.466 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.463+0000 7f460d2f0640 1 -- 192.168.123.101:0/3210459101 shutdown_connections 2026-03-09T15:55:08.466 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.463+0000 7f460d2f0640 1 -- 192.168.123.101:0/3210459101 wait complete. 
2026-03-09T15:55:08.466 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.463+0000 7f460d2f0640 1 Processor -- start 2026-03-09T15:55:08.466 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.463+0000 7f460d2f0640 1 -- start start 2026-03-09T15:55:08.467 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f460d2f0640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4608104f40 0x7f46081a2630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:08.467 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f460d2f0640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f46081058f0 0x7f46081a2b70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:08.467 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f460d2f0640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f460810a070 0x7f460819c700 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:08.467 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f460d2f0640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f46081142d0 con 0x7f460810a070 2026-03-09T15:55:08.467 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f460d2f0640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f4608114150 con 0x7f46081058f0 2026-03-09T15:55:08.467 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f460d2f0640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f4608114450 con 0x7f4608104f40 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f46067fc640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f46081058f0 0x7f46081a2b70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f46067fc640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f46081058f0 0x7f46081a2b70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:39670/0 (socket says 192.168.123.101:39670) 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f46067fc640 1 -- 192.168.123.101:0/2145318752 learned_addr learned my addr 192.168.123.101:0/2145318752 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f46077fe640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f460810a070 0x7f460819c700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f4606ffd640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4608104f40 0x7f46081a2630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f46067fc640 1 -- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4608104f40 msgr2=0x7f46081a2630 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f46067fc640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4608104f40 0x7f46081a2630 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f46067fc640 1 -- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f460810a070 msgr2=0x7f460819c700 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f46067fc640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f460810a070 0x7f460819c700 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f46067fc640 1 -- 192.168.123.101:0/2145318752 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f460819cf90 con 0x7f46081058f0 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f4606ffd640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4608104f40 0x7f46081a2630 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:55:08.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f46077fe640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f460810a070 0x7f460819c700 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T15:55:08.469 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f46067fc640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f46081058f0 0x7f46081a2b70 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f45fc009b60 tx=0x7f45fc002ce0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:08.471 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f45e7fff640 1 -- 192.168.123.101:0/2145318752 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f45fc046070 con 0x7f46081058f0 2026-03-09T15:55:08.471 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f45e7fff640 1 -- 192.168.123.101:0/2145318752 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f45fc02fc90 con 0x7f46081058f0 2026-03-09T15:55:08.471 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f45e7fff640 1 -- 192.168.123.101:0/2145318752 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f45fc041410 con 0x7f46081058f0 2026-03-09T15:55:08.471 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f460d2f0640 1 -- 192.168.123.101:0/2145318752 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f460819d220 con 0x7f46081058f0 2026-03-09T15:55:08.471 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.467+0000 7f460d2f0640 1 -- 192.168.123.101:0/2145318752 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f46081a93e0 con 0x7f46081058f0 2026-03-09T15:55:08.472 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.471+0000 7f460d2f0640 1 -- 192.168.123.101:0/2145318752 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f45cc005180 con 0x7f46081058f0 2026-03-09T15:55:08.472 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.471+0000 7f45e7fff640 1 -- 192.168.123.101:0/2145318752 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f45fc038470 con 0x7f46081058f0 2026-03-09T15:55:08.472 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.471+0000 7f45e7fff640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f45dc077750 0x7f45dc079c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:08.472 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.471+0000 7f4606ffd640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f45dc077750 0x7f45dc079c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:08.473 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.471+0000 7f45e7fff640 1 -- 192.168.123.101:0/2145318752 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f45fc0be4e0 con 0x7f46081058f0 2026-03-09T15:55:08.473 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.471+0000 7f4606ffd640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f45dc077750 0x7f45dc079c10 secure :-1 
s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f45f0005e00 tx=0x7f45f000a600 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:08.476 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.475+0000 7f45e7fff640 1 -- 192.168.123.101:0/2145318752 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f45fc08ae80 con 0x7f46081058f0 2026-03-09T15:55:08.578 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.575+0000 7f460d2f0640 1 -- 192.168.123.101:0/2145318752 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f45cc005470 con 0x7f46081058f0 2026-03-09T15:55:08.579 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.579+0000 7f45e7fff640 1 -- 192.168.123.101:0/2145318752 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v64) ==== 74+0+23444 (secure 0 0 0) 0x7f45fc08fd30 con 0x7f46081058f0 2026-03-09T15:55:08.580 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:55:08.580 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":64,"fsid":"397fadc0-1bcf-11f1-8481-edc1430c2c03","created":"2026-03-09T15:48:02.072991+0000","modified":"2026-03-09T15:54:42.595660+0000","last_up_change":"2026-03-09T15:53:49.113988+0000","last_in_change":"2026-03-09T15:53:32.372004+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T15:51:01.105730+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fa
ir distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-09T15:54:09.432657+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"55","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-09T15:54:11.429030+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"57","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-09T15:54:12.268095+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"62","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":62,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T15:54:13.373110+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"59","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T15:54:15.451800+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"61","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"85259aee-a52d-45ab-8429-e3d0212392b7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6803","nonce":115286186}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6805","nonce":115286186}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6809","nonce":115286186}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6807","nonce":115286186}]},"public_addr":"192.168.123.101:6803/115286186","cluster_addr":"192.168.123.101:6805/115286186","heartbeat_back_addr":"192.168.123.101:6809/115286186","heartbeat_front_addr":"192.168.123.101:6807/115286186","state":["exists","up"]},{"osd":1,"uuid":"e7c85482-6eb5-4953-8a19-029686ffe773","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6811","nonce":4163266826}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6813","nonce":4163266826}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6816","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6817","nonce":4163266826}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6815","nonce":4163266826}]},"public_addr":"192.168.123.101:6811/4163266826","cluster_addr":"192.168.123.101:6813/4163266826","heartbeat_back_addr":"192.168.123.101:6817/4163266826","heartbeat_front_addr":"192.168.123.101:6815/4163266826","state":["exists","up"]},{"osd":2,"uuid":"d1982b6d-a77c-466e-996b-c1ff61952b4b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6818","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6819","nonce":1701239335}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6820","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6821","nonce":1701239335}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6824","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6825","nonce":1701239335}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6822","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6823","nonce":1701239335}]},"public_addr":"192.168.123.101:6819/1701239335","cluster_addr":"192.168.123.101:6821/1701239335","heartbeat_back_addr":"192.168.123.101:6825/1701239335","heartbeat_front_addr":"192.168.123.101:6823/1701239335","state":["exists","up"]},{"osd":3,"uuid":"59646c31-d8a8-4171-8402-970963810d37","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_fro
m":26,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6826","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6827","nonce":994063283}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6828","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6829","nonce":994063283}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6832","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6833","nonce":994063283}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6830","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6831","nonce":994063283}]},"public_addr":"192.168.123.101:6827/994063283","cluster_addr":"192.168.123.101:6829/994063283","heartbeat_back_addr":"192.168.123.101:6833/994063283","heartbeat_front_addr":"192.168.123.101:6831/994063283","state":["exists","up"]},{"osd":4,"uuid":"642a6d0d-91ea-4433-b755-50a0d7442acf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6801","nonce":2242917856}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6803","nonce":2242917856}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6807","nonce":2242917856}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6805","nonce":2242917856}]},"public_addr":"192.168.123.109:6801/2242917856","cluster_addr":"192.168.123.109:6803/2242917856","heartbeat_back_addr":"192.168.123.109:6807/2242917856","heartbeat_front_addr":"192.168.123.109:6805/2242917856","state":["exists","up"]},{"osd":5,"uuid":"b983b1a8-523b-4ebf-b245-ff2849d684be","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":37,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6809","nonce":2799407982}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6811","nonce":2799407982}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6815","nonce":2799407982}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6813","nonce":2799407982}]},"public_addr":"192.168.123.109:6809/2799407982","cluster_addr":"192.168.123.109:6811/2799407982","heartbeat_back_addr":"192.168.123.109:6815/2799407982","heartbeat_front_addr":"192.168.123.109:6813/2799407982","state":["exists","up"]},{"osd":6,"uuid":"a674ed00-04ea-4bd3-ab96-9d977052e290","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":57,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6817","nonce":920695066}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6819","nonce":920695066}]},"heartbeat_back_addrs":{"
addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6823","nonce":920695066}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6821","nonce":920695066}]},"public_addr":"192.168.123.109:6817/920695066","cluster_addr":"192.168.123.109:6819/920695066","heartbeat_back_addr":"192.168.123.109:6823/920695066","heartbeat_front_addr":"192.168.123.109:6821/920695066","state":["exists","up"]},{"osd":7,"uuid":"d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":50,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6825","nonce":1747724061}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6827","nonce":1747724061}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6831","nonce":1747724061}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6829","nonce":1747724061}]},"public_addr":"192.168.123.109:6825/1747724061","cluster_addr":"192.168.123.109:6827/1747724061","heartbeat_back_addr":"192.168.123.109:6831/1747724061","heartbeat_front_addr":"192.168.123.109:6829/1747724061","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:49:48.509437+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:50:22.208224+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:50:56.403701+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:51:31.273145+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:52:05.018838+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:52:39.069791+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:53:12.641048+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:53:47.432546+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:6801/1421049061":"2026-03-10T15:54:42.595542+0000","192.168.123.101:0/3709146351":"2026-03-10T15:54:42.595542+0000","192.168.123.101:0/1483779036":"2026-03-10T15:54:42.595542+0000","192.168.123.101:0/4136833719":"2026-03-10T15:48:23.034006+0000","192.168.123.1
01:0/3119601777":"2026-03-10T15:48:12.689768+0000","192.168.123.101:0/957411864":"2026-03-10T15:48:12.689768+0000","192.168.123.101:0/407132826":"2026-03-10T15:48:12.689768+0000","192.168.123.101:6800/2299265276":"2026-03-10T15:48:12.689768+0000","192.168.123.101:0/328174782":"2026-03-10T15:48:23.034006+0000","192.168.123.101:0/3195752727":"2026-03-10T15:48:23.034006+0000","192.168.123.101:6800/1421049061":"2026-03-10T15:54:42.595542+0000","192.168.123.101:6801/2530303036":"2026-03-10T15:48:23.034006+0000","192.168.123.101:6801/2299265276":"2026-03-10T15:48:12.689768+0000","192.168.123.101:6800/2530303036":"2026-03-10T15:48:23.034006+0000","192.168.123.101:0/2136341138":"2026-03-10T15:54:42.595542+0000","192.168.123.101:0/3771755931":"2026-03-10T15:54:42.595542+0000","192.168.123.101:0/311374531":"2026-03-10T15:54:42.595542+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T15:55:08.583 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 -- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f45dc077750 msgr2=0x7f45dc079c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:08.583 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f45dc077750 0x7f45dc079c10 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f45f0005e00 tx=0x7f45f000a600 comp rx=0 tx=0).stop 2026-03-09T15:55:08.583 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 -- 192.168.123.101:0/2145318752 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f46081058f0 msgr2=0x7f46081a2b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:08.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f46081058f0 0x7f46081a2b70 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f45fc009b60 tx=0x7f45fc002ce0 comp rx=0 tx=0).stop 2026-03-09T15:55:08.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 -- 192.168.123.101:0/2145318752 shutdown_connections 2026-03-09T15:55:08.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f45dc077750 0x7f45dc079c10 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:08.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f460810a070 0x7f460819c700 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:08.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 --2- 192.168.123.101:0/2145318752 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f46081058f0 0x7f46081a2b70 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:08.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 --2- 192.168.123.101:0/2145318752 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4608104f40 0x7f46081a2630 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:08.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 -- 192.168.123.101:0/2145318752 >> 192.168.123.101:0/2145318752 conn(0x7f46081009e0 msgr2=0x7f4608101ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:08.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 -- 192.168.123.101:0/2145318752 shutdown_connections 2026-03-09T15:55:08.584 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:08.583+0000 7f460d2f0640 1 -- 192.168.123.101:0/2145318752 wait complete. 2026-03-09T15:55:08.659 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-09T15:55:08.659 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd dump --format=json 2026-03-09T15:55:09.316 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:08 vm01 bash[28152]: audit 2026-03-09T15:55:08.578767+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.101:0/2145318752' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:09.316 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:08 vm01 bash[28152]: audit 2026-03-09T15:55:08.578767+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.101:0/2145318752' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:09.316 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:08 vm01 bash[20728]: audit 2026-03-09T15:55:08.578767+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.101:0/2145318752' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:09.316 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:08 vm01 bash[20728]: audit 2026-03-09T15:55:08.578767+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.101:0/2145318752' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:08 vm09 bash[22983]: audit 2026-03-09T15:55:08.578767+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.101:0/2145318752' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:09.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:08 vm09 bash[22983]: audit 2026-03-09T15:55:08.578767+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 
192.168.123.101:0/2145318752' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:10.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:09 vm09 bash[22983]: cluster 2026-03-09T15:55:08.641704+0000 mgr.y (mgr.14520) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:10.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:09 vm09 bash[22983]: cluster 2026-03-09T15:55:08.641704+0000 mgr.y (mgr.14520) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:10.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:09 vm01 bash[20728]: cluster 2026-03-09T15:55:08.641704+0000 mgr.y (mgr.14520) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:10.440 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:09 vm01 bash[20728]: cluster 2026-03-09T15:55:08.641704+0000 mgr.y (mgr.14520) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:10.440 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:09 vm01 bash[28152]: cluster 2026-03-09T15:55:08.641704+0000 mgr.y (mgr.14520) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:10.440 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:09 vm01 bash[28152]: cluster 2026-03-09T15:55:08.641704+0000 mgr.y (mgr.14520) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: cluster 2026-03-09T15:55:10.642305+0000 mgr.y (mgr.14520) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: cluster 2026-03-09T15:55:10.642305+0000 mgr.y (mgr.14520) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:10.755416+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:10.755416+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:10.762468+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:10.762468+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:11.440305+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.048 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:11.440305+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:11.449793+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:11.449793+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:11.452207+0000 mon.a (mon.0) 833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:11.452207+0000 mon.a (mon.0) 833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:11.453037+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:55:12.048 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:11.453037+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:11.460757+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:11 vm01 bash[20728]: audit 2026-03-09T15:55:11.460757+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: cluster 2026-03-09T15:55:10.642305+0000 mgr.y (mgr.14520) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: cluster 2026-03-09T15:55:10.642305+0000 mgr.y (mgr.14520) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:10.755416+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:10.755416+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:10.762468+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
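[editor's note] The harness line "all up!" above, followed by the repeated `sudo cephadm ... shell --fsid ... -- ceph osd dump --format=json` invocations, is the test polling the OSDMap until every OSD reports both up and in. The sketch below is an illustrative stand-alone reproduction of that check against the JSON shown in this log; it is not the teuthology/ceph_manager implementation, and the helper names and the subprocess wrapper are assumptions made only for this example.

    #!/usr/bin/env python3
    # Illustrative sketch (not the teuthology implementation): reproduce the
    # "all up!" style check by hand against `ceph osd dump --format=json`.
    # Helper names and the subprocess wrapper are assumptions for this example.
    import json
    import subprocess

    def all_osds_up_and_in(osd_dump: dict) -> bool:
        """Return True when every OSD in the dump reports up=1 and in=1."""
        return all(osd["up"] == 1 and osd["in"] == 1 for osd in osd_dump["osds"])

    def fetch_osd_dump(fsid: str, image: str) -> dict:
        # Same command the harness runs above, wrapped for illustration.
        out = subprocess.check_output([
            "sudo", "cephadm", "--image", image,
            "shell", "--fsid", fsid, "--",
            "ceph", "osd", "dump", "--format=json",
        ])
        return json.loads(out)

    if __name__ == "__main__":
        dump = fetch_osd_dump(
            fsid="397fadc0-1bcf-11f1-8481-edc1430c2c03",
            image="quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
        )
        # With the dump shown in this log (8 OSDs, all "exists","up", in=1),
        # this prints True.
        print(all_osds_up_and_in(dump))

[end editor's note]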
2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:10.762468+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:11.440305+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:11.440305+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:11.449793+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:11.449793+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:11.452207+0000 mon.a (mon.0) 833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:11.452207+0000 mon.a (mon.0) 833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:11.453037+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:11.453037+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:11.460757+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.049 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:11 vm01 bash[28152]: audit 2026-03-09T15:55:11.460757+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.049 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 systemd[1]: Stopping Ceph alertmanager.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 
2026-03-09T15:55:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: cluster 2026-03-09T15:55:10.642305+0000 mgr.y (mgr.14520) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: cluster 2026-03-09T15:55:10.642305+0000 mgr.y (mgr.14520) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:10.755416+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:10.755416+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:10.762468+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:10.762468+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:11.440305+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:11.440305+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:11.449793+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:11.449793+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:11.452207+0000 mon.a (mon.0) 833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:12.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:11.452207+0000 mon.a (mon.0) 833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:12.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:11.453037+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:55:12.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:11.453037+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-09T15:55:12.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:11.460757+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:11 vm09 bash[22983]: audit 2026-03-09T15:55:11.460757+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.316 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[55869]: ts=2026-03-09T15:55:12.097Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-09T15:55:12.316 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[56624]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-alertmanager-a 2026-03-09T15:55:12.316 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@alertmanager.a.service: Deactivated successfully. 2026-03-09T15:55:12.316 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 systemd[1]: Stopped Ceph alertmanager.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T15:55:12.316 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 systemd[1]: Started Ceph alertmanager.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T15:55:12.317 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[56700]: ts=2026-03-09T15:55:12.316Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T15:55:12.317 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[56700]: ts=2026-03-09T15:55:12.316Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T15:55:12.318 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[56700]: ts=2026-03-09T15:55:12.318Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.101 port=9094 2026-03-09T15:55:12.325 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:12.525 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.523+0000 7f0213ae2640 1 -- 192.168.123.101:0/335563427 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f020c101760 msgr2=0x7f020c101b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:12.525 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.523+0000 7f0213ae2640 1 --2- 192.168.123.101:0/335563427 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f020c101760 0x7f020c101b40 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7f01f40099b0 tx=0x7f01f402f190 comp rx=0 tx=0).stop 2026-03-09T15:55:12.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.523+0000 7f0213ae2640 1 -- 192.168.123.101:0/335563427 shutdown_connections 2026-03-09T15:55:12.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.523+0000 7f0213ae2640 1 --2- 192.168.123.101:0/335563427 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f020c10f490 0x7f020c111880 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:12.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.523+0000 7f0213ae2640 1 --2- 
192.168.123.101:0/335563427 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f020c102080 0x7f020c10ef50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:12.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.523+0000 7f0213ae2640 1 --2- 192.168.123.101:0/335563427 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f020c101760 0x7f020c101b40 unknown :-1 s=CLOSED pgs=157 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:12.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.523+0000 7f0213ae2640 1 -- 192.168.123.101:0/335563427 >> 192.168.123.101:0/335563427 conn(0x7f020c0fd630 msgr2=0x7f020c0ffa50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:12.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.523+0000 7f0213ae2640 1 -- 192.168.123.101:0/335563427 shutdown_connections 2026-03-09T15:55:12.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.523+0000 7f0213ae2640 1 -- 192.168.123.101:0/335563427 wait complete. 2026-03-09T15:55:12.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0213ae2640 1 Processor -- start 2026-03-09T15:55:12.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0213ae2640 1 -- start start 2026-03-09T15:55:12.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0213ae2640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f020c101760 0x7f020c1a2620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:12.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0213ae2640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f020c102080 0x7f020c1a2b60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:12.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0213ae2640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f020c10f490 0x7f020c19c7a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:12.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0213ae2640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f020c1143b0 con 0x7f020c10f490 2026-03-09T15:55:12.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0213ae2640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f020c114230 con 0x7f020c101760 2026-03-09T15:55:12.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0213ae2640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f020c114530 con 0x7f020c102080 2026-03-09T15:55:12.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0211857640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f020c101760 0x7f020c1a2620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:12.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0211857640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f020c101760 0x7f020c1a2620 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello 
peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:39684/0 (socket says 192.168.123.101:39684) 2026-03-09T15:55:12.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0211857640 1 -- 192.168.123.101:0/1807263361 learned_addr learned my addr 192.168.123.101:0/1807263361 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:12.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0212058640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f020c10f490 0x7f020c19c7a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0211056640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f020c102080 0x7f020c1a2b60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0211857640 1 -- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f020c102080 msgr2=0x7f020c1a2b60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0211857640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f020c102080 0x7f020c1a2b60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0211857640 1 -- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f020c10f490 msgr2=0x7f020c19c7a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0211857640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f020c10f490 0x7f020c19c7a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0211857640 1 -- 192.168.123.101:0/1807263361 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f020c19cfd0 con 0x7f020c101760 2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0211056640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f020c102080 0x7f020c1a2b60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0211857640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f020c101760 0x7f020c1a2620 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f01f4009980 tx=0x7f01f402fd60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0212058640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f020c10f490 0x7f020c19c7a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0202ffd640 1 -- 192.168.123.101:0/1807263361 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f01f403d070 con 0x7f020c101760 2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0213ae2640 1 -- 192.168.123.101:0/1807263361 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f020c19d260 con 0x7f020c101760 2026-03-09T15:55:12.529 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0213ae2640 1 -- 192.168.123.101:0/1807263361 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f020c1a9480 con 0x7f020c101760 2026-03-09T15:55:12.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.527+0000 7f0202ffd640 1 -- 192.168.123.101:0/1807263361 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f01f4004440 con 0x7f020c101760 2026-03-09T15:55:12.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.531+0000 7f0202ffd640 1 -- 192.168.123.101:0/1807263361 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f01f4038d30 con 0x7f020c101760 2026-03-09T15:55:12.532 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.531+0000 7f0202ffd640 1 -- 192.168.123.101:0/1807263361 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f01f4038470 con 0x7f020c101760 2026-03-09T15:55:12.532 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.531+0000 7f0202ffd640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f01e8077670 0x7f01e8079b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:12.532 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.531+0000 7f0202ffd640 1 -- 192.168.123.101:0/1807263361 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f01f40beaa0 con 0x7f020c101760 2026-03-09T15:55:12.532 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.531+0000 7f0211056640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f01e8077670 0x7f01e8079b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:12.533 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.531+0000 7f0213ae2640 1 -- 192.168.123.101:0/1807263361 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- 
mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f01d4005180 con 0x7f020c101760 2026-03-09T15:55:12.533 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.531+0000 7f0211056640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f01e8077670 0x7f01e8079b30 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f020c111520 tx=0x7f01fc009340 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:12.537 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.535+0000 7f0202ffd640 1 -- 192.168.123.101:0/1807263361 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f01f408b3c0 con 0x7f020c101760 2026-03-09T15:55:12.647 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.643+0000 7f0213ae2640 1 -- 192.168.123.101:0/1807263361 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f01d4005470 con 0x7f020c101760 2026-03-09T15:55:12.648 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.647+0000 7f0202ffd640 1 -- 192.168.123.101:0/1807263361 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v64) ==== 74+0+23444 (secure 0 0 0) 0x7f01f4090270 con 0x7f020c101760 2026-03-09T15:55:12.648 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:55:12.649 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":64,"fsid":"397fadc0-1bcf-11f1-8481-edc1430c2c03","created":"2026-03-09T15:48:02.072991+0000","modified":"2026-03-09T15:54:42.595660+0000","last_up_change":"2026-03-09T15:53:49.113988+0000","last_in_change":"2026-03-09T15:53:32.372004+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T15:51:01.105730+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_pe
riod":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-09T15:54:09.432657+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"55","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-09T15:54:11.429030+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"57","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-09T15:54:12.268095+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"62","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":62,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T15:54:13.373110+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"59","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T15:54:15.451800+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"61","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"85259aee-a52d-45ab-8429-e3d0212392b7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6803","nonce":115286186}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6805","nonce":115286186}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6809","nonce":115286186}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":115286186},{"type":"v1","addr":"192.168.123.101:6807","nonce":115286186}]},"public_addr":"192.168.123.101:6803/115286186","cluster_addr":"192.168.123.101:6805/115286186","heartbeat_back_addr":"192.168.123.101:6809/115286186","heartbeat_front_addr":"192.168.123.101:6807/115286186","state":["exists","up"]},{"osd":1,"uuid":"e7c85482-6eb5-4953-8a19-029686ffe773","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6811","nonce":4163266826}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6813","nonce":4163266826}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6816","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6817","nonce":4163266826}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":4163266826},{"type":"v1","addr":"192.168.123.101:6815","nonce":4163266826}]},"public_addr":"192.168.123.101:6811/4163266826","cluster_addr":"192.168.123.101:6813/4163266826","heartbeat_back_addr":"192.168.123.101:6817/4163266826","heartbeat_front_addr":"192.168.123.101:6815/4163266826","state":["exists","up"]},{"osd":2,"uuid":"d1982b6d-a77c-466e-996b-c1ff61952b4b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6818","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6819","nonce":1701239335}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6820","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6821","nonce":1701239335}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6824","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6825","nonce":1701239335}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6822","nonce":1701239335},{"type":"v1","addr":"192.168.123.101:6823","nonce":1701239335}]},"public_addr":"192.168.123.101:6819/1701239335","cluster_addr":"192.168.123.101:6821/1701239335","heartbeat_back_addr":"192.168.123.101:6825/1701239335","heartbeat_front_addr":"192.168.123.101:6823/1701239335","state":["exists","up"]},{"osd":3,"uuid":"59646c31-d8a8-4171-8402-970963810d37","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_fro
m":26,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6826","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6827","nonce":994063283}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6828","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6829","nonce":994063283}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6832","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6833","nonce":994063283}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6830","nonce":994063283},{"type":"v1","addr":"192.168.123.101:6831","nonce":994063283}]},"public_addr":"192.168.123.101:6827/994063283","cluster_addr":"192.168.123.101:6829/994063283","heartbeat_back_addr":"192.168.123.101:6833/994063283","heartbeat_front_addr":"192.168.123.101:6831/994063283","state":["exists","up"]},{"osd":4,"uuid":"642a6d0d-91ea-4433-b755-50a0d7442acf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6801","nonce":2242917856}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6803","nonce":2242917856}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6807","nonce":2242917856}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":2242917856},{"type":"v1","addr":"192.168.123.109:6805","nonce":2242917856}]},"public_addr":"192.168.123.109:6801/2242917856","cluster_addr":"192.168.123.109:6803/2242917856","heartbeat_back_addr":"192.168.123.109:6807/2242917856","heartbeat_front_addr":"192.168.123.109:6805/2242917856","state":["exists","up"]},{"osd":5,"uuid":"b983b1a8-523b-4ebf-b245-ff2849d684be","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":37,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6809","nonce":2799407982}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6811","nonce":2799407982}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6815","nonce":2799407982}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":2799407982},{"type":"v1","addr":"192.168.123.109:6813","nonce":2799407982}]},"public_addr":"192.168.123.109:6809/2799407982","cluster_addr":"192.168.123.109:6811/2799407982","heartbeat_back_addr":"192.168.123.109:6815/2799407982","heartbeat_front_addr":"192.168.123.109:6813/2799407982","state":["exists","up"]},{"osd":6,"uuid":"a674ed00-04ea-4bd3-ab96-9d977052e290","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":57,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6817","nonce":920695066}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6819","nonce":920695066}]},"heartbeat_back_addrs":{"
addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6823","nonce":920695066}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":920695066},{"type":"v1","addr":"192.168.123.109:6821","nonce":920695066}]},"public_addr":"192.168.123.109:6817/920695066","cluster_addr":"192.168.123.109:6819/920695066","heartbeat_back_addr":"192.168.123.109:6823/920695066","heartbeat_front_addr":"192.168.123.109:6821/920695066","state":["exists","up"]},{"osd":7,"uuid":"d7f62b71-8ddc-49ac-b9d6-bebdba1cf51b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":50,"up_thru":59,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6825","nonce":1747724061}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6827","nonce":1747724061}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6831","nonce":1747724061}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":1747724061},{"type":"v1","addr":"192.168.123.109:6829","nonce":1747724061}]},"public_addr":"192.168.123.109:6825/1747724061","cluster_addr":"192.168.123.109:6827/1747724061","heartbeat_back_addr":"192.168.123.109:6831/1747724061","heartbeat_front_addr":"192.168.123.109:6829/1747724061","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:49:48.509437+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:50:22.208224+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:50:56.403701+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:51:31.273145+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:52:05.018838+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:52:39.069791+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:53:12.641048+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T15:53:47.432546+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:6801/1421049061":"2026-03-10T15:54:42.595542+0000","192.168.123.101:0/3709146351":"2026-03-10T15:54:42.595542+0000","192.168.123.101:0/1483779036":"2026-03-10T15:54:42.595542+0000","192.168.123.101:0/4136833719":"2026-03-10T15:48:23.034006+0000","192.168.123.1
01:0/3119601777":"2026-03-10T15:48:12.689768+0000","192.168.123.101:0/957411864":"2026-03-10T15:48:12.689768+0000","192.168.123.101:0/407132826":"2026-03-10T15:48:12.689768+0000","192.168.123.101:6800/2299265276":"2026-03-10T15:48:12.689768+0000","192.168.123.101:0/328174782":"2026-03-10T15:48:23.034006+0000","192.168.123.101:0/3195752727":"2026-03-10T15:48:23.034006+0000","192.168.123.101:6800/1421049061":"2026-03-10T15:54:42.595542+0000","192.168.123.101:6801/2530303036":"2026-03-10T15:48:23.034006+0000","192.168.123.101:6801/2299265276":"2026-03-10T15:48:12.689768+0000","192.168.123.101:6800/2530303036":"2026-03-10T15:48:23.034006+0000","192.168.123.101:0/2136341138":"2026-03-10T15:54:42.595542+0000","192.168.123.101:0/3771755931":"2026-03-10T15:54:42.595542+0000","192.168.123.101:0/311374531":"2026-03-10T15:54:42.595542+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T15:55:12.653 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 -- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f01e8077670 msgr2=0x7f01e8079b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:12.653 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f01e8077670 0x7f01e8079b30 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f020c111520 tx=0x7f01fc009340 comp rx=0 tx=0).stop 2026-03-09T15:55:12.653 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 -- 192.168.123.101:0/1807263361 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f020c101760 msgr2=0x7f020c1a2620 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:12.653 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f020c101760 0x7f020c1a2620 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f01f4009980 tx=0x7f01f402fd60 comp rx=0 tx=0).stop 2026-03-09T15:55:12.653 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 -- 192.168.123.101:0/1807263361 shutdown_connections 2026-03-09T15:55:12.653 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f01e8077670 0x7f01e8079b30 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:12.653 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f020c10f490 0x7f020c19c7a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:12.653 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 --2- 192.168.123.101:0/1807263361 >> 
[v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f020c102080 0x7f020c1a2b60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:12.653 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 --2- 192.168.123.101:0/1807263361 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f020c101760 0x7f020c1a2620 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:12.653 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 -- 192.168.123.101:0/1807263361 >> 192.168.123.101:0/1807263361 conn(0x7f020c0fd630 msgr2=0x7f020c10fbf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:12.653 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 -- 192.168.123.101:0/1807263361 shutdown_connections 2026-03-09T15:55:12.654 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:12.651+0000 7f0213ae2640 1 -- 192.168.123.101:0/1807263361 wait complete. 2026-03-09T15:55:12.666 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[56700]: ts=2026-03-09T15:55:12.319Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-09T15:55:12.667 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[56700]: ts=2026-03-09T15:55:12.348Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T15:55:12.667 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[56700]: ts=2026-03-09T15:55:12.348Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T15:55:12.667 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[56700]: ts=2026-03-09T15:55:12.351Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T15:55:12.667 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[56700]: ts=2026-03-09T15:55:12.351Z caller=tls_config.go:235 level=info msg="TLS is disabled." 
http2=false address=[::]:9093 2026-03-09T15:55:12.755 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph tell osd.0 flush_pg_stats 2026-03-09T15:55:12.755 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph tell osd.1 flush_pg_stats 2026-03-09T15:55:12.755 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph tell osd.2 flush_pg_stats 2026-03-09T15:55:12.755 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph tell osd.3 flush_pg_stats 2026-03-09T15:55:12.755 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph tell osd.4 flush_pg_stats 2026-03-09T15:55:12.756 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph tell osd.5 flush_pg_stats 2026-03-09T15:55:12.756 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph tell osd.6 flush_pg_stats 2026-03-09T15:55:12.756 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph tell osd.7 flush_pg_stats 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: cephadm 2026-03-09T15:55:11.479928+0000 mgr.y (mgr.14520) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: cephadm 2026-03-09T15:55:11.479928+0000 mgr.y (mgr.14520) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
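The `osd dump` JSON captured above and the per-OSD `flush_pg_stats` calls just issued are the raw material a post-deployment health check would consume. Below is a minimal illustrative sketch, not part of teuthology or this test run, that shells out through `cephadm` the same way the commands above do (the fsid and image are the values from this run) and flags any OSD that is not both up and in:

    #!/usr/bin/env python3
    # Illustrative sketch only (not teuthology code): fetch `ceph osd dump
    # --format json` the same way the commands above do, then flag any OSD
    # that is not both up and in. FSID and IMAGE are the values from this run.
    import json
    import subprocess

    FSID = "397fadc0-1bcf-11f1-8481-edc1430c2c03"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    def osd_dump():
        out = subprocess.check_output(
            ["sudo", "cephadm", "--image", IMAGE, "shell", "--fsid", FSID,
             "--", "ceph", "osd", "dump", "--format", "json"])
        return json.loads(out)

    if __name__ == "__main__":
        dump = osd_dump()
        bad = [o["osd"] for o in dump["osds"] if not (o["up"] and o["in"])]
        status = "all up/in" if not bad else f"down or out: {bad}"
        print(f"epoch {dump['epoch']}: {len(dump['osds'])} OSDs, {status}")

In the dump captured above all eight OSDs report "up":1,"in":1, so a check like this would pass for epoch 64.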
2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: cephadm 2026-03-09T15:55:11.484855+0000 mgr.y (mgr.14520) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm01 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: cephadm 2026-03-09T15:55:11.484855+0000 mgr.y (mgr.14520) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm01 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: audit 2026-03-09T15:55:12.223122+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: audit 2026-03-09T15:55:12.223122+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: audit 2026-03-09T15:55:12.235285+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: audit 2026-03-09T15:55:12.235285+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: audit 2026-03-09T15:55:12.647522+0000 mon.b (mon.1) 30 : audit [DBG] from='client.? 192.168.123.101:0/1807263361' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: audit 2026-03-09T15:55:12.647522+0000 mon.b (mon.1) 30 : audit [DBG] from='client.? 192.168.123.101:0/1807263361' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: audit 2026-03-09T15:55:12.733652+0000 mon.a (mon.0) 838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:12 vm01 bash[20728]: audit 2026-03-09T15:55:12.733652+0000 mon.a (mon.0) 838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:55:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: cephadm 2026-03-09T15:55:11.479928+0000 mgr.y (mgr.14520) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: cephadm 2026-03-09T15:55:11.479928+0000 mgr.y (mgr.14520) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T15:55:12.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: cephadm 2026-03-09T15:55:11.484855+0000 mgr.y (mgr.14520) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm01 2026-03-09T15:55:12.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: cephadm 2026-03-09T15:55:11.484855+0000 mgr.y (mgr.14520) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm01 2026-03-09T15:55:12.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: audit 2026-03-09T15:55:12.223122+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: audit 2026-03-09T15:55:12.223122+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: audit 2026-03-09T15:55:12.235285+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: audit 2026-03-09T15:55:12.235285+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:12.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: audit 2026-03-09T15:55:12.647522+0000 mon.b (mon.1) 30 : audit [DBG] from='client.? 192.168.123.101:0/1807263361' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:12.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: audit 2026-03-09T15:55:12.647522+0000 mon.b (mon.1) 30 : audit [DBG] from='client.? 192.168.123.101:0/1807263361' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:12.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: audit 2026-03-09T15:55:12.733652+0000 mon.a (mon.0) 838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:12.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:12 vm01 bash[28152]: audit 2026-03-09T15:55:12.733652+0000 mon.a (mon.0) 838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: cephadm 2026-03-09T15:55:11.479928+0000 mgr.y (mgr.14520) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: cephadm 2026-03-09T15:55:11.479928+0000 mgr.y (mgr.14520) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: cephadm 2026-03-09T15:55:11.484855+0000 mgr.y (mgr.14520) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm01 2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: cephadm 2026-03-09T15:55:11.484855+0000 mgr.y (mgr.14520) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm01 2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: audit 2026-03-09T15:55:12.223122+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: audit 2026-03-09T15:55:12.223122+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: audit 2026-03-09T15:55:12.235285+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: audit 2026-03-09T15:55:12.235285+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: audit 2026-03-09T15:55:12.647522+0000 mon.b (mon.1) 30 : audit [DBG] from='client.? 192.168.123.101:0/1807263361' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: audit 2026-03-09T15:55:12.647522+0000 mon.b (mon.1) 30 : audit [DBG] from='client.? 192.168.123.101:0/1807263361' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: audit 2026-03-09T15:55:12.733652+0000 mon.a (mon.0) 838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:13.074 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:12 vm09 bash[22983]: audit 2026-03-09T15:55:12.733652+0000 mon.a (mon.0) 838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:13.338 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 systemd[1]: Stopping Ceph prometheus.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.381Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.381Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.381Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.381Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 
2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.381Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.381Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.381Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.381Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.381Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.383Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.383Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T15:55:13.633 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[49361]: ts=2026-03-09T15:55:13.383Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-09T15:55:13.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:13 vm09 bash[22983]: cephadm 2026-03-09T15:55:12.238818+0000 mgr.y (mgr.14520) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T15:55:13.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:13 vm09 bash[22983]: cephadm 2026-03-09T15:55:12.238818+0000 mgr.y (mgr.14520) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T15:55:13.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:13 vm09 bash[22983]: cephadm 2026-03-09T15:55:12.449606+0000 mgr.y (mgr.14520) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm09 2026-03-09T15:55:13.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:13 vm09 bash[22983]: cephadm 2026-03-09T15:55:12.449606+0000 mgr.y (mgr.14520) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm09 2026-03-09T15:55:13.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:13 vm09 bash[22983]: cluster 2026-03-09T15:55:12.642792+0000 mgr.y (mgr.14520) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:13.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:13 vm09 bash[22983]: cluster 2026-03-09T15:55:12.642792+0000 mgr.y (mgr.14520) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:13.917 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51184]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-prometheus-a 2026-03-09T15:55:13.917 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@prometheus.a.service: Deactivated successfully. 
2026-03-09T15:55:13.917 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 systemd[1]: Stopped Ceph prometheus.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T15:55:13.917 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 systemd[1]: Started Ceph prometheus.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:13 vm01 bash[21002]: [09/Mar/2026:15:55:13] ENGINE Bus STOPPING 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:13 vm01 bash[21002]: [09/Mar/2026:15:55:13] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:13 vm01 bash[21002]: [09/Mar/2026:15:55:13] ENGINE Bus STOPPED 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:13 vm01 bash[21002]: [09/Mar/2026:15:55:13] ENGINE Bus STARTING 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:13 vm01 bash[21002]: [09/Mar/2026:15:55:13] ENGINE Serving on http://:::9283 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:13 vm01 bash[21002]: [09/Mar/2026:15:55:13] ENGINE Bus STARTED 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:13 vm01 bash[21002]: [09/Mar/2026:15:55:13] ENGINE Bus STOPPING 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:14 vm01 bash[21002]: [09/Mar/2026:15:55:14] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:14 vm01 bash[21002]: [09/Mar/2026:15:55:14] ENGINE Bus STOPPED 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:14 vm01 bash[21002]: [09/Mar/2026:15:55:14] ENGINE Bus STARTING 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:14 vm01 bash[21002]: [09/Mar/2026:15:55:14] ENGINE Serving on http://:::9283 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:14 vm01 bash[21002]: [09/Mar/2026:15:55:14] ENGINE Bus STARTED 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:14 vm01 bash[21002]: [09/Mar/2026:15:55:14] ENGINE Bus STOPPING 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:13 vm01 bash[20728]: cephadm 2026-03-09T15:55:12.238818+0000 mgr.y (mgr.14520) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:13 vm01 bash[20728]: cephadm 2026-03-09T15:55:12.238818+0000 mgr.y (mgr.14520) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
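The sequence above is cephadm reconfiguring prometheus.a (dependencies changed), systemd stopping the unit, Prometheus exiting cleanly on SIGTERM, and the unit starting again; the entries that follow show the new process replaying its WAL and listening on port 9095 with TLS disabled. A small illustrative sketch of waiting for the restarted daemon via Prometheus's standard /-/ready endpoint (host, port, and plain HTTP are taken from the log; the timeout is an arbitrary choice for the example):

    # Illustrative sketch only: poll the restarted Prometheus until its
    # /-/ready endpoint answers 200. Host/port/plain-HTTP match the log
    # above (vm09, :9095, "TLS is disabled."); the timeout is arbitrary.
    import time
    import urllib.error
    import urllib.request

    def wait_for_prometheus(url="http://vm09.local:9095/-/ready", timeout=60):
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    if resp.status == 200:
                        return True
            except (urllib.error.URLError, OSError):
                pass
            time.sleep(2)
        return False

    if __name__ == "__main__":
        print("ready" if wait_for_prometheus() else "not ready within timeout")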
2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:13 vm01 bash[20728]: cephadm 2026-03-09T15:55:12.449606+0000 mgr.y (mgr.14520) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm09 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:13 vm01 bash[20728]: cephadm 2026-03-09T15:55:12.449606+0000 mgr.y (mgr.14520) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm09 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:13 vm01 bash[20728]: cluster 2026-03-09T15:55:12.642792+0000 mgr.y (mgr.14520) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:13 vm01 bash[20728]: cluster 2026-03-09T15:55:12.642792+0000 mgr.y (mgr.14520) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:13 vm01 bash[28152]: cephadm 2026-03-09T15:55:12.238818+0000 mgr.y (mgr.14520) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:13 vm01 bash[28152]: cephadm 2026-03-09T15:55:12.238818+0000 mgr.y (mgr.14520) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:13 vm01 bash[28152]: cephadm 2026-03-09T15:55:12.449606+0000 mgr.y (mgr.14520) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm09 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:13 vm01 bash[28152]: cephadm 2026-03-09T15:55:12.449606+0000 mgr.y (mgr.14520) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm09 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:13 vm01 bash[28152]: cluster 2026-03-09T15:55:12.642792+0000 mgr.y (mgr.14520) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:14.105 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:13 vm01 bash[28152]: cluster 2026-03-09T15:55:12.642792+0000 mgr.y (mgr.14520) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:14.383 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.916Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T15:55:14.383 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.917Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T15:55:14.383 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.917Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm09 (none))" 2026-03-09T15:55:14.383 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.918Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T15:55:14.383 
INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.918Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T15:55:14.383 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.921Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T15:55:14.383 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.922Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-09T15:55:14.383 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.927Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T15:55:14.383 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.927Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.725µs 2026-03-09T15:55:14.383 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.927Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T15:55:14.384 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.927Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1 2026-03-09T15:55:14.384 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.939Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1 2026-03-09T15:55:14.384 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.939Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=26.089µs wal_replay_duration=11.876243ms wbl_replay_duration=131ns total_replay_duration=11.920274ms 2026-03-09T15:55:14.384 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.963Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T15:55:14.384 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.963Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T15:55:14.384 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.963Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T15:55:14.384 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.963Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T15:55:14.384 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.963Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-09T15:55:14.384 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.983Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=19.963611ms db_storage=1.423µs remote_storage=1.432µs web_handler=310ns query_engine=1.452µs scrape=1.485791ms scrape_sd=126.517µs notify=8.606µs notify_sd=10.048µs rules=18.018019ms tracing=21.6µs 2026-03-09T15:55:14.384 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.983Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T15:55:14.384 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 15:55:13 vm09 bash[51261]: ts=2026-03-09T15:55:13.984Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-09T15:55:14.429 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:14 vm01 bash[21002]: [09/Mar/2026:15:55:14] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T15:55:14.429 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:14 vm01 bash[21002]: [09/Mar/2026:15:55:14] ENGINE Bus STOPPED 2026-03-09T15:55:14.429 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:14 vm01 bash[21002]: [09/Mar/2026:15:55:14] ENGINE Bus STARTING 2026-03-09T15:55:14.429 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:14 vm01 bash[21002]: [09/Mar/2026:15:55:14] ENGINE Serving on http://:::9283 2026-03-09T15:55:14.429 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:14 vm01 bash[21002]: [09/Mar/2026:15:55:14] ENGINE Bus STARTED 2026-03-09T15:55:14.429 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[56700]: ts=2026-03-09T15:55:14.319Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000686455s 2026-03-09T15:55:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.788107+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.788107+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.800366+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.800366+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.805442+0000 mon.a (mon.0) 841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.805442+0000 mon.a (mon.0) 841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 
2026-03-09T15:55:13.805924+0000 mgr.y (mgr.14520) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.805924+0000 mgr.y (mgr.14520) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.807273+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.807273+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.807675+0000 mgr.y (mgr.14520) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.807675+0000 mgr.y (mgr.14520) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.832659+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.832659+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.846345+0000 mon.a (mon.0) 844 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.846345+0000 mon.a (mon.0) 844 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.846780+0000 mgr.y (mgr.14520) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.846780+0000 mgr.y (mgr.14520) 49 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.847597+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.847597+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.847847+0000 mgr.y (mgr.14520) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.847847+0000 mgr.y (mgr.14520) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.852654+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.852654+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.866168+0000 mon.a (mon.0) 847 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.866168+0000 mon.a (mon.0) 847 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.866486+0000 mgr.y (mgr.14520) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.866486+0000 mgr.y (mgr.14520) 51 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.868850+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.868850+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.869286+0000 mgr.y (mgr.14520) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.869286+0000 mgr.y (mgr.14520) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.875011+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.875011+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.918836+0000 mon.a (mon.0) 850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:55:15.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:14 vm09 bash[22983]: audit 2026-03-09T15:55:13.918836+0000 mon.a (mon.0) 850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.788107+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.788107+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.800366+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.800366+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.805442+0000 mon.a (mon.0) 841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 
2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.805442+0000 mon.a (mon.0) 841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.805924+0000 mgr.y (mgr.14520) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.805924+0000 mgr.y (mgr.14520) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.807273+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.807273+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.807675+0000 mgr.y (mgr.14520) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.807675+0000 mgr.y (mgr.14520) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.832659+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.832659+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.846345+0000 mon.a (mon.0) 844 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.846345+0000 mon.a (mon.0) 844 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.846780+0000 mgr.y (mgr.14520) 49 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.846780+0000 mgr.y (mgr.14520) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.847597+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.847597+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.847847+0000 mgr.y (mgr.14520) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.847847+0000 mgr.y (mgr.14520) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.852654+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.852654+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.866168+0000 mon.a (mon.0) 847 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.866168+0000 mon.a (mon.0) 847 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.866486+0000 mgr.y (mgr.14520) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.866486+0000 mgr.y (mgr.14520) 51 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.868850+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.868850+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.869286+0000 mgr.y (mgr.14520) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.869286+0000 mgr.y (mgr.14520) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.875011+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.875011+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.918836+0000 mon.a (mon.0) 850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:14 vm01 bash[28152]: audit 2026-03-09T15:55:13.918836+0000 mon.a (mon.0) 850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.788107+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.788107+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.800366+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.800366+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.805442+0000 mon.a (mon.0) 841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 
2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.805442+0000 mon.a (mon.0) 841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.805924+0000 mgr.y (mgr.14520) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.805924+0000 mgr.y (mgr.14520) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.807273+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.807273+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.807675+0000 mgr.y (mgr.14520) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.807675+0000 mgr.y (mgr.14520) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.832659+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.832659+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.846345+0000 mon.a (mon.0) 844 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.846345+0000 mon.a (mon.0) 844 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.846780+0000 mgr.y (mgr.14520) 49 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.846780+0000 mgr.y (mgr.14520) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.847597+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.847597+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.847847+0000 mgr.y (mgr.14520) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.847847+0000 mgr.y (mgr.14520) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.852654+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.852654+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.866168+0000 mon.a (mon.0) 847 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.866168+0000 mon.a (mon.0) 847 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.866486+0000 mgr.y (mgr.14520) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.866486+0000 mgr.y (mgr.14520) 51 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.868850+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.868850+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.869286+0000 mgr.y (mgr.14520) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.869286+0000 mgr.y (mgr.14520) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.875011+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.875011+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.918836+0000 mon.a (mon.0) 850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:55:15.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:14 vm01 bash[20728]: audit 2026-03-09T15:55:13.918836+0000 mon.a (mon.0) 850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:55:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:15 vm09 bash[22983]: cluster 2026-03-09T15:55:14.643398+0000 mgr.y (mgr.14520) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:15 vm09 bash[22983]: cluster 2026-03-09T15:55:14.643398+0000 mgr.y (mgr.14520) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:16.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:15 vm01 bash[28152]: cluster 2026-03-09T15:55:14.643398+0000 mgr.y (mgr.14520) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:16.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:15 vm01 bash[28152]: cluster 2026-03-09T15:55:14.643398+0000 mgr.y (mgr.14520) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:16.179 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:15 vm01 bash[20728]: cluster 2026-03-09T15:55:14.643398+0000 mgr.y (mgr.14520) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:16.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:15 vm01 bash[20728]: cluster 2026-03-09T15:55:14.643398+0000 mgr.y (mgr.14520) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:16.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:55:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:55:17.712 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:17.714 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:17.714 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:17.715 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:17.719 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:17.719 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:17.720 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:17.721 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:18.025 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:17 vm01 bash[28152]: audit 2026-03-09T15:55:16.166048+0000 mgr.y (mgr.14520) 54 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:18.025 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:17 vm01 bash[28152]: audit 2026-03-09T15:55:16.166048+0000 mgr.y (mgr.14520) 54 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:18.025 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:17 vm01 bash[28152]: cluster 2026-03-09T15:55:16.643720+0000 mgr.y (mgr.14520) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:18.025 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:17 vm01 bash[28152]: cluster 2026-03-09T15:55:16.643720+0000 mgr.y (mgr.14520) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:18.025 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:17 vm01 bash[20728]: audit 2026-03-09T15:55:16.166048+0000 mgr.y (mgr.14520) 54 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:18.025 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:17 vm01 bash[20728]: audit 2026-03-09T15:55:16.166048+0000 mgr.y (mgr.14520) 54 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-09T15:55:18.025 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:17 vm01 bash[20728]: cluster 2026-03-09T15:55:16.643720+0000 mgr.y (mgr.14520) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:18.025 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:17 vm01 bash[20728]: cluster 2026-03-09T15:55:16.643720+0000 mgr.y (mgr.14520) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 -- 192.168.123.101:0/3993720221 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7d9410a850 msgr2=0x7f7d9410acd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 --2- 192.168.123.101:0/3993720221 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7d9410a850 0x7f7d9410acd0 secure :-1 s=READY pgs=158 cs=0 l=1 rev1=1 crypto rx=0x7f7d8c01c4c0 tx=0x7f7d8c040850 comp rx=0 tx=0).stop 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 -- 192.168.123.101:0/3993720221 shutdown_connections 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 --2- 192.168.123.101:0/3993720221 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f7d9411c780 0x7f7d9411eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 --2- 192.168.123.101:0/3993720221 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7d9410a850 0x7f7d9410acd0 unknown :-1 s=CLOSED pgs=158 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 --2- 192.168.123.101:0/3993720221 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7d9410a470 0x7f7d941114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 -- 192.168.123.101:0/3993720221 >> 192.168.123.101:0/3993720221 conn(0x7f7d9406db00 msgr2=0x7f7d9406df10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 -- 192.168.123.101:0/3993720221 shutdown_connections 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 -- 192.168.123.101:0/3993720221 wait complete. 
2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 Processor -- start 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 -- start start 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f7d9410a470 0x7f7d9411c1a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7d9410a850 0x7f7d94117250 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7d9411c780 0x7f7d94117940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f7d941139e0 con 0x7f7d9410a850 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f7d94113860 con 0x7f7d9411c780 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d9aacc640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f7d94113b60 con 0x7f7d9410a470 2026-03-09T15:55:18.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d98841640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f7d9410a470 0x7f7d9411c1a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d93fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7d9410a850 0x7f7d94117250 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d93fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7d9410a850 0x7f7d94117250 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:58134/0 (socket says 192.168.123.101:58134) 2026-03-09T15:55:18.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d93fff640 1 -- 192.168.123.101:0/2220209825 learned_addr learned my addr 192.168.123.101:0/2220209825 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:18.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d93fff640 1 -- 192.168.123.101:0/2220209825 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f7d9410a470 msgr2=0x7f7d9411c1a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 
7f7d93fff640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f7d9410a470 0x7f7d9411c1a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d93fff640 1 -- 192.168.123.101:0/2220209825 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7d9411c780 msgr2=0x7f7d94117940 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:55:18.053 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.047+0000 7f7d99042640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7d9411c780 0x7f7d94117940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.054 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.051+0000 7f7d93fff640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7d9411c780 0x7f7d94117940 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.054 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.051+0000 7f7d93fff640 1 -- 192.168.123.101:0/2220209825 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7d941ad300 con 0x7f7d9410a850 2026-03-09T15:55:18.054 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.051+0000 7f7d99042640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7d9411c780 0x7f7d94117940 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:55:18.060 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.051+0000 7f7d93fff640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7d9410a850 0x7f7d94117250 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7f7d8c002fa0 tx=0x7f7d8c005be0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.060 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.051+0000 7f7d91ffb640 1 -- 192.168.123.101:0/2220209825 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7d8c057070 con 0x7f7d9410a850 2026-03-09T15:55:18.060 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.051+0000 7f7d9aacc640 1 -- 192.168.123.101:0/2220209825 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f7d941ad4d0 con 0x7f7d9410a850 2026-03-09T15:55:18.060 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.051+0000 7f7d9aacc640 1 -- 192.168.123.101:0/2220209825 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f7d941ada10 con 0x7f7d9410a850 2026-03-09T15:55:18.060 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.051+0000 7f7d91ffb640 1 -- 192.168.123.101:0/2220209825 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f7d8c018c60 con 0x7f7d9410a850 2026-03-09T15:55:18.060 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.051+0000 7f7d91ffb640 1 -- 192.168.123.101:0/2220209825 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7d8c0487d0 con 0x7f7d9410a850 2026-03-09T15:55:18.061 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.055+0000 7f7d9aacc640 1 -- 192.168.123.101:0/2220209825 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f7d94073050 con 0x7f7d9410a850 2026-03-09T15:55:18.061 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.055+0000 7f7d91ffb640 1 -- 192.168.123.101:0/2220209825 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f7d8c01a0d0 con 0x7f7d9410a850 2026-03-09T15:55:18.061 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.055+0000 7f7d91ffb640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f7d700777d0 0x7f7d70079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.061 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.055+0000 7f7d91ffb640 1 -- 192.168.123.101:0/2220209825 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f7d8c0cff60 con 0x7f7d9410a850 2026-03-09T15:55:18.061 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.055+0000 7f7d91ffb640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] conn(0x7f7d700816c0 0x7f7d70083b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.061 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.055+0000 7f7d91ffb640 1 -- 192.168.123.101:0/2220209825 --> [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f7d8c0059c0 con 0x7f7d700816c0 
2026-03-09T15:55:18.061 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.055+0000 7f7d91ffb640 1 -- 192.168.123.101:0/2220209825 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_get_version_reply(handle=1 version=64) ==== 24+0+0 (secure 0 0 0) 0x7f7d8c0d02d0 con 0x7f7d9410a850 2026-03-09T15:55:18.061 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.055+0000 7f7d98841640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f7d700777d0 0x7f7d70079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.061 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.059+0000 7f7d98841640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f7d700777d0 0x7f7d70079c90 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f7d84007c40 tx=0x7f7d840073d0 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.066 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.063+0000 7f7d99042640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] conn(0x7f7d700816c0 0x7f7d70083b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.075 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.075+0000 7f7d99042640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] conn(0x7f7d700816c0 0x7f7d70083b00 crc :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.7 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.081 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.075+0000 7f7d91ffb640 1 -- 192.168.123.101:0/2220209825 <== osd.7 v2:192.168.123.109:6824/1747724061 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f7d8c0059c0 con 0x7f7d700816c0 2026-03-09T15:55:18.115 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f7d9aacc640 1 -- 192.168.123.101:0/2220209825 --> [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f7d94073260 con 0x7f7d700816c0 2026-03-09T15:55:18.117 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.091+0000 7f59c556a640 1 -- 192.168.123.101:0/338904202 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f59c0075470 msgr2=0x7f59c007be20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.117 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.091+0000 7f59c556a640 1 --2- 192.168.123.101:0/338904202 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f59c0075470 0x7f59c007be20 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f59b800b0a0 tx=0x7f59b8030580 comp rx=0 tx=0).stop 2026-03-09T15:55:18.117 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.099+0000 7f59c556a640 1 -- 192.168.123.101:0/338904202 shutdown_connections 2026-03-09T15:55:18.117 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.099+0000 7f59c556a640 1 --2- 192.168.123.101:0/338904202 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f59c0075470 0x7f59c007be20 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.117 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.099+0000 7f59c556a640 1 --2- 192.168.123.101:0/338904202 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f59c010b080 0x7f59c0074d30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.117 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.099+0000 7f59c556a640 1 --2- 192.168.123.101:0/338904202 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f59c010a6d0 0x7f59c010aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.117 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.099+0000 7f59c556a640 1 -- 192.168.123.101:0/338904202 >> 192.168.123.101:0/338904202 conn(0x7f59c006d9f0 msgr2=0x7f59c006de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.117 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.099+0000 7f59c556a640 1 -- 192.168.123.101:0/338904202 shutdown_connections 2026-03-09T15:55:18.117 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59c556a640 1 -- 192.168.123.101:0/338904202 wait complete. 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59c556a640 1 Processor -- start 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59c556a640 1 -- start start 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59c556a640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f59c0075470 0x7f59c0085be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59c556a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f59c010a6d0 0x7f59c0086120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59c556a640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f59c010b080 0x7f59c007fe40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59c556a640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f59c007e250 con 0x7f59c010a6d0 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59c556a640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f59c007e0d0 con 0x7f59c010b080 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59c556a640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f59c007e3d0 con 0x7f59c0075470 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59bf7fe640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f59c010b080 0x7f59c007fe40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59bf7fe640 1 --2- >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f59c010b080 0x7f59c007fe40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:46632/0 (socket says 192.168.123.101:46632) 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59bf7fe640 1 -- 192.168.123.101:0/1981250023 learned_addr learned my addr 192.168.123.101:0/1981250023 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59be7fc640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f59c010a6d0 0x7f59c0086120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59bf7fe640 1 -- 192.168.123.101:0/1981250023 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f59c0075470 msgr2=0x7f59c0085be0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59bf7fe640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f59c0075470 0x7f59c0085be0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59bf7fe640 1 -- 192.168.123.101:0/1981250023 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f59c010a6d0 msgr2=0x7f59c0086120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59bf7fe640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f59c010a6d0 0x7f59c0086120 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59bf7fe640 1 -- 192.168.123.101:0/1981250023 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f59c00806d0 con 0x7f59c010b080 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59bf7fe640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f59c010b080 0x7f59c007fe40 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f59b8009580 tx=0x7f59b8002a60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.111+0000 7f59be7fc640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f59c010a6d0 0x7f59c0086120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.115+0000 7f599ffff640 1 -- 192.168.123.101:0/1981250023 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f59b8002da0 con 0x7f59c010b080 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.115+0000 7f59c556a640 1 -- 192.168.123.101:0/1981250023 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f59c0080960 con 0x7f59c010b080 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.115+0000 7f59c556a640 1 -- 192.168.123.101:0/1981250023 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f59c0131f00 con 0x7f59c010b080 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.115+0000 7f599ffff640 1 -- 192.168.123.101:0/1981250023 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f59b80044b0 con 0x7f59c010b080 2026-03-09T15:55:18.118 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.115+0000 7f599ffff640 1 -- 192.168.123.101:0/1981250023 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f59b8038a70 con 0x7f59c010b080 2026-03-09T15:55:18.121 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f7d91ffb640 1 -- 192.168.123.101:0/2220209825 <== osd.7 v2:192.168.123.109:6824/1747724061 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7f7d94073260 con 0x7f7d700816c0 2026-03-09T15:55:18.121 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f7d9aacc640 1 -- 192.168.123.101:0/2220209825 >> [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] conn(0x7f7d700816c0 msgr2=0x7f7d70083b00 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.121 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f7d9aacc640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.109:6824/1747724061,v1:192.168.123.109:6825/1747724061] conn(0x7f7d700816c0 0x7f7d70083b00 crc :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.121 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f7d9aacc640 1 -- 192.168.123.101:0/2220209825 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f7d700777d0 msgr2=0x7f7d70079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.121 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f7d9aacc640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f7d700777d0 0x7f7d70079c90 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f7d84007c40 tx=0x7f7d840073d0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.121 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f7d9aacc640 1 -- 192.168.123.101:0/2220209825 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7d9410a850 msgr2=0x7f7d94117250 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.121 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f7d9aacc640 1 --2- 192.168.123.101:0/2220209825 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f7d9410a850 0x7f7d94117250 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7f7d8c002fa0 tx=0x7f7d8c005be0 comp rx=0 tx=0).stop 
2026-03-09T15:55:18.121 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f7d99042640 1 -- 192.168.123.101:0/2220209825 reap_dead start 2026-03-09T15:55:18.127 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f599ffff640 1 -- 192.168.123.101:0/1981250023 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f59b80040a0 con 0x7f59c010b080 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.127+0000 7f599ffff640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f59900777d0 0x7f5990079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.127+0000 7f59beffd640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f59900777d0 0x7f5990079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.127+0000 7f599ffff640 1 -- 192.168.123.101:0/1981250023 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f59b80be460 con 0x7f59c010b080 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.127+0000 7f599dffb640 1 -- 192.168.123.101:0/1981250023 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f598c000f80 con 0x7f59c010b080 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.127+0000 7f599ffff640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] conn(0x7f5990081640 0x7f5990083a80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.127+0000 7f59beffd640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f59900777d0 0x7f5990079c90 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f59b4005fd0 tx=0x7f59b4005d00 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.127+0000 7f599ffff640 1 -- 192.168.123.101:0/1981250023 --> [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f5990084150 con 0x7f5990081640 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.131+0000 7f59be7fc640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] conn(0x7f5990081640 0x7f5990083a80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.131+0000 7f599ffff640 1 -- 192.168.123.101:0/1981250023 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_get_version_reply(handle=1 version=64) ==== 24+0+0 (secure 0 0 0) 0x7f59b80868e0 con 0x7f59c010b080 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 
7f7d9aacc640 1 -- 192.168.123.101:0/2220209825 shutdown_connections 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f7d9aacc640 1 -- 192.168.123.101:0/2220209825 >> 192.168.123.101:0/2220209825 conn(0x7f7d9406db00 msgr2=0x7f7d94072670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f7d9aacc640 1 -- 192.168.123.101:0/2220209825 shutdown_connections 2026-03-09T15:55:18.136 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.119+0000 7f7d9aacc640 1 -- 192.168.123.101:0/2220209825 wait complete. 2026-03-09T15:55:18.137 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.135+0000 7f59be7fc640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] conn(0x7f5990081640 0x7f5990083a80 crc :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.6 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.151 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.135+0000 7f599ffff640 1 -- 192.168.123.101:0/1981250023 <== osd.6 v2:192.168.123.109:6816/920695066 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f5990084150 con 0x7f5990081640 2026-03-09T15:55:18.175 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.171+0000 7f599dffb640 1 -- 192.168.123.101:0/1981250023 --> [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f598c002d70 con 0x7f5990081640 2026-03-09T15:55:18.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.175+0000 7f599ffff640 1 -- 192.168.123.101:0/1981250023 <== osd.6 v2:192.168.123.109:6816/920695066 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7f598c002d70 con 0x7f5990081640 2026-03-09T15:55:18.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.175+0000 7f1722638640 1 -- 192.168.123.101:0/2857734402 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f171c10b080 msgr2=0x7f171c074d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.175+0000 7f1722638640 1 --2- 192.168.123.101:0/2857734402 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f171c10b080 0x7f171c074d30 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f171400b0a0 tx=0x7f17140316e0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 -- 192.168.123.101:0/2857734402 shutdown_connections 2026-03-09T15:55:18.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 --2- 192.168.123.101:0/2857734402 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f171c075470 0x7f171c07be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 --2- 192.168.123.101:0/2857734402 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f171c10b080 0x7f171c074d30 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 --2- 192.168.123.101:0/2857734402 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f171c10a6d0 
0x7f171c10aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.179 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 -- 192.168.123.101:0/2857734402 >> 192.168.123.101:0/2857734402 conn(0x7f171c06d9f0 msgr2=0x7f171c06de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 -- 192.168.123.101:0/2857734402 shutdown_connections 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 -- 192.168.123.101:0/2857734402 wait complete. 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 Processor -- start 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 -- start start 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f171c075470 0x7f171c07b6c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f171c10a6d0 0x7f171c07bc00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f171c075dd0 0x7f171c076250 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f171c07db90 con 0x7f171c075470 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f171c07da10 con 0x7f171c075dd0 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f1722638640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f171c07dd10 con 0x7f171c10a6d0 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f171bfff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f171c075470 0x7f171c07b6c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f171bfff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f171c075470 0x7f171c07b6c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:58180/0 (socket says 192.168.123.101:58180) 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f171bfff640 1 -- 192.168.123.101:0/1081852313 learned_addr learned my addr 192.168.123.101:0/1081852313 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:18.183 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f171b7fe640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f171c10a6d0 0x7f171c07bc00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.183 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f171bfff640 1 -- 192.168.123.101:0/1081852313 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f171c10a6d0 msgr2=0x7f171c07bc00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f171bfff640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f171c10a6d0 0x7f171c07bc00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f171bfff640 1 -- 192.168.123.101:0/1081852313 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f171c075dd0 msgr2=0x7f171c076250 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:55:18.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f171bfff640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f171c075dd0 0x7f171c076250 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f171bfff640 1 -- 192.168.123.101:0/1081852313 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f171c076ae0 con 0x7f171c075470 2026-03-09T15:55:18.184 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f171bfff640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f171c075470 0x7f171c07b6c0 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7f170c00da60 tx=0x7f170c00df30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.185 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f17197fa640 1 -- 192.168.123.101:0/1081852313 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f170c00cbf0 con 0x7f171c075470 2026-03-09T15:55:18.185 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f1722638640 1 -- 192.168.123.101:0/1081852313 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f171c137d00 con 0x7f171c075470 2026-03-09T15:55:18.185 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f1722638640 1 -- 192.168.123.101:0/1081852313 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f171c138240 con 0x7f171c075470 2026-03-09T15:55:18.185 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f17197fa640 1 -- 192.168.123.101:0/1081852313 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f170c004510 con 0x7f171c075470 2026-03-09T15:55:18.185 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f17197fa640 1 -- 192.168.123.101:0/1081852313 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 
0 0 0) 0x7f170c002e30 con 0x7f171c075470 2026-03-09T15:55:18.186 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f17197fa640 1 -- 192.168.123.101:0/1081852313 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f170c020430 con 0x7f171c075470 2026-03-09T15:55:18.186 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f17197fa640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f16f40777d0 0x7f16f4079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.188 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f599dffb640 1 -- 192.168.123.101:0/1981250023 >> [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] conn(0x7f5990081640 msgr2=0x7f5990083a80 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.188 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.179+0000 7f599dffb640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.109:6816/920695066,v1:192.168.123.109:6817/920695066] conn(0x7f5990081640 0x7f5990083a80 crc :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.188 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f599dffb640 1 -- 192.168.123.101:0/1981250023 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f59900777d0 msgr2=0x7f5990079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.188 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f599dffb640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f59900777d0 0x7f5990079c90 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f59b4005fd0 tx=0x7f59b4005d00 comp rx=0 tx=0).stop 2026-03-09T15:55:18.188 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f599dffb640 1 -- 192.168.123.101:0/1981250023 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f59c010b080 msgr2=0x7f59c007fe40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.188 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f599dffb640 1 --2- 192.168.123.101:0/1981250023 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f59c010b080 0x7f59c007fe40 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f59b8009580 tx=0x7f59b8002a60 comp rx=0 tx=0).stop 2026-03-09T15:55:18.188 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f59bf7fe640 1 -- 192.168.123.101:0/1981250023 reap_dead start 2026-03-09T15:55:18.188 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f599dffb640 1 -- 192.168.123.101:0/1981250023 shutdown_connections 2026-03-09T15:55:18.188 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f599dffb640 1 -- 192.168.123.101:0/1981250023 >> 192.168.123.101:0/1981250023 conn(0x7f59c006d9f0 msgr2=0x7f59c0075dd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.188 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.183+0000 7f599dffb640 1 -- 192.168.123.101:0/1981250023 shutdown_connections 2026-03-09T15:55:18.188 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.187+0000 7f599dffb640 1 -- 192.168.123.101:0/1981250023 wait complete. 
2026-03-09T15:55:18.189 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.187+0000 7f171b7fe640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f16f40777d0 0x7f16f4079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.189 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.187+0000 7f17197fa640 1 -- 192.168.123.101:0/1081852313 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f170c09ac70 con 0x7f171c075470 2026-03-09T15:55:18.189 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.187+0000 7f171b7fe640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f16f40777d0 0x7f16f4079c90 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f1714002790 tx=0x7f171400d040 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.189 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.187+0000 7f1722638640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] conn(0x7f171c0630c0 0x7f171c0634a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.189 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.187+0000 7f1722638640 1 -- 192.168.123.101:0/1081852313 --> [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f171c10b080 con 0x7f171c0630c0 2026-03-09T15:55:18.192 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.191+0000 7f1720bae640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] conn(0x7f171c0630c0 0x7f171c0634a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.192 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.191+0000 7f1720bae640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] conn(0x7f171c0630c0 0x7f171c0634a0 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.5 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.201 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.191+0000 7f17197fa640 1 -- 192.168.123.101:0/1081852313 <== osd.5 v2:192.168.123.109:6808/2799407982 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f171c10b080 con 0x7f171c0630c0 2026-03-09T15:55:18.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f16faffd640 1 -- 192.168.123.101:0/1081852313 --> [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f171c0639e0 con 0x7f171c0630c0 2026-03-09T15:55:18.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f17197fa640 1 -- 192.168.123.101:0/1081852313 <== osd.5 v2:192.168.123.109:6808/2799407982 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7f171c0639e0 con 0x7f171c0630c0 2026-03-09T15:55:18.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f16faffd640 1 -- 192.168.123.101:0/1081852313 >> 
[v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] conn(0x7f171c0630c0 msgr2=0x7f171c0634a0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f16faffd640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.109:6808/2799407982,v1:192.168.123.109:6809/2799407982] conn(0x7f171c0630c0 0x7f171c0634a0 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f16faffd640 1 -- 192.168.123.101:0/1081852313 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f16f40777d0 msgr2=0x7f16f4079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.205 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f16faffd640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f16f40777d0 0x7f16f4079c90 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f1714002790 tx=0x7f171400d040 comp rx=0 tx=0).stop 2026-03-09T15:55:18.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f16faffd640 1 -- 192.168.123.101:0/1081852313 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f171c075470 msgr2=0x7f171c07b6c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f16faffd640 1 --2- 192.168.123.101:0/1081852313 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f171c075470 0x7f171c07b6c0 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7f170c00da60 tx=0x7f170c00df30 comp rx=0 tx=0).stop 2026-03-09T15:55:18.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f1720bae640 1 -- 192.168.123.101:0/1081852313 reap_dead start 2026-03-09T15:55:18.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f16faffd640 1 -- 192.168.123.101:0/1081852313 shutdown_connections 2026-03-09T15:55:18.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f16faffd640 1 -- 192.168.123.101:0/1081852313 >> 192.168.123.101:0/1081852313 conn(0x7f171c06d9f0 msgr2=0x7f171c07fe60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f16faffd640 1 -- 192.168.123.101:0/1081852313 shutdown_connections 2026-03-09T15:55:18.206 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.203+0000 7f16faffd640 1 -- 192.168.123.101:0/1081852313 wait complete. 
2026-03-09T15:55:18.286 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.283+0000 7f764ac80640 1 -- 192.168.123.101:0/2568271235 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f763c0b7ee0 msgr2=0x7f763c0ba2d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.286 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.283+0000 7f764ac80640 1 --2- 192.168.123.101:0/2568271235 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f763c0b7ee0 0x7f763c0ba2d0 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7f764001c080 tx=0x7f7640040470 comp rx=0 tx=0).stop 2026-03-09T15:55:18.287 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.283+0000 7f764ac80640 1 -- 192.168.123.101:0/2568271235 shutdown_connections 2026-03-09T15:55:18.287 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.283+0000 7f764ac80640 1 --2- 192.168.123.101:0/2568271235 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f763c0b7ee0 0x7f763c0ba2d0 unknown :-1 s=CLOSED pgs=161 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.287 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.283+0000 7f764ac80640 1 --2- 192.168.123.101:0/2568271235 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763c0a5700 0x7f763c0b79a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.287 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.283+0000 7f764ac80640 1 --2- 192.168.123.101:0/2568271235 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763c0a4de0 0x7f763c0a51c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.287 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 -- 192.168.123.101:0/2568271235 >> 192.168.123.101:0/2568271235 conn(0x7f763c01a730 msgr2=0x7f763c01ab40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.287 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 -- 192.168.123.101:0/2568271235 shutdown_connections 2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 -- 192.168.123.101:0/2568271235 wait complete. 
2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 Processor -- start 2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 -- start start 2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f763c0a4de0 0x7f763c0b7840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763c0a5700 0x7f763c0b28d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763c0b7ee0 0x7f763c0b2f70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f763c0aebb0 con 0x7f763c0a4de0 2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f763c0aea30 con 0x7f763c0b7ee0 2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f763c0aed30 con 0x7f763c0a5700 2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764947d640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763c0a5700 0x7f763c0b28d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7649c7e640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f763c0a4de0 0x7f763c0b7840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.288 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7649c7e640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f763c0a4de0 0x7f763c0b7840 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:58190/0 (socket says 192.168.123.101:58190) 2026-03-09T15:55:18.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7649c7e640 1 -- 192.168.123.101:0/3066470046 learned_addr learned my addr 192.168.123.101:0/3066470046 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:18.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764947d640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763c0a5700 0x7f763c0b28d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:54586/0 (socket says 
192.168.123.101:54586) 2026-03-09T15:55:18.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7649c7e640 1 -- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763c0a5700 msgr2=0x7f763c0b28d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7649c7e640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763c0a5700 0x7f763c0b28d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7649c7e640 1 -- 192.168.123.101:0/3066470046 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763c0b7ee0 msgr2=0x7f763c0b2f70 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:55:18.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7649c7e640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f763c0b7ee0 0x7f763c0b2f70 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7649c7e640 1 -- 192.168.123.101:0/3066470046 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f763c148b30 con 0x7f763c0a4de0 2026-03-09T15:55:18.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764947d640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f763c0a5700 0x7f763c0b28d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T15:55:18.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7649c7e640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f763c0a4de0 0x7f763c0b7840 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7f763800b570 tx=0x7f763800ba40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.289 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7632ffd640 1 -- 192.168.123.101:0/3066470046 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7638013020 con 0x7f763c0a4de0 2026-03-09T15:55:18.291 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 -- 192.168.123.101:0/3066470046 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f763c148d00 con 0x7f763c0a4de0 2026-03-09T15:55:18.291 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f764ac80640 1 -- 192.168.123.101:0/3066470046 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f763c1491c0 con 0x7f763c0a4de0 2026-03-09T15:55:18.293 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7632ffd640 1 -- 192.168.123.101:0/3066470046 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f7638004480 con 0x7f763c0a4de0 2026-03-09T15:55:18.293 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.287+0000 7f7632ffd640 1 -- 192.168.123.101:0/3066470046 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f763800f950 con 0x7f763c0a4de0 2026-03-09T15:55:18.293 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.291+0000 7f7630ff9640 1 -- 192.168.123.101:0/3066470046 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f760c000f80 con 0x7f763c0a4de0 2026-03-09T15:55:18.297 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.295+0000 7f7632ffd640 1 -- 192.168.123.101:0/3066470046 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f763800fbd0 con 0x7f763c0a4de0 2026-03-09T15:55:18.300 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.295+0000 7f7632ffd640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f76200777d0 0x7f7620079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.300 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.299+0000 7f764947d640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f76200777d0 0x7f7620079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.310 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.303+0000 7f764947d640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f76200777d0 0x7f7620079c90 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f7644064240 tx=0x7f7644051260 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.310 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.303+0000 7f7632ffd640 1 -- 
192.168.123.101:0/3066470046 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f763809e050 con 0x7f763c0a4de0 2026-03-09T15:55:18.310 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.303+0000 7f7632ffd640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] conn(0x7f76200816c0 0x7f7620083b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.310 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.303+0000 7f764a47f640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] conn(0x7f76200816c0 0x7f7620083b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.310 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.303+0000 7f7632ffd640 1 -- 192.168.123.101:0/3066470046 --> [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f76200841d0 con 0x7f76200816c0 2026-03-09T15:55:18.310 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.303+0000 7f7632ffd640 1 -- 192.168.123.101:0/3066470046 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_get_version_reply(handle=1 version=64) ==== 24+0+0 (secure 0 0 0) 0x7f76380a23f0 con 0x7f763c0a4de0 2026-03-09T15:55:18.310 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.307+0000 7f764a47f640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] conn(0x7f76200816c0 0x7f7620083b00 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.3 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.311 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.307+0000 7f7632ffd640 1 -- 192.168.123.101:0/3066470046 <== osd.3 v2:192.168.123.101:6826/994063283 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f76200841d0 con 0x7f76200816c0 2026-03-09T15:55:18.329 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.327+0000 7efdc3c77640 1 -- 192.168.123.101:0/2738036974 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7efdbc10a6d0 msgr2=0x7efdbc10aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.329 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.327+0000 7efdc3c77640 1 --2- 192.168.123.101:0/2738036974 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7efdbc10a6d0 0x7efdbc10aab0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7efdb4009a30 tx=0x7efdb40303b0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.333 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.327+0000 7efdc3c77640 1 -- 192.168.123.101:0/2738036974 shutdown_connections 2026-03-09T15:55:18.333 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.327+0000 7efdc3c77640 1 --2- 192.168.123.101:0/2738036974 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efdbc075470 0x7efdbc07be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.333 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.327+0000 7efdc3c77640 1 --2- 192.168.123.101:0/2738036974 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7efdbc10b080 0x7efdbc074d30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.333 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.327+0000 7efdc3c77640 1 --2- 192.168.123.101:0/2738036974 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7efdbc10a6d0 0x7efdbc10aab0 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.333 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.327+0000 7efdc3c77640 1 -- 192.168.123.101:0/2738036974 >> 192.168.123.101:0/2738036974 conn(0x7efdbc06d9f0 msgr2=0x7efdbc06de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.333 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.327+0000 7efdc3c77640 1 -- 192.168.123.101:0/2738036974 shutdown_connections 2026-03-09T15:55:18.339 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 -- 192.168.123.101:0/790972960 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5b5010b080 msgr2=0x7f5b50074d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.339 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 --2- 192.168.123.101:0/790972960 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5b5010b080 0x7f5b50074d30 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f5b4800b0a0 tx=0x7f5b480316e0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.339 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 -- 192.168.123.101:0/790972960 shutdown_connections 2026-03-09T15:55:18.339 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 --2- 192.168.123.101:0/790972960 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5b50075470 0x7f5b5007be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.339 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 --2- 192.168.123.101:0/790972960 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5b5010b080 0x7f5b50074d30 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.339 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 --2- 192.168.123.101:0/790972960 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5b5010a6d0 0x7f5b5010aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.339 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 -- 192.168.123.101:0/790972960 >> 192.168.123.101:0/790972960 conn(0x7f5b5006d9f0 msgr2=0x7f5b5006de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc3c77640 1 -- 192.168.123.101:0/2738036974 wait complete. 
2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc3c77640 1 Processor -- start 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc3c77640 1 -- start start 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc3c77640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7efdbc075470 0x7efdbc085c70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc3c77640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efdbc10a6d0 0x7efdbc07fd40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc3c77640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7efdbc10b080 0x7efdbc080280 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc3c77640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7efdbc07e2d0 con 0x7efdbc10b080 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc3c77640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7efdbc07e150 con 0x7efdbc10a6d0 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc3c77640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7efdbc07e450 con 0x7efdbc075470 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc19ec640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7efdbc075470 0x7efdbc085c70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc19ec640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7efdbc075470 0x7efdbc085c70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:54616/0 (socket says 192.168.123.101:54616) 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc19ec640 1 -- 192.168.123.101:0/784111984 learned_addr learned my addr 192.168.123.101:0/784111984 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc19ec640 1 -- 192.168.123.101:0/784111984 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efdbc10a6d0 msgr2=0x7efdbc07fd40 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.331+0000 7efdc19ec640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efdbc10a6d0 0x7efdbc07fd40 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7efdc19ec640 1 -- 
192.168.123.101:0/784111984 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7efdbc10b080 msgr2=0x7efdbc080280 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:55:18.341 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7efdc21ed640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7efdbc10b080 0x7efdbc080280 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.341 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7efdc19ec640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7efdbc10b080 0x7efdbc080280 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.341 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7efdc19ec640 1 -- 192.168.123.101:0/784111984 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efdbc080840 con 0x7efdbc075470 2026-03-09T15:55:18.341 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7efdc21ed640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7efdbc10b080 0x7efdbc080280 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:55:18.341 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7efdc19ec640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7efdbc075470 0x7efdbc085c70 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7efdb40040c0 tx=0x7efdb4039900 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.341 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7efdb2ffd640 1 -- 192.168.123.101:0/784111984 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efdb4039cf0 con 0x7efdbc075470 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 -- 192.168.123.101:0/790972960 shutdown_connections 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 -- 192.168.123.101:0/790972960 wait complete. 
2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 Processor -- start 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 -- start start 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5b50075470 0x7f5b5007b6c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5b5010a6d0 0x7f5b5007bc00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5b50075dd0 0x7f5b50076250 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f5b5007db90 con 0x7f5b5010a6d0 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f5b5007da10 con 0x7f5b50075470 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.335+0000 7f5b56bde640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f5b5007dd10 con 0x7f5b50075dd0 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7f5b54953640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5b50075470 0x7f5b5007b6c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7f5b4ffff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5b5010a6d0 0x7f5b5007bc00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7f5b4ffff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5b5010a6d0 0x7f5b5007bc00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:58234/0 (socket says 192.168.123.101:58234) 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7f5b4ffff640 1 -- 192.168.123.101:0/3147115689 learned_addr learned my addr 192.168.123.101:0/3147115689 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7f5b55154640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5b50075dd0 0x7f5b50076250 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.343 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7efdc3c77640 1 -- 192.168.123.101:0/784111984 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7efdbc080ad0 con 0x7efdbc075470 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7efdc3c77640 1 -- 192.168.123.101:0/784111984 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7efdbc1c75e0 con 0x7efdbc075470 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7efdb2ffd640 1 -- 192.168.123.101:0/784111984 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7efdb4034070 con 0x7efdbc075470 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7efdb2ffd640 1 -- 192.168.123.101:0/784111984 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efdb4037370 con 0x7efdbc075470 2026-03-09T15:55:18.343 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7efdb0ff9640 1 -- 192.168.123.101:0/784111984 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_get_version(what=osdmap handle=1) -- 0x7efd84000f80 con 0x7efdbc075470 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7f5b4ffff640 1 -- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5b50075dd0 msgr2=0x7f5b50076250 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7f5b4ffff640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5b50075dd0 0x7f5b50076250 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7f5b4ffff640 1 -- 192.168.123.101:0/3147115689 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5b50075470 msgr2=0x7f5b5007b6c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7f5b4ffff640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5b50075470 0x7f5b5007b6c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.339+0000 7f5b4ffff640 1 -- 192.168.123.101:0/3147115689 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5b50076ae0 con 0x7f5b5010a6d0 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.343+0000 7f5b4ffff640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5b5010a6d0 0x7f5b5007bc00 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7f5b4800b1d0 tx=0x7f5b48009bf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.343+0000 7f5b4dffb640 1 -- 192.168.123.101:0/3147115689 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5b48009420 con 0x7f5b5010a6d0 2026-03-09T15:55:18.349 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.343+0000 7f5b56bde640 1 -- 192.168.123.101:0/3147115689 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f5b50137d00 con 0x7f5b5010a6d0 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.343+0000 7f5b56bde640 1 -- 192.168.123.101:0/3147115689 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5b501381f0 con 0x7f5b5010a6d0 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.343+0000 7f5b4dffb640 1 -- 192.168.123.101:0/3147115689 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f5b48007c90 con 0x7f5b5010a6d0 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.343+0000 7f5b4dffb640 1 -- 192.168.123.101:0/3147115689 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5b48039670 con 0x7f5b5010a6d0 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7f5b4dffb640 1 -- 192.168.123.101:0/3147115689 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f5b4804c020 con 0x7f5b5010a6d0 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7f5b4dffb640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f5b3c0777d0 0x7f5b3c079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.349 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7f5b54953640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f5b3c0777d0 0x7f5b3c079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.350 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7f5b4dffb640 1 -- 192.168.123.101:0/3147115689 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f5b480bef40 con 0x7f5b5010a6d0 2026-03-09T15:55:18.350 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7efdb2ffd640 1 -- 192.168.123.101:0/784111984 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7efdb404a020 con 0x7efdbc075470 2026-03-09T15:55:18.350 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7efdb2ffd640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7efd980777d0 0x7efd98079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.352 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7f5b54953640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f5b3c0777d0 0x7f5b3c079c90 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f5b400044f0 tx=0x7f5b400091e0 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.352 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.351+0000 7f5b56bde640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] 
conn(0x7f5b500630c0 0x7f5b500634a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.352 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.351+0000 7f5b55154640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] conn(0x7f5b500630c0 0x7f5b500634a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.352 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.351+0000 7f5b55154640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] conn(0x7f5b500630c0 0x7f5b500634a0 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.352 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.351+0000 7f5b56bde640 1 -- 192.168.123.101:0/3147115689 --> [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f5b5013a520 con 0x7f5b500630c0 2026-03-09T15:55:18.352 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7efdc11eb640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7efd980777d0 0x7efd98079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.352 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7efdc11eb640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7efd980777d0 0x7efd98079c90 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7efdbc0750d0 tx=0x7efdb8007570 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.352 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7efdb2ffd640 1 -- 192.168.123.101:0/784111984 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7efdb40bf7a0 con 0x7efdbc075470 2026-03-09T15:55:18.352 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7efdb2ffd640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] conn(0x7efd98081640 0x7efd98083a80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.352 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7efdb2ffd640 1 -- 192.168.123.101:0/784111984 --> [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7efdb403ae50 con 0x7efd98081640 2026-03-09T15:55:18.352 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.347+0000 7efdb2ffd640 1 -- 192.168.123.101:0/784111984 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_get_version_reply(handle=1 version=64) ==== 24+0+0 (secure 0 0 0) 0x7efdb40c6050 con 0x7efdbc075470 2026-03-09T15:55:18.354 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.351+0000 7f5b4dffb640 1 -- 192.168.123.101:0/3147115689 <== osd.1 v2:192.168.123.101:6810/4163266826 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f5b44009ab0 con 0x7f5b500630c0 2026-03-09T15:55:18.355 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.351+0000 7efdc21ed640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] conn(0x7efd98081640 0x7efd98083a80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.355 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.351+0000 7efdc21ed640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] conn(0x7efd98081640 0x7efd98083a80 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.4 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.359 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.359+0000 7efdb2ffd640 1 -- 192.168.123.101:0/784111984 <== osd.4 v2:192.168.123.109:6800/2242917856 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7efdb403ae50 con 0x7efd98081640 2026-03-09T15:55:18.367 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.367+0000 7f7630ff9640 1 -- 192.168.123.101:0/3066470046 --> [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f760c002d70 con 0x7f76200816c0 2026-03-09T15:55:18.369 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.367+0000 7f5b1f7fe640 1 -- 192.168.123.101:0/3147115689 --> [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f5b5010b080 con 0x7f5b500630c0 2026-03-09T15:55:18.370 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.367+0000 7f7632ffd640 1 -- 192.168.123.101:0/3066470046 <== osd.3 v2:192.168.123.101:6826/994063283 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7f7640044020 con 0x7f76200816c0 2026-03-09T15:55:18.370 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.367+0000 7f5b4dffb640 1 -- 192.168.123.101:0/3147115689 <== osd.1 v2:192.168.123.101:6810/4163266826 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f5b5010b080 con 0x7f5b500630c0 2026-03-09T15:55:18.370 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.367+0000 7f5b1f7fe640 1 -- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] conn(0x7f5b500630c0 msgr2=0x7f5b500634a0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.370 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.367+0000 7f5b1f7fe640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:6810/4163266826,v1:192.168.123.101:6811/4163266826] conn(0x7f5b500630c0 0x7f5b500634a0 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.371 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f7630ff9640 1 -- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] conn(0x7f76200816c0 msgr2=0x7f7620083b00 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f5b1f7fe640 1 -- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f5b3c0777d0 msgr2=0x7f5b3c079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.373 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f5b1f7fe640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f5b3c0777d0 0x7f5b3c079c90 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f5b400044f0 tx=0x7f5b400091e0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f5b1f7fe640 1 -- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5b5010a6d0 msgr2=0x7f5b5007bc00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f5b1f7fe640 1 --2- 192.168.123.101:0/3147115689 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5b5010a6d0 0x7f5b5007bc00 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7f5b4800b1d0 tx=0x7f5b48009bf0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f5b55154640 1 -- 192.168.123.101:0/3147115689 reap_dead start 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f5b1f7fe640 1 -- 192.168.123.101:0/3147115689 shutdown_connections 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f5b1f7fe640 1 -- 192.168.123.101:0/3147115689 >> 192.168.123.101:0/3147115689 conn(0x7f5b5006d9f0 msgr2=0x7f5b5007fe60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f5b1f7fe640 1 -- 192.168.123.101:0/3147115689 shutdown_connections 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f5b1f7fe640 1 -- 192.168.123.101:0/3147115689 wait complete. 
2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f7630ff9640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:6826/994063283,v1:192.168.123.101:6827/994063283] conn(0x7f76200816c0 0x7f7620083b00 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f7630ff9640 1 -- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f76200777d0 msgr2=0x7f7620079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f7630ff9640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f76200777d0 0x7f7620079c90 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f7644064240 tx=0x7f7644051260 comp rx=0 tx=0).stop 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f7630ff9640 1 -- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f763c0a4de0 msgr2=0x7f763c0b7840 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f7630ff9640 1 --2- 192.168.123.101:0/3066470046 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f763c0a4de0 0x7f763c0b7840 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7f763800b570 tx=0x7f763800ba40 comp rx=0 tx=0).stop 2026-03-09T15:55:18.373 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.371+0000 7f764a47f640 1 -- 192.168.123.101:0/3066470046 reap_dead start 2026-03-09T15:55:18.375 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.375+0000 7f7630ff9640 1 -- 192.168.123.101:0/3066470046 shutdown_connections 2026-03-09T15:55:18.375 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.375+0000 7f7630ff9640 1 -- 192.168.123.101:0/3066470046 >> 192.168.123.101:0/3066470046 conn(0x7f763c01a730 msgr2=0x7f763c0b97a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.375 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.375+0000 7f7630ff9640 1 -- 192.168.123.101:0/3066470046 shutdown_connections 2026-03-09T15:55:18.380 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.375+0000 7f7630ff9640 1 -- 192.168.123.101:0/3066470046 wait complete. 
2026-03-09T15:55:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:17 vm09 bash[22983]: audit 2026-03-09T15:55:16.166048+0000 mgr.y (mgr.14520) 54 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:17 vm09 bash[22983]: audit 2026-03-09T15:55:16.166048+0000 mgr.y (mgr.14520) 54 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:17 vm09 bash[22983]: cluster 2026-03-09T15:55:16.643720+0000 mgr.y (mgr.14520) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:17 vm09 bash[22983]: cluster 2026-03-09T15:55:16.643720+0000 mgr.y (mgr.14520) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T15:55:18.394 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.391+0000 7efdb0ff9640 1 -- 192.168.123.101:0/784111984 --> [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7efd84002d70 con 0x7efd98081640 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.395+0000 7efdb2ffd640 1 -- 192.168.123.101:0/784111984 <== osd.4 v2:192.168.123.109:6800/2242917856 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7efd84002d70 con 0x7efd98081640 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.399+0000 7efdb0ff9640 1 -- 192.168.123.101:0/784111984 >> [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] conn(0x7efd98081640 msgr2=0x7efd98083a80 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.399+0000 7efdb0ff9640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.109:6800/2242917856,v1:192.168.123.109:6801/2242917856] conn(0x7efd98081640 0x7efd98083a80 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.403+0000 7efdb0ff9640 1 -- 192.168.123.101:0/784111984 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7efd980777d0 msgr2=0x7efd98079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.403+0000 7efdb0ff9640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7efd980777d0 0x7efd98079c90 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7efdbc0750d0 tx=0x7efdb8007570 comp rx=0 tx=0).stop 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.403+0000 7efdb0ff9640 1 -- 192.168.123.101:0/784111984 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7efdbc075470 msgr2=0x7efdbc085c70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.403+0000 7efdb0ff9640 1 --2- 192.168.123.101:0/784111984 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7efdbc075470 
0x7efdbc085c70 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7efdb40040c0 tx=0x7efdb4039900 comp rx=0 tx=0).stop 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.403+0000 7efdc21ed640 1 -- 192.168.123.101:0/784111984 reap_dead start 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.403+0000 7efdb0ff9640 1 -- 192.168.123.101:0/784111984 shutdown_connections 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.403+0000 7efdb0ff9640 1 -- 192.168.123.101:0/784111984 >> 192.168.123.101:0/784111984 conn(0x7efdbc06d9f0 msgr2=0x7efdbc072d40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.403+0000 7efdb0ff9640 1 -- 192.168.123.101:0/784111984 shutdown_connections 2026-03-09T15:55:18.404 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.403+0000 7efdb0ff9640 1 -- 192.168.123.101:0/784111984 wait complete. 2026-03-09T15:55:18.416 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.403+0000 7fc8cf577640 1 -- 192.168.123.101:0/2149735070 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fc8d010a470 msgr2=0x7fc8d010a850 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.416 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.403+0000 7fc8cf577640 1 --2- 192.168.123.101:0/2149735070 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fc8d010a470 0x7fc8d010a850 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fc8c4009a30 tx=0x7fc8c402f240 comp rx=0 tx=0).stop 2026-03-09T15:55:18.416 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.411+0000 7fc8cf577640 1 -- 192.168.123.101:0/2149735070 shutdown_connections 2026-03-09T15:55:18.416 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.411+0000 7fc8cf577640 1 --2- 192.168.123.101:0/2149735070 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc8d0114b70 0x7fc8d0116f60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.416 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.411+0000 7fc8cf577640 1 --2- 192.168.123.101:0/2149735070 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc8d010ad90 0x7fc8d0114630 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.416 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.411+0000 7fc8cf577640 1 --2- 192.168.123.101:0/2149735070 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fc8d010a470 0x7fc8d010a850 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.416 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.411+0000 7fc8cf577640 1 -- 192.168.123.101:0/2149735070 >> 192.168.123.101:0/2149735070 conn(0x7fc8d006de50 msgr2=0x7fc8d006e260 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.416 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.411+0000 7fc8cf577640 1 -- 192.168.123.101:0/2149735070 shutdown_connections 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8cf577640 1 -- 192.168.123.101:0/2149735070 wait complete. 
2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8cf577640 1 Processor -- start 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8cf577640 1 -- start start 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8cf577640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc8d010a470 0x7fc8d011ed00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8cf577640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc8d010ad90 0x7fc8d011f240 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8cf577640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fc8d0114b70 0x7fc8d0126a30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8cf577640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fc8d0118730 con 0x7fc8d010a470 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8cf577640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7fc8d01185b0 con 0x7fc8d010ad90 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8cf577640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7fc8d01188b0 con 0x7fc8d0114b70 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8cdd74640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc8d010ad90 0x7fc8d011f240 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8ced76640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fc8d0114b70 0x7fc8d0126a30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8ced76640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fc8d0114b70 0x7fc8d0126a30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:54646/0 (socket says 192.168.123.101:54646) 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8ced76640 1 -- 192.168.123.101:0/1747309748 learned_addr learned my addr 192.168.123.101:0/1747309748 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:18.417 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8ce575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc8d010a470 0x7fc8d011ed00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.418 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8ced76640 1 -- 192.168.123.101:0/1747309748 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc8d010ad90 msgr2=0x7fc8d011f240 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.418 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8ced76640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc8d010ad90 0x7fc8d011f240 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.418 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8ced76640 1 -- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc8d010a470 msgr2=0x7fc8d011ed00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.418 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8ced76640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fc8d010a470 0x7fc8d011ed00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.418 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8ced76640 1 -- 192.168.123.101:0/1747309748 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc8d01272f0 con 0x7fc8d0114b70 2026-03-09T15:55:18.418 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.415+0000 7fc8ced76640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fc8d0114b70 0x7fc8d0126a30 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7fc8c0004830 tx=0x7fc8c000d4a0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.420 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8bf7fe640 1 -- 192.168.123.101:0/1747309748 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fc8c00090d0 con 0x7fc8d0114b70 2026-03-09T15:55:18.420 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8cf577640 1 -- 192.168.123.101:0/1747309748 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fc8d0119ea0 con 0x7fc8d0114b70 2026-03-09T15:55:18.420 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8cf577640 1 -- 192.168.123.101:0/1747309748 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fc8d011a380 con 0x7fc8d0114b70 2026-03-09T15:55:18.420 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8bf7fe640 1 -- 192.168.123.101:0/1747309748 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fc8c0009270 con 0x7fc8d0114b70 2026-03-09T15:55:18.420 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8bf7fe640 1 -- 192.168.123.101:0/1747309748 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fc8c0013670 con 0x7fc8d0114b70 2026-03-09T15:55:18.420 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8cf577640 1 -- 192.168.123.101:0/1747309748 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_get_version(what=osdmap handle=1) -- 0x7fc894000f80 con 0x7fc8d0114b70 2026-03-09T15:55:18.425 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8bf7fe640 1 -- 192.168.123.101:0/1747309748 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fc8c0012070 con 0x7fc8d0114b70 2026-03-09T15:55:18.425 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8bf7fe640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fc8a00777d0 0x7fc8a0079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.425 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8bf7fe640 1 -- 192.168.123.101:0/1747309748 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7fc8c009a8c0 con 0x7fc8d0114b70 2026-03-09T15:55:18.425 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8bf7fe640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] conn(0x7fc8a00816c0 0x7fc8a0083b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.425 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8bf7fe640 1 -- 192.168.123.101:0/1747309748 --> [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7fc8a00841d0 con 0x7fc8a00816c0 2026-03-09T15:55:18.425 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.419+0000 7fc8bf7fe640 1 -- 192.168.123.101:0/1747309748 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_get_version_reply(handle=1 version=64) ==== 24+0+0 (secure 0 0 0) 0x7fc8c009f2a0 con 0x7fc8d0114b70 2026-03-09T15:55:18.425 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.423+0000 7fc8cdd74640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] conn(0x7fc8a00816c0 0x7fc8a0083b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.425 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.423+0000 7fc8ce575640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fc8a00777d0 0x7fc8a0079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.425 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.423+0000 7fc8cdd74640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] conn(0x7fc8a00816c0 0x7fc8a0083b00 crc :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.425 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.423+0000 7fc8ce575640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fc8a00777d0 0x7fc8a0079c90 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fc8c4002410 tx=0x7fc8c403a040 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.428 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.423+0000 7fc8bf7fe640 1 -- 192.168.123.101:0/1747309748 <== osd.2 
v2:192.168.123.101:6818/1701239335 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7fc8a00841d0 con 0x7fc8a00816c0 2026-03-09T15:55:18.496 INFO:teuthology.orchestra.run.vm01.stdout:184683593753 2026-03-09T15:55:18.496 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd last-stat-seq osd.6 2026-03-09T15:55:18.511 INFO:teuthology.orchestra.run.vm01.stdout:158913789984 2026-03-09T15:55:18.511 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd last-stat-seq osd.5 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 -- 192.168.123.101:0/1431099445 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f464011c780 msgr2=0x7f464011eb70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 --2- 192.168.123.101:0/1431099445 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f464011c780 0x7f464011eb70 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7f463c00b3e0 tx=0x7f463c02f5f0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 -- 192.168.123.101:0/1431099445 shutdown_connections 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 --2- 192.168.123.101:0/1431099445 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f464011c780 0x7f464011eb70 unknown :-1 s=CLOSED pgs=164 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 --2- 192.168.123.101:0/1431099445 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f464010a850 0x7f464010acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 --2- 192.168.123.101:0/1431099445 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f464010a470 0x7f46401114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 -- 192.168.123.101:0/1431099445 >> 192.168.123.101:0/1431099445 conn(0x7f464006d9f0 msgr2=0x7f464006de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 -- 192.168.123.101:0/1431099445 shutdown_connections 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 -- 192.168.123.101:0/1431099445 wait complete. 
2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 Processor -- start 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 -- start start 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f464010a470 0x7f46401124d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f464010a850 0x7f4640112a10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4640112f50 0x7f46401bdf20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f46401211f0 con 0x7f464010a850 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f4640121070 con 0x7f464010a470 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f4640121370 con 0x7f4640112f50 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f4644d24640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f464010a850 0x7f4640112a10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f4644d24640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f464010a850 0x7f4640112a10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:58278/0 (socket says 192.168.123.101:58278) 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f4644d24640 1 -- 192.168.123.101:0/2073486235 learned_addr learned my addr 192.168.123.101:0/2073486235 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f4644d24640 1 -- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4640112f50 msgr2=0x7f46401bdf20 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f4644d24640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4640112f50 0x7f46401bdf20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f4644d24640 1 -- 
192.168.123.101:0/2073486235 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f464010a470 msgr2=0x7f46401124d0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f4644d24640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f464010a470 0x7f46401124d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f4644d24640 1 -- 192.168.123.101:0/2073486235 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f46401be560 con 0x7f464010a850 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f4644d24640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f464010a850 0x7f4640112a10 secure :-1 s=READY pgs=165 cs=0 l=1 rev1=1 crypto rx=0x7f463400b810 tx=0x7f463400bce0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f462e7fc640 1 -- 192.168.123.101:0/2073486235 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4634004270 con 0x7f464010a850 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 -- 192.168.123.101:0/2073486235 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f46401be850 con 0x7f464010a850 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.495+0000 7f46477b0640 1 -- 192.168.123.101:0/2073486235 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f46401bed90 con 0x7f464010a850 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.503+0000 7f462e7fc640 1 -- 192.168.123.101:0/2073486235 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f4634010070 con 0x7f464010a850 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.503+0000 7f462e7fc640 1 -- 192.168.123.101:0/2073486235 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f463400c940 con 0x7f464010a850 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.503+0000 7f462e7fc640 1 -- 192.168.123.101:0/2073486235 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f463400cba0 con 0x7f464010a850 2026-03-09T15:55:18.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.511+0000 7f462e7fc640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f461c077800 0x7f461c079cc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.517 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.507+0000 7fc8cf577640 1 -- 192.168.123.101:0/1747309748 --> [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7fc894002d70 con 0x7fc8a00816c0 2026-03-09T15:55:18.517 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.515+0000 7fc8bf7fe640 1 -- 192.168.123.101:0/1747309748 <== 
osd.2 v2:192.168.123.101:6818/1701239335 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7fc894002d70 con 0x7fc8a00816c0 2026-03-09T15:55:18.520 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.515+0000 7fc8bd7fa640 1 -- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] conn(0x7fc8a00816c0 msgr2=0x7fc8a0083b00 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.520 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.515+0000 7fc8bd7fa640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:6818/1701239335,v1:192.168.123.101:6819/1701239335] conn(0x7fc8a00816c0 0x7fc8a0083b00 crc :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.520 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.515+0000 7fc8bd7fa640 1 -- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fc8a00777d0 msgr2=0x7fc8a0079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.520 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.515+0000 7fc8bd7fa640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fc8a00777d0 0x7fc8a0079c90 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fc8c4002410 tx=0x7fc8c403a040 comp rx=0 tx=0).stop 2026-03-09T15:55:18.520 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.515+0000 7fc8bd7fa640 1 -- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fc8d0114b70 msgr2=0x7fc8d0126a30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.520 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.515+0000 7fc8bd7fa640 1 --2- 192.168.123.101:0/1747309748 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fc8d0114b70 0x7fc8d0126a30 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7fc8c0004830 tx=0x7fc8c000d4a0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.519+0000 7fc8ced76640 1 -- 192.168.123.101:0/1747309748 reap_dead start 2026-03-09T15:55:18.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.519+0000 7f4645525640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f461c077800 0x7f461c079cc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.519+0000 7f4645525640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f461c077800 0x7f461c079cc0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f4630009bc0 tx=0x7f4630009340 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.519+0000 7f462e7fc640 1 -- 192.168.123.101:0/2073486235 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f463409a230 con 0x7f464010a850 2026-03-09T15:55:18.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.527+0000 7f460ffff640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] 
conn(0x7f4608001650 0x7f4608003b10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:18.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.527+0000 7f4645d26640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] conn(0x7f4608001650 0x7f4608003b10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:18.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.527+0000 7f4645d26640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] conn(0x7f4608001650 0x7f4608003b10 crc :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:18.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.527+0000 7f460ffff640 1 -- 192.168.123.101:0/2073486235 --> [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f4608006c00 con 0x7f4608001650 2026-03-09T15:55:18.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.519+0000 7fc8bd7fa640 1 -- 192.168.123.101:0/1747309748 shutdown_connections 2026-03-09T15:55:18.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.519+0000 7fc8bd7fa640 1 -- 192.168.123.101:0/1747309748 >> 192.168.123.101:0/1747309748 conn(0x7fc8d006de50 msgr2=0x7fc8d0071400 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.519+0000 7fc8bd7fa640 1 -- 192.168.123.101:0/1747309748 shutdown_connections 2026-03-09T15:55:18.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.519+0000 7fc8bd7fa640 1 -- 192.168.123.101:0/1747309748 wait complete. 
2026-03-09T15:55:18.551 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.531+0000 7f462e7fc640 1 -- 192.168.123.101:0/2073486235 <== osd.0 v2:192.168.123.101:6802/115286186 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f4608006c00 con 0x7f4608001650 2026-03-09T15:55:18.572 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.555+0000 7f460ffff640 1 -- 192.168.123.101:0/2073486235 --> [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f4608005ce0 con 0x7f4608001650 2026-03-09T15:55:18.572 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.567+0000 7f462e7fc640 1 -- 192.168.123.101:0/2073486235 <== osd.0 v2:192.168.123.101:6802/115286186 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f4608005ce0 con 0x7f4608001650 2026-03-09T15:55:18.572 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.567+0000 7f460ffff640 1 -- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] conn(0x7f4608001650 msgr2=0x7f4608003b10 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.572 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.567+0000 7f460ffff640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:6802/115286186,v1:192.168.123.101:6803/115286186] conn(0x7f4608001650 0x7f4608003b10 crc :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.572 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.567+0000 7f460ffff640 1 -- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f461c077800 msgr2=0x7f461c079cc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.572 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.567+0000 7f460ffff640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f461c077800 0x7f461c079cc0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f4630009bc0 tx=0x7f4630009340 comp rx=0 tx=0).stop 2026-03-09T15:55:18.572 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.567+0000 7f460ffff640 1 -- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f464010a850 msgr2=0x7f4640112a10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:18.572 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.567+0000 7f460ffff640 1 --2- 192.168.123.101:0/2073486235 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f464010a850 0x7f4640112a10 secure :-1 s=READY pgs=165 cs=0 l=1 rev1=1 crypto rx=0x7f463400b810 tx=0x7f463400bce0 comp rx=0 tx=0).stop 2026-03-09T15:55:18.572 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.567+0000 7f4645d26640 1 -- 192.168.123.101:0/2073486235 reap_dead start 2026-03-09T15:55:18.575 INFO:teuthology.orchestra.run.vm01.stdout:214748364818 2026-03-09T15:55:18.575 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd last-stat-seq osd.7 2026-03-09T15:55:18.582 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.567+0000 7f460ffff640 1 -- 192.168.123.101:0/2073486235 shutdown_connections 2026-03-09T15:55:18.583 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.567+0000 7f460ffff640 1 -- 
192.168.123.101:0/2073486235 >> 192.168.123.101:0/2073486235 conn(0x7f464006d9f0 msgr2=0x7f464011cca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:18.583 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.571+0000 7f460ffff640 1 -- 192.168.123.101:0/2073486235 shutdown_connections 2026-03-09T15:55:18.583 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:18.571+0000 7f460ffff640 1 -- 192.168.123.101:0/2073486235 wait complete. 2026-03-09T15:55:18.776 INFO:teuthology.orchestra.run.vm01.stdout:133143986217 2026-03-09T15:55:18.776 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd last-stat-seq osd.4 2026-03-09T15:55:18.809 INFO:teuthology.orchestra.run.vm01.stdout:77309411380 2026-03-09T15:55:18.810 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd last-stat-seq osd.2 2026-03-09T15:55:18.835 INFO:teuthology.orchestra.run.vm01.stdout:55834574907 2026-03-09T15:55:18.836 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd last-stat-seq osd.1 2026-03-09T15:55:18.843 INFO:teuthology.orchestra.run.vm01.stdout:111669149742 2026-03-09T15:55:18.844 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd last-stat-seq osd.3 2026-03-09T15:55:18.844 INFO:teuthology.orchestra.run.vm01.stdout:34359738434 2026-03-09T15:55:18.844 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph osd last-stat-seq osd.0 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: cluster 2026-03-09T15:55:18.644329+0000 mgr.y (mgr.14520) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: cluster 2026-03-09T15:55:18.644329+0000 mgr.y (mgr.14520) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.474967+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.474967+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.483056+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.483056+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.682136+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.682136+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.687877+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.687877+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.688707+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.688707+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.689263+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.689263+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.693376+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:19 vm01 bash[28152]: audit 2026-03-09T15:55:19.693376+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: cluster 2026-03-09T15:55:18.644329+0000 mgr.y (mgr.14520) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: cluster 2026-03-09T15:55:18.644329+0000 mgr.y (mgr.14520) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.474967+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.474967+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.483056+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.483056+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.682136+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.682136+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.687877+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.687877+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.688707+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.688707+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.689263+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.689263+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.693376+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:19 vm01 bash[20728]: audit 2026-03-09T15:55:19.693376+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: cluster 2026-03-09T15:55:18.644329+0000 mgr.y (mgr.14520) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: cluster 2026-03-09T15:55:18.644329+0000 mgr.y (mgr.14520) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.474967+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.474967+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.483056+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.483056+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.682136+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.682136+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.687877+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.687877+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.688707+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.688707+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.689263+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.689263+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.693376+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:19 vm09 bash[22983]: audit 2026-03-09T15:55:19.693376+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:55:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:21 vm01 bash[28152]: cluster 
2026-03-09T15:55:20.644820+0000 mgr.y (mgr.14520) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:22.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:21 vm01 bash[28152]: cluster 2026-03-09T15:55:20.644820+0000 mgr.y (mgr.14520) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:22.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:21 vm01 bash[20728]: cluster 2026-03-09T15:55:20.644820+0000 mgr.y (mgr.14520) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:22.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:21 vm01 bash[20728]: cluster 2026-03-09T15:55:20.644820+0000 mgr.y (mgr.14520) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:21 vm09 bash[22983]: cluster 2026-03-09T15:55:20.644820+0000 mgr.y (mgr.14520) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:21 vm09 bash[22983]: cluster 2026-03-09T15:55:20.644820+0000 mgr.y (mgr.14520) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:22.679 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 15:55:22 vm01 bash[56700]: ts=2026-03-09T15:55:22.322Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002982636s 2026-03-09T15:55:23.168 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:23.168 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:23.176 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:23.176 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:23.176 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:23.177 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:23.178 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:55:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:55:23.180 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:23.180 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:23.508 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 -- 192.168.123.101:0/2975569967 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa06410a470 msgr2=0x7fa0641114d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.508 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 --2- 192.168.123.101:0/2975569967 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa06410a470 0x7fa0641114d0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7fa05c00b0a0 tx=0x7fa05c02f430 comp rx=0 tx=0).stop 2026-03-09T15:55:23.508 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 -- 192.168.123.101:0/2975569967 shutdown_connections 2026-03-09T15:55:23.508 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 --2- 192.168.123.101:0/2975569967 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa06411c780 0x7fa06411eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.508 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 --2- 192.168.123.101:0/2975569967 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa06410a850 0x7fa06410acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.508 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 --2- 192.168.123.101:0/2975569967 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa06410a470 0x7fa0641114d0 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.508 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 -- 192.168.123.101:0/2975569967 >> 192.168.123.101:0/2975569967 conn(0x7fa06406d9f0 msgr2=0x7fa06406de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 -- 192.168.123.101:0/2975569967 shutdown_connections 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 -- 192.168.123.101:0/2975569967 wait complete. 
2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 Processor -- start 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 -- start start 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa06410a470 0x7fa064119b60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa06410a850 0x7fa06411a0a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa06411c780 0x7fa064112d10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fa0641211f0 con 0x7fa06410a470 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7fa064121070 con 0x7fa06410a850 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7fa064121370 con 0x7fa06411c780 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa0691ae640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa06411c780 0x7fa064112d10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa0691ae640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa06411c780 0x7fa064112d10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:54670/0 (socket says 192.168.123.101:54670) 2026-03-09T15:55:23.509 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa0691ae640 1 -- 192.168.123.101:0/2925051888 learned_addr learned my addr 192.168.123.101:0/2925051888 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:23.510 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa0691ae640 1 -- 192.168.123.101:0/2925051888 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa06410a850 msgr2=0x7fa06411a0a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.510 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa0689ad640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa06410a470 0x7fa064119b60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.510 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa063fff640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa06410a850 0x7fa06411a0a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.510 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa0691ae640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa06410a850 0x7fa06411a0a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.510 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa0691ae640 1 -- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa06410a470 msgr2=0x7fa064119b60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.510 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa0691ae640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa06410a470 0x7fa064119b60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.510 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa0691ae640 1 -- 192.168.123.101:0/2925051888 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa0641135d0 con 0x7fa06411c780 2026-03-09T15:55:23.510 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa0689ad640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa06410a470 0x7fa064119b60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:55:23.510 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa0691ae640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa06411c780 0x7fa064112d10 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7fa05800d950 tx=0x7fa05800de20 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.510 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa061ffb640 1 -- 192.168.123.101:0/2925051888 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa058014070 con 0x7fa06411c780 2026-03-09T15:55:23.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 -- 192.168.123.101:0/2925051888 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa0641138c0 con 0x7fa06411c780 2026-03-09T15:55:23.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.507+0000 7fa06ac38640 1 -- 192.168.123.101:0/2925051888 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fa0641178b0 con 0x7fa06411c780 2026-03-09T15:55:23.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.511+0000 7fa061ffb640 1 -- 192.168.123.101:0/2925051888 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fa05800bd40 con 0x7fa06411c780 2026-03-09T15:55:23.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.511+0000 7fa061ffb640 1 -- 192.168.123.101:0/2925051888 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa058004db0 con 0x7fa06411c780 2026-03-09T15:55:23.513 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.511+0000 7fa061ffb640 1 -- 192.168.123.101:0/2925051888 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fa058020020 con 0x7fa06411c780 2026-03-09T15:55:23.516 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7fa061ffb640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fa048077800 0x7fa048079cc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.516 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 -- 192.168.123.101:0/4221134094 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3a9c10a470 msgr2=0x7f3a9c1114d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.516 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 --2- 192.168.123.101:0/4221134094 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3a9c10a470 0x7f3a9c1114d0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f3a9001adc0 tx=0x7f3a900401c0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.517 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 -- 192.168.123.101:0/4221134094 shutdown_connections 2026-03-09T15:55:23.517 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 --2- 192.168.123.101:0/4221134094 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a9c11c780 0x7f3a9c11eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.517 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 
1 --2- 192.168.123.101:0/4221134094 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3a9c10a850 0x7f3a9c10acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.517 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 --2- 192.168.123.101:0/4221134094 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3a9c10a470 0x7f3a9c1114d0 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.517 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 -- 192.168.123.101:0/4221134094 >> 192.168.123.101:0/4221134094 conn(0x7f3a9c06d9f0 msgr2=0x7f3a9c06de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:23.517 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 -- 192.168.123.101:0/4221134094 shutdown_connections 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 -- 192.168.123.101:0/4221134094 wait complete. 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 Processor -- start 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 -- start start 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3a9c10a470 0x7f3a9c11c010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a9c10a850 0x7f3a9c1170a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3a9c11c780 0x7f3a9c1175e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f3a9c1139c0 con 0x7f3a9c10a470 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f3a9c113840 con 0x7f3a9c10a850 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f3a9c113b40 con 0x7f3a9c11c780 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa0ecb640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3a9c11c780 0x7f3a9c1175e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa0ecb640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3a9c11c780 0x7f3a9c1175e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:54682/0 (socket says 192.168.123.101:54682) 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa0ecb640 1 -- 192.168.123.101:0/220951136 learned_addr learned my addr 192.168.123.101:0/220951136 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3a9b7fe640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a9c10a850 0x7f3a9c1170a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa0ecb640 1 -- 192.168.123.101:0/220951136 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a9c10a850 msgr2=0x7f3a9c1170a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa0ecb640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a9c10a850 0x7f3a9c1170a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa0ecb640 1 -- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3a9c10a470 msgr2=0x7f3a9c11c010 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3a9bfff640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3a9c10a470 0x7f3a9c11c010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa0ecb640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3a9c10a470 0x7f3a9c11c010 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa0ecb640 1 -- 192.168.123.101:0/220951136 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3a9c117e70 con 0x7f3a9c11c780 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa0ecb640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3a9c11c780 0x7f3a9c1175e0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f3a8c00c970 tx=0x7f3a8c00ce40 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3a997fa640 1 -- 192.168.123.101:0/220951136 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3a8c007bf0 con 0x7f3a9c11c780 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 -- 192.168.123.101:0/220951136 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3a9c1ad2c0 con 
0x7f3a9c11c780 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7f3aa2955640 1 -- 192.168.123.101:0/220951136 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f3a9c1ad800 con 0x7f3a9c11c780 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.519+0000 7f3a997fa640 1 -- 192.168.123.101:0/220951136 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3a8c007d90 con 0x7f3a9c11c780 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.519+0000 7f3a997fa640 1 -- 192.168.123.101:0/220951136 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3a8c0056c0 con 0x7f3a9c11c780 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.519+0000 7f3a997fa640 1 -- 192.168.123.101:0/220951136 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f3a8c020020 con 0x7f3a9c11c780 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.519+0000 7f3a997fa640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3a740777d0 0x7f3a74079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.519+0000 7f3a997fa640 1 -- 192.168.123.101:0/220951136 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f3a8c09a670 con 0x7f3a9c11c780 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.519+0000 7f3aa2955640 1 -- 192.168.123.101:0/220951136 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3a68005180 con 0x7f3a9c11c780 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7fa0689ad640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fa048077800 0x7fa048079cc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.521 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7fa061ffb640 1 -- 192.168.123.101:0/2925051888 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7fa05809a7d0 con 0x7fa06411c780 2026-03-09T15:55:23.544 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.523+0000 7f3a997fa640 1 -- 192.168.123.101:0/220951136 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3a8c014030 con 0x7f3a9c11c780 2026-03-09T15:55:23.544 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.527+0000 7f3a9bfff640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3a740777d0 0x7f3a74079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.544 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.515+0000 7fa0689ad640 1 --2- 192.168.123.101:0/2925051888 >> 
[v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fa048077800 0x7fa048079cc0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fa05c0062a0 tx=0x7fa05c002750 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.544 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.523+0000 7fa06ac38640 1 -- 192.168.123.101:0/2925051888 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa02c005180 con 0x7fa06411c780 2026-03-09T15:55:23.544 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.527+0000 7fa061ffb640 1 -- 192.168.123.101:0/2925051888 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa058067170 con 0x7fa06411c780 2026-03-09T15:55:23.553 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.543+0000 7f3a9bfff640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3a740777d0 0x7f3a74079c90 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f3a900406d0 tx=0x7f3a900023d0 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.746 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.743+0000 7fb3aee42640 1 -- 192.168.123.101:0/1914354529 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3a00a50a0 msgr2=0x7fb3a00a5480 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.746 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.743+0000 7fb3aee42640 1 --2- 192.168.123.101:0/1914354529 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3a00a50a0 0x7fb3a00a5480 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fb3a400c550 tx=0x7fb3a4030900 comp rx=0 tx=0).stop 2026-03-09T15:55:23.751 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 -- 192.168.123.101:0/1914354529 shutdown_connections 2026-03-09T15:55:23.751 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 --2- 192.168.123.101:0/1914354529 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3a00aa1d0 0x7fb3a00b1d50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.751 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 --2- 192.168.123.101:0/1914354529 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3a00a5a50 0x7fb3a00a9aa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.751 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 --2- 192.168.123.101:0/1914354529 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3a00a50a0 0x7fb3a00a5480 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.752 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 -- 192.168.123.101:0/1914354529 >> 192.168.123.101:0/1914354529 conn(0x7fb3a001ab90 msgr2=0x7fb3a001afa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:23.752 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 -- 192.168.123.101:0/1914354529 shutdown_connections 2026-03-09T15:55:23.752 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 -- 192.168.123.101:0/1914354529 wait complete. 2026-03-09T15:55:23.753 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 Processor -- start 2026-03-09T15:55:23.753 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 -- start start 2026-03-09T15:55:23.753 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3a00a50a0 0x7fb3a0141370 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.753 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3a00a5a50 0x7fb3a01418b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.753 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3a00aa1d0 0x7fb3a013c510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.753 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fb3a00b40d0 con 0x7fb3a00a5a50 2026-03-09T15:55:23.753 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7fb3a00b3f50 con 0x7fb3a00a50a0 2026-03-09T15:55:23.753 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3aee42640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7fb3a00b4250 con 0x7fb3a00aa1d0 2026-03-09T15:55:23.762 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3ae641640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3a00aa1d0 0x7fb3a013c510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.762 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3ae641640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3a00aa1d0 0x7fb3a013c510 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:54698/0 (socket says 192.168.123.101:54698) 2026-03-09T15:55:23.763 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3ae641640 1 -- 192.168.123.101:0/2463022338 learned_addr learned my addr 192.168.123.101:0/2463022338 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:23.763 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.751+0000 7fb3ae641640 1 -- 192.168.123.101:0/2463022338 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3a00a50a0 msgr2=0x7fb3a0141370 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:55:23.763 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.759+0000 7fb3ad63f640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3a00a5a50 0x7fb3a01418b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.763 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.759+0000 7fb3ade40640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3a00a50a0 0x7fb3a0141370 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.763 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.759+0000 7fb3ae641640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3a00a50a0 0x7fb3a0141370 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.763 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.759+0000 7fb3ae641640 1 -- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3a00a5a50 msgr2=0x7fb3a01418b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.763 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.759+0000 7fb3ae641640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3a00a5a50 0x7fb3a01418b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.763 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.759+0000 7fb3ae641640 1 -- 192.168.123.101:0/2463022338 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb3a013cdd0 con 0x7fb3a00aa1d0 2026-03-09T15:55:23.763 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.759+0000 7fb3ad63f640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3a00a5a50 0x7fb3a01418b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:55:23.776 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.759+0000 7fb3ae641640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3a00aa1d0 0x7fb3a013c510 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7fb39c00b520 tx=0x7fb39c00b9f0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.776 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.771+0000 7fb396ffd640 1 -- 192.168.123.101:0/2463022338 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb39c013020 con 0x7fb3a00aa1d0 2026-03-09T15:55:23.776 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.771+0000 7fb3aee42640 1 -- 192.168.123.101:0/2463022338 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fb3a013d0c0 con 0x7fb3a00aa1d0 2026-03-09T15:55:23.776 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.771+0000 7fb3aee42640 1 -- 192.168.123.101:0/2463022338 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fb3a0147260 con 0x7fb3a00aa1d0 2026-03-09T15:55:23.776 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.771+0000 7fb3aee42640 1 -- 192.168.123.101:0/2463022338 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb3a00a6580 con 0x7fb3a00aa1d0 2026-03-09T15:55:23.776 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.775+0000 7fb396ffd640 1 -- 192.168.123.101:0/2463022338 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fb39c004480 con 0x7fb3a00aa1d0 2026-03-09T15:55:23.776 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.775+0000 7fb396ffd640 1 -- 192.168.123.101:0/2463022338 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb39c00f980 con 0x7fb3a00aa1d0 2026-03-09T15:55:23.776 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.775+0000 7fb396ffd640 1 -- 192.168.123.101:0/2463022338 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fb39c020020 con 0x7fb3a00aa1d0 2026-03-09T15:55:23.776 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.775+0000 7fb396ffd640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb370077800 0x7fb370079cc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.777 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.775+0000 7fb3ade40640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb370077800 0x7fb370079cc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.777 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.775+0000 7fb396ffd640 1 -- 192.168.123.101:0/2463022338 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7fb39c09aa10 con 0x7fb3a00aa1d0 2026-03-09T15:55:23.777 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.775+0000 7fb396ffd640 1 -- 192.168.123.101:0/2463022338 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 
72+0+195034 (secure 0 0 0) 0x7fb39c0caa90 con 0x7fb3a00aa1d0 2026-03-09T15:55:23.779 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.775+0000 7f2f6cd4f640 1 -- 192.168.123.101:0/1520808261 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2f6810b080 msgr2=0x7f2f681134e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.779 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.775+0000 7f2f6cd4f640 1 --2- 192.168.123.101:0/1520808261 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2f6810b080 0x7f2f681134e0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f2f5800c2f0 tx=0x7f2f58030690 comp rx=0 tx=0).stop 2026-03-09T15:55:23.779 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 -- 192.168.123.101:0/1520808261 shutdown_connections 2026-03-09T15:55:23.779 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 --2- 192.168.123.101:0/1520808261 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2f68113b50 0x7f2f68115fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.779 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 --2- 192.168.123.101:0/1520808261 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2f6810b080 0x7f2f681134e0 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.779 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 --2- 192.168.123.101:0/1520808261 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2f6810a6d0 0x7f2f6810aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.779 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 -- 192.168.123.101:0/1520808261 >> 192.168.123.101:0/1520808261 conn(0x7f2f6806deb0 msgr2=0x7f2f6806e2c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:23.779 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 -- 192.168.123.101:0/1520808261 shutdown_connections 2026-03-09T15:55:23.781 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 -- 192.168.123.101:0/1520808261 wait complete. 
2026-03-09T15:55:23.781 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 Processor -- start 2026-03-09T15:55:23.781 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 -- start start 2026-03-09T15:55:23.781 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2f6810a6d0 0x7f2f68112c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.781 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2f68113b50 0x7f2f681131a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.781 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2f6810fe30 0x7f2f681102b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.781 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f2f681185d0 con 0x7f2f6810a6d0 2026-03-09T15:55:23.781 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f2f68118450 con 0x7f2f6810fe30 2026-03-09T15:55:23.781 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f6cd4f640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f2f68118750 con 0x7f2f68113b50 2026-03-09T15:55:23.782 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f66575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2f6810a6d0 0x7f2f68112c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.782 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/163217993 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7c00a50b0 msgr2=0x7fe7c00a5490 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.782 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7fe7ce7fd640 1 --2- 192.168.123.101:0/163217993 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7c00a50b0 0x7fe7c00a5490 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7fe7bc007570 tx=0x7fe7bc02ff10 comp rx=0 tx=0).stop 2026-03-09T15:55:23.782 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f66575640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2f6810a6d0 0x7f2f68112c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:58366/0 (socket says 192.168.123.101:58366) 2026-03-09T15:55:23.782 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 7f2f66575640 1 -- 192.168.123.101:0/3828871474 learned_addr learned my addr 192.168.123.101:0/3828871474 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:23.782 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.779+0000 
7fb3ade40640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb370077800 0x7fb370079cc0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fb3a4030e10 tx=0x7fb3a403c040 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.787 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.783+0000 7f2f66575640 1 -- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2f68113b50 msgr2=0x7f2f681131a0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:55:23.787 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.783+0000 7f2f66575640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2f68113b50 0x7f2f681131a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.787 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.783+0000 7f2f66575640 1 -- 192.168.123.101:0/3828871474 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2f6810fe30 msgr2=0x7f2f681102b0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:55:23.787 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.783+0000 7f2f66575640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2f6810fe30 0x7f2f681102b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.787 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.783+0000 7f2f66575640 1 -- 192.168.123.101:0/3828871474 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2f68110af0 con 0x7f2f6810a6d0 2026-03-09T15:55:23.787 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.783+0000 7f2f66575640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2f6810a6d0 0x7f2f68112c60 secure :-1 s=READY pgs=167 cs=0 l=1 rev1=1 crypto rx=0x7f2f5c00dab0 tx=0x7f2f5c00df80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.787 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.783+0000 7f2f577fe640 1 -- 192.168.123.101:0/3828871474 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2f5c014070 con 0x7f2f6810a6d0 2026-03-09T15:55:23.787 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.783+0000 7f2f6cd4f640 1 -- 192.168.123.101:0/3828871474 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f2f6811d840 con 0x7f2f6810a6d0 2026-03-09T15:55:23.787 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.783+0000 7f2f6cd4f640 1 -- 192.168.123.101:0/3828871474 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f2f6811dd20 con 0x7f2f6810a6d0 2026-03-09T15:55:23.794 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.791+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/163217993 shutdown_connections 2026-03-09T15:55:23.795 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.791+0000 7fe7ce7fd640 1 --2- 192.168.123.101:0/163217993 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fe7c00aa1e0 0x7fe7c00b1d60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.795 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.791+0000 7fe7ce7fd640 1 --2- 192.168.123.101:0/163217993 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7c00a5a60 0x7fe7c00a9ab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.795 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.791+0000 7fe7ce7fd640 1 --2- 192.168.123.101:0/163217993 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7c00a50b0 0x7fe7c00a5490 unknown :-1 s=CLOSED pgs=166 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.795 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.791+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/163217993 >> 192.168.123.101:0/163217993 conn(0x7fe7c001abc0 msgr2=0x7fe7c001afd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:23.795 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.787+0000 7f2f577fe640 1 -- 192.168.123.101:0/3828871474 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f2f5c004510 con 0x7f2f6810a6d0 2026-03-09T15:55:23.795 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.787+0000 7f2f577fe640 1 -- 192.168.123.101:0/3828871474 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2f5c002de0 con 0x7f2f6810a6d0 2026-03-09T15:55:23.795 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.787+0000 7f2f577fe640 1 -- 192.168.123.101:0/3828871474 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f2f5c021020 con 0x7f2f6810a6d0 2026-03-09T15:55:23.795 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.787+0000 7f2f577fe640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f2f40077800 0x7f2f40079cc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.795 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.787+0000 7f2f577fe640 1 -- 192.168.123.101:0/3828871474 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f2f5c09ad10 con 0x7f2f6810a6d0 2026-03-09T15:55:23.795 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.787+0000 7f2f6cd4f640 1 -- 192.168.123.101:0/3828871474 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2f2c005180 con 0x7f2f6810a6d0 2026-03-09T15:55:23.795 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.791+0000 7f2f65d74640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f2f40077800 0x7f2f40079cc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.799 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.795+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/163217993 shutdown_connections 2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/163217993 wait complete. 
2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ce7fd640 1 Processor -- start 2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ce7fd640 1 -- start start 2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ce7fd640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7c00a50b0 0x7fe7c013c670 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ce7fd640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7c00a5a60 0x7fe7c013cbb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ce7fd640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fe7c00aa1e0 0x7fe7c01597c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ce7fd640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fe7c00b40a0 con 0x7fe7c00a5a60 2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ce7fd640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7fe7c00b3f20 con 0x7fe7c00a50b0 2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ce7fd640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7fe7c00b4220 con 0x7fe7c00aa1e0 2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7cdffc640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fe7c00aa1e0 0x7fe7c01597c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7cdffc640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fe7c00aa1e0 0x7fe7c01597c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:54722/0 (socket says 192.168.123.101:54722) 2026-03-09T15:55:23.800 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7cdffc640 1 -- 192.168.123.101:0/1406944844 learned_addr learned my addr 192.168.123.101:0/1406944844 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:23.801 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7cd7fb640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7c00a50b0 0x7fe7c013c670 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.801 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ccffa640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7c00a5a60 0x7fe7c013cbb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T15:55:23.801 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ccffa640 1 -- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fe7c00aa1e0 msgr2=0x7fe7c01597c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.801 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ccffa640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fe7c00aa1e0 0x7fe7c01597c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.801 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ccffa640 1 -- 192.168.123.101:0/1406944844 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7c00a50b0 msgr2=0x7fe7c013c670 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.801 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ccffa640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7c00a50b0 0x7fe7c013c670 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.801 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ccffa640 1 -- 192.168.123.101:0/1406944844 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe7c0159e00 con 0x7fe7c00a5a60 2026-03-09T15:55:23.803 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ccffa640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7c00a5a60 0x7fe7c013cbb0 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7fe7b800e9f0 tx=0x7fe7b800eec0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.805 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7b67fc640 1 -- 192.168.123.101:0/1406944844 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe7b800cd80 con 0x7fe7c00a5a60 2026-03-09T15:55:23.805 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/1406944844 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe7c015a0f0 con 0x7fe7c00a5a60 2026-03-09T15:55:23.806 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/1406944844 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe7c015a630 con 0x7fe7c00a5a60 2026-03-09T15:55:23.806 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7b67fc640 1 -- 192.168.123.101:0/1406944844 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fe7b8004510 con 0x7fe7c00a5a60 2026-03-09T15:55:23.806 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.799+0000 7fe7b67fc640 1 -- 192.168.123.101:0/1406944844 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe7b8010690 con 0x7fe7c00a5a60 2026-03-09T15:55:23.806 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.795+0000 7f2f577fe640 1 -- 192.168.123.101:0/3828871474 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 
0x7f2f5c0676b0 con 0x7f2f6810a6d0 2026-03-09T15:55:23.806 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.803+0000 7f2f65d74640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f2f40077800 0x7f2f40079cc0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f2f681115f0 tx=0x7f2f58002750 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.807 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.803+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/1406944844 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe798005180 con 0x7fe7c00a5a60 2026-03-09T15:55:23.807 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.803+0000 7fe7b67fc640 1 -- 192.168.123.101:0/1406944844 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fe7b80040d0 con 0x7fe7c00a5a60 2026-03-09T15:55:23.807 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.803+0000 7fe7b67fc640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fe7a00777d0 0x7fe7a0079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.807 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.803+0000 7fe7b67fc640 1 -- 192.168.123.101:0/1406944844 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7fe7b809a580 con 0x7fe7c00a5a60 2026-03-09T15:55:23.812 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.811+0000 7fe7cd7fb640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fe7a00777d0 0x7fe7a0079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.813 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.811+0000 7f994ae14640 1 -- 192.168.123.101:0/1634100278 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f994410fbd0 msgr2=0x7f9944116570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.815 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.811+0000 7fe7cd7fb640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fe7a00777d0 0x7fe7a0079c90 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fe7bc030420 tx=0x7fe7bc032040 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.815 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.811+0000 7f994ae14640 1 --2- 192.168.123.101:0/1634100278 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f994410fbd0 0x7f9944116570 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f9938008290 tx=0x7f993802f780 comp rx=0 tx=0).stop 2026-03-09T15:55:23.815 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 -- 192.168.123.101:0/1634100278 shutdown_connections 2026-03-09T15:55:23.815 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 --2- 192.168.123.101:0/1634100278 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f994410fbd0 0x7f9944116570 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-09T15:55:23.815 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 --2- 192.168.123.101:0/1634100278 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f994410b080 0x7f994410f4c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.815 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 --2- 192.168.123.101:0/1634100278 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f994410a6d0 0x7f994410aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.815 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 -- 192.168.123.101:0/1634100278 >> 192.168.123.101:0/1634100278 conn(0x7f994406dfc0 msgr2=0x7f994406e3d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:23.815 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 -- 192.168.123.101:0/1634100278 shutdown_connections 2026-03-09T15:55:23.816 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 -- 192.168.123.101:0/1634100278 wait complete. 2026-03-09T15:55:23.816 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 Processor -- start 2026-03-09T15:55:23.820 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 -- start start 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f994410a6d0 0x7f99441a0990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f994410b080 0x7f99441a0ed0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f99441a1410 0x7f99441be000 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f99441188c0 con 0x7f994410b080 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f9944118740 con 0x7f994410a6d0 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.815+0000 7f994ae14640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f9944118a40 con 0x7f99441a1410 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.819+0000 7f9948b89640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f994410a6d0 0x7f99441a0990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.819+0000 7f9943fff640 1 --2- >> 
[v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f994410b080 0x7f99441a0ed0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.819+0000 7f9943fff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f994410b080 0x7f99441a0ed0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:58384/0 (socket says 192.168.123.101:58384) 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.819+0000 7f9943fff640 1 -- 192.168.123.101:0/2650669516 learned_addr learned my addr 192.168.123.101:0/2650669516 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.819+0000 7f994938a640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f99441a1410 0x7f99441be000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.821 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.819+0000 7fe7b67fc640 1 -- 192.168.123.101:0/1406944844 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe7b8066f20 con 0x7fe7c00a5a60 2026-03-09T15:55:23.823 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.819+0000 7f9948b89640 1 -- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f99441a1410 msgr2=0x7f99441be000 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.824 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.823+0000 7f9948b89640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f99441a1410 0x7f99441be000 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.824 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.823+0000 7f9948b89640 1 -- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f994410b080 msgr2=0x7f99441a0ed0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.824 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.823+0000 7f9948b89640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f994410b080 0x7f99441a0ed0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.824 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.823+0000 7f9948b89640 1 -- 192.168.123.101:0/2650669516 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f99441be6d0 con 0x7f994410a6d0 2026-03-09T15:55:23.824 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.823+0000 7f9943fff640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f994410b080 0x7f99441a0ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T15:55:23.824 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.823+0000 7f994938a640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f99441a1410 0x7f99441be000 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:55:23.824 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.823+0000 7f9948b89640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f994410a6d0 0x7f99441a0990 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f9930009860 tx=0x7f9930009d30 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.832 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.831+0000 7f9941ffb640 1 -- 192.168.123.101:0/2650669516 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f993000cb20 con 0x7f994410a6d0 2026-03-09T15:55:23.832 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.831+0000 7f994ae14640 1 -- 192.168.123.101:0/2650669516 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f99441be960 con 0x7f994410a6d0 2026-03-09T15:55:23.832 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.831+0000 7f994ae14640 1 -- 192.168.123.101:0/2650669516 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f99441beea0 con 0x7f994410a6d0 2026-03-09T15:55:23.832 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.831+0000 7f9941ffb640 1 -- 192.168.123.101:0/2650669516 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f9930004510 con 0x7f994410a6d0 2026-03-09T15:55:23.832 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.831+0000 7f9941ffb640 1 -- 192.168.123.101:0/2650669516 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9930002da0 con 0x7f994410a6d0 2026-03-09T15:55:23.832 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.831+0000 7f99177fe640 1 -- 192.168.123.101:0/2650669516 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f990c005180 con 0x7f994410a6d0 2026-03-09T15:55:23.840 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.839+0000 7f9941ffb640 1 -- 192.168.123.101:0/2650669516 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f9930028080 con 0x7f994410a6d0 2026-03-09T15:55:23.840 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.839+0000 7f9941ffb640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f99180777d0 0x7f9918079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.841 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.839+0000 7f9941ffb640 1 -- 192.168.123.101:0/2650669516 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f993009b120 con 0x7f994410a6d0 2026-03-09T15:55:23.842 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.839+0000 7f9943fff640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f99180777d0 0x7f9918079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.842 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.839+0000 7f9943fff640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f99180777d0 0x7f9918079c90 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f99441a2150 tx=0x7f993400f040 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.843 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.839+0000 7f9941ffb640 1 -- 192.168.123.101:0/2650669516 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f9930067ac0 con 0x7f994410a6d0 2026-03-09T15:55:23.896 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.895+0000 7f3aa2955640 1 -- 192.168.123.101:0/220951136 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 3} v 0) -- 0x7f3a68005470 con 0x7f3a9c11c780 2026-03-09T15:55:23.899 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.895+0000 7f3a997fa640 1 -- 192.168.123.101:0/220951136 <== mon.2 v2:192.168.123.101:3301/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 3}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7f3a8c067010 con 0x7f3a9c11c780 2026-03-09T15:55:23.899 INFO:teuthology.orchestra.run.vm01.stdout:111669149743 2026-03-09T15:55:23.900 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 -- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3a740777d0 msgr2=0x7f3a74079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.900 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3a740777d0 0x7f3a74079c90 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f3a900406d0 tx=0x7f3a900023d0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.900 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 -- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3a9c11c780 msgr2=0x7f3a9c1175e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.900 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3a9c11c780 0x7f3a9c1175e0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f3a8c00c970 tx=0x7f3a8c00ce40 comp rx=0 tx=0).stop 2026-03-09T15:55:23.900 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 -- 192.168.123.101:0/220951136 shutdown_connections 2026-03-09T15:55:23.900 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3a740777d0 0x7f3a74079c90 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.900 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3a9c11c780 0x7f3a9c1175e0 
unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.900 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a9c10a850 0x7f3a9c1170a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.901 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 --2- 192.168.123.101:0/220951136 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3a9c10a470 0x7f3a9c11c010 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.901 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 -- 192.168.123.101:0/220951136 >> 192.168.123.101:0/220951136 conn(0x7f3a9c06d9f0 msgr2=0x7f3a9c071400 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:23.901 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 -- 192.168.123.101:0/220951136 shutdown_connections 2026-03-09T15:55:23.901 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.899+0000 7f3a7effd640 1 -- 192.168.123.101:0/220951136 wait complete. 2026-03-09T15:55:23.920 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 -- 192.168.123.101:0/3206249750 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d4811c780 msgr2=0x7f4d4811eb70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.920 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 --2- 192.168.123.101:0/3206249750 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d4811c780 0x7f4d4811eb70 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f4d4000b0d0 tx=0x7f4d4002f450 comp rx=0 tx=0).stop 2026-03-09T15:55:23.920 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 -- 192.168.123.101:0/3206249750 shutdown_connections 2026-03-09T15:55:23.920 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 --2- 192.168.123.101:0/3206249750 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d4811c780 0x7f4d4811eb70 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.920 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 --2- 192.168.123.101:0/3206249750 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4d4810a850 0x7f4d4810acb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.920 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 --2- 192.168.123.101:0/3206249750 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d4810a470 0x7f4d481114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.920 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 -- 192.168.123.101:0/3206249750 >> 192.168.123.101:0/3206249750 conn(0x7f4d4806d9c0 msgr2=0x7f4d4806ddd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:23.920 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 -- 192.168.123.101:0/3206249750 shutdown_connections 2026-03-09T15:55:23.920 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 
7f4d4cf95640 1 -- 192.168.123.101:0/3206249750 wait complete. 2026-03-09T15:55:23.920 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 Processor -- start 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 -- start start 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4d4810a470 0x7f4d48119b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d4810a850 0x7f4d4811a060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d4811c780 0x7f4d48112bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f4d481211c0 con 0x7f4d4811c780 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f4d48121040 con 0x7f4d4810a470 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d4cf95640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f4d48121340 con 0x7f4d4810a850 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d46ffd640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d4810a850 0x7f4d4811a060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d46ffd640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d4810a850 0x7f4d4811a060 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:54750/0 (socket says 192.168.123.101:54750) 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d46ffd640 1 -- 192.168.123.101:0/2404491854 learned_addr learned my addr 192.168.123.101:0/2404491854 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d47fff640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d4811c780 0x7f4d48112bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.921 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d477fe640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4d4810a470 0x7f4d48119b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 
comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.924 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d46ffd640 1 -- 192.168.123.101:0/2404491854 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4d4810a470 msgr2=0x7f4d48119b20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.924 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d46ffd640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4d4810a470 0x7f4d48119b20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.924 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d46ffd640 1 -- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d4811c780 msgr2=0x7f4d48112bd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.924 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d46ffd640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d4811c780 0x7f4d48112bd0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.924 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.919+0000 7f4d46ffd640 1 -- 192.168.123.101:0/2404491854 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4d48113490 con 0x7f4d4810a850 2026-03-09T15:55:23.928 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.927+0000 7f4d46ffd640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d4810a850 0x7f4d4811a060 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f4d3000cce0 tx=0x7f4d30007590 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.928 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.927+0000 7f4d44ff9640 1 -- 192.168.123.101:0/2404491854 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4d300048a0 con 0x7f4d4810a850 2026-03-09T15:55:23.928 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.927+0000 7f4d4cf95640 1 -- 192.168.123.101:0/2404491854 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4d48113780 con 0x7f4d4810a850 2026-03-09T15:55:23.929 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.927+0000 7f4d4cf95640 1 -- 192.168.123.101:0/2404491854 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f4d48117820 con 0x7f4d4810a850 2026-03-09T15:55:23.929 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.927+0000 7f4d44ff9640 1 -- 192.168.123.101:0/2404491854 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f4d30005020 con 0x7f4d4810a850 2026-03-09T15:55:23.929 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.927+0000 7f4d44ff9640 1 -- 192.168.123.101:0/2404491854 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4d3000f670 con 0x7f4d4810a850 2026-03-09T15:55:23.940 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.939+0000 7f4d44ff9640 1 -- 192.168.123.101:0/2404491854 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 
0x7f4d30004a40 con 0x7f4d4810a850 2026-03-09T15:55:23.940 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.939+0000 7f4d44ff9640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f4d200777d0 0x7f4d20079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:23.940 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.939+0000 7f4d44ff9640 1 -- 192.168.123.101:0/2404491854 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7f4d30099b80 con 0x7f4d4810a850 2026-03-09T15:55:23.943 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.939+0000 7f4d477fe640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f4d200777d0 0x7f4d20079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:23.943 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.943+0000 7f4d477fe640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f4d200777d0 0x7f4d20079c90 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f4d3c005f10 tx=0x7f4d3c005ea0 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:23.947 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.943+0000 7f4d4cf95640 1 -- 192.168.123.101:0/2404491854 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4d14005180 con 0x7f4d4810a850 2026-03-09T15:55:23.954 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.951+0000 7f4d44ff9640 1 -- 192.168.123.101:0/2404491854 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f4d30014030 con 0x7f4d4810a850 2026-03-09T15:55:23.974 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.971+0000 7fa06ac38640 1 -- 192.168.123.101:0/2925051888 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) -- 0x7fa02c005470 con 0x7fa06411c780 2026-03-09T15:55:23.975 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.975+0000 7fa061ffb640 1 -- 192.168.123.101:0/2925051888 <== mon.2 v2:192.168.123.101:3301/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7fa05806c020 con 0x7fa06411c780 2026-03-09T15:55:23.975 INFO:teuthology.orchestra.run.vm01.stdout:77309411381 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 -- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fa048077800 msgr2=0x7fa048079cc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fa048077800 0x7fa048079cc0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fa05c0062a0 tx=0x7fa05c002750 comp rx=0 tx=0).stop 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 -- 
192.168.123.101:0/2925051888 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa06411c780 msgr2=0x7fa064112d10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa06411c780 0x7fa064112d10 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7fa05800d950 tx=0x7fa05800de20 comp rx=0 tx=0).stop 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 -- 192.168.123.101:0/2925051888 shutdown_connections 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fa048077800 0x7fa048079cc0 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fa06411c780 0x7fa064112d10 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa06410a850 0x7fa06411a0a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 --2- 192.168.123.101:0/2925051888 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fa06410a470 0x7fa064119b60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 -- 192.168.123.101:0/2925051888 >> 192.168.123.101:0/2925051888 conn(0x7fa06406d9f0 msgr2=0x7fa06411cf00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 -- 192.168.123.101:0/2925051888 shutdown_connections 2026-03-09T15:55:23.981 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:23.979+0000 7fa06ac38640 1 -- 192.168.123.101:0/2925051888 wait complete. 
2026-03-09T15:55:24.025 INFO:tasks.cephadm.ceph_manager.ceph:need seq 111669149742 got 111669149743 for osd.3 2026-03-09T15:55:24.025 DEBUG:teuthology.parallel:result is None 2026-03-09T15:55:24.044 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.043+0000 7fb3c0b10640 1 -- 192.168.123.101:0/606165699 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3bc113b00 msgr2=0x7fb3bc115f90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.044 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.043+0000 7fb3c0b10640 1 --2- 192.168.123.101:0/606165699 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3bc113b00 0x7fb3bc115f90 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fb3b00077d0 tx=0x7fb3b00301e0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.045 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.043+0000 7fb3c0b10640 1 -- 192.168.123.101:0/606165699 shutdown_connections 2026-03-09T15:55:24.045 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.043+0000 7fb3c0b10640 1 --2- 192.168.123.101:0/606165699 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3bc113b00 0x7fb3bc115f90 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.045 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.043+0000 7fb3c0b10640 1 --2- 192.168.123.101:0/606165699 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3bc10aff0 0x7fb3bc1134e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.045 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.043+0000 7fb3c0b10640 1 --2- 192.168.123.101:0/606165699 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3bc10a6d0 0x7fb3bc10aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.045 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.043+0000 7fb3c0b10640 1 -- 192.168.123.101:0/606165699 >> 192.168.123.101:0/606165699 conn(0x7fb3bc06ee30 msgr2=0x7fb3bc06f240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:24.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3c0b10640 1 -- 192.168.123.101:0/606165699 shutdown_connections 2026-03-09T15:55:24.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3c0b10640 1 -- 192.168.123.101:0/606165699 wait complete. 
2026-03-09T15:55:24.051 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3c0b10640 1 Processor -- start 2026-03-09T15:55:24.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3c0b10640 1 -- start start 2026-03-09T15:55:24.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3c0b10640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3bc10a6d0 0x7fb3bc1a5650 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:24.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3c0b10640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3bc10aff0 0x7fb3bc1a5b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:24.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3c0b10640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3bc1a0770 0x7fb3bc1a0c00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:24.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3c0b10640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fb3bc118790 con 0x7fb3bc10a6d0 2026-03-09T15:55:24.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3c0b10640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7fb3bc118610 con 0x7fb3bc10aff0 2026-03-09T15:55:24.052 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3c0b10640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7fb3bc118910 con 0x7fb3bc1a0770 2026-03-09T15:55:24.053 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3bb7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3bc10a6d0 0x7fb3bc1a5650 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:24.053 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3baffd640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3bc10aff0 0x7fb3bc1a5b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:24.053 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3bb7fe640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3bc10a6d0 0x7fb3bc1a5650 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:58416/0 (socket says 192.168.123.101:58416) 2026-03-09T15:55:24.053 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3bb7fe640 1 -- 192.168.123.101:0/1541889008 learned_addr learned my addr 192.168.123.101:0/1541889008 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:24.053 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.051+0000 7fb3baffd640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3bc10aff0 0x7fb3bc1a5b90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:46876/0 (socket says 
192.168.123.101:46876) 2026-03-09T15:55:24.055 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.055+0000 7fb3baffd640 1 -- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3bc1a0770 msgr2=0x7fb3bc1a0c00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.055 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.055+0000 7fb3bbfff640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3bc1a0770 0x7fb3bc1a0c00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:24.055 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.055+0000 7fb3baffd640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3bc1a0770 0x7fb3bc1a0c00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.055 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.055+0000 7fb3baffd640 1 -- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3bc10a6d0 msgr2=0x7fb3bc1a5650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.055 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.055+0000 7fb3baffd640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3bc10a6d0 0x7fb3bc1a5650 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.055 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.055+0000 7fb3baffd640 1 -- 192.168.123.101:0/1541889008 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb3bc1a14c0 con 0x7fb3bc10aff0 2026-03-09T15:55:24.055 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.055+0000 7fb3bb7fe640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3bc10a6d0 0x7fb3bc1a5650 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T15:55:24.055 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.055+0000 7fb3bbfff640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3bc1a0770 0x7fb3bc1a0c00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:55:24.057 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.055+0000 7fb3baffd640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3bc10aff0 0x7fb3bc1a5b90 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fb3ac00bd70 tx=0x7fb3ac00be70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:24.058 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.055+0000 7fb3b8ff9640 1 -- 192.168.123.101:0/1541889008 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb3ac002a60 con 0x7fb3bc10aff0 2026-03-09T15:55:24.060 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.055+0000 7fb3c0b10640 1 -- 192.168.123.101:0/1541889008 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fb3bc1ab3d0 con 0x7fb3bc10aff0 2026-03-09T15:55:24.060 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.059+0000 7fb3c0b10640 1 -- 192.168.123.101:0/1541889008 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fb3bc1ab8e0 con 0x7fb3bc10aff0 2026-03-09T15:55:24.060 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.059+0000 7fb3b8ff9640 1 -- 192.168.123.101:0/1541889008 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fb3ac010070 con 0x7fb3bc10aff0 2026-03-09T15:55:24.060 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.059+0000 7fb3b8ff9640 1 -- 192.168.123.101:0/1541889008 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb3ac015470 con 0x7fb3bc10aff0 2026-03-09T15:55:24.061 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.059+0000 7fb3c0b10640 1 -- 192.168.123.101:0/1541889008 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb384005180 con 0x7fb3bc10aff0 2026-03-09T15:55:24.076 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.075+0000 7fb3b8ff9640 1 -- 192.168.123.101:0/1541889008 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fb3ac0040b0 con 0x7fb3bc10aff0 2026-03-09T15:55:24.076 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.075+0000 7fb3b8ff9640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb380077750 0x7fb380079c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:24.076 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.075+0000 7fb3b8ff9640 1 -- 192.168.123.101:0/1541889008 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7fb3ac09a010 con 0x7fb3bc10aff0 2026-03-09T15:55:24.076 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.075+0000 7fb3b8ff9640 1 -- 192.168.123.101:0/1541889008 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fb3ac09a410 con 0x7fb3bc10aff0 2026-03-09T15:55:24.076 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.075+0000 7fb3bb7fe640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb380077750 0x7fb380079c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:24.077 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.075+0000 7fb3bb7fe640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb380077750 0x7fb380079c10 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fb3a40046d0 tx=0x7fb3a4004050 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:24.143 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411380 got 77309411381 for osd.2 2026-03-09T15:55:24.143 DEBUG:teuthology.parallel:result is None 2026-03-09T15:55:24.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:23 vm01 bash[28152]: cluster 2026-03-09T15:55:22.645164+0000 mgr.y (mgr.14520) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:24.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:23 vm01 bash[28152]: cluster 2026-03-09T15:55:22.645164+0000 mgr.y (mgr.14520) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:24.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:23 vm01 bash[28152]: audit 2026-03-09T15:55:23.898473+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 192.168.123.101:0/220951136' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T15:55:24.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:23 vm01 bash[28152]: audit 2026-03-09T15:55:23.898473+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 192.168.123.101:0/220951136' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T15:55:24.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:23 vm01 bash[20728]: cluster 2026-03-09T15:55:22.645164+0000 mgr.y (mgr.14520) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:24.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:23 vm01 bash[20728]: cluster 2026-03-09T15:55:22.645164+0000 mgr.y (mgr.14520) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:24.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:23 vm01 bash[20728]: audit 2026-03-09T15:55:23.898473+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 192.168.123.101:0/220951136' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T15:55:24.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:23 vm01 bash[20728]: audit 2026-03-09T15:55:23.898473+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 
192.168.123.101:0/220951136' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T15:55:24.192 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.191+0000 7f2f6cd4f640 1 -- 192.168.123.101:0/3828871474 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 6} v 0) -- 0x7f2f2c005470 con 0x7f2f6810a6d0 2026-03-09T15:55:24.204 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.203+0000 7f2f577fe640 1 -- 192.168.123.101:0/3828871474 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 6}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7f2f5c06c560 con 0x7f2f6810a6d0 2026-03-09T15:55:24.204 INFO:teuthology.orchestra.run.vm01.stdout:184683593754 2026-03-09T15:55:24.207 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 -- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f2f40077800 msgr2=0x7f2f40079cc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.207 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f2f40077800 0x7f2f40079cc0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f2f681115f0 tx=0x7f2f58002750 comp rx=0 tx=0).stop 2026-03-09T15:55:24.207 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 -- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2f6810a6d0 msgr2=0x7f2f68112c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.207 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2f6810a6d0 0x7f2f68112c60 secure :-1 s=READY pgs=167 cs=0 l=1 rev1=1 crypto rx=0x7f2f5c00dab0 tx=0x7f2f5c00df80 comp rx=0 tx=0).stop 2026-03-09T15:55:24.207 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 -- 192.168.123.101:0/3828871474 shutdown_connections 2026-03-09T15:55:24.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f2f40077800 0x7f2f40079cc0 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2f6810fe30 0x7f2f681102b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f2f68113b50 0x7f2f681131a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 --2- 192.168.123.101:0/3828871474 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f2f6810a6d0 0x7f2f68112c60 unknown :-1 s=CLOSED pgs=167 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T15:55:24.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 -- 192.168.123.101:0/3828871474 >> 192.168.123.101:0/3828871474 conn(0x7f2f6806deb0 msgr2=0x7f2f68119420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:24.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 -- 192.168.123.101:0/3828871474 shutdown_connections 2026-03-09T15:55:24.208 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7f2f557fa640 1 -- 192.168.123.101:0/3828871474 wait complete. 2026-03-09T15:55:24.211 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.207+0000 7fb3aee42640 1 -- 192.168.123.101:0/2463022338 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) -- 0x7fb3a00b4870 con 0x7fb3a00aa1d0 2026-03-09T15:55:24.222 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.219+0000 7fb396ffd640 1 -- 192.168.123.101:0/2463022338 <== mon.2 v2:192.168.123.101:3301/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7fb3a00b4870 con 0x7fb3a00aa1d0 2026-03-09T15:55:24.222 INFO:teuthology.orchestra.run.vm01.stdout:55834574908 2026-03-09T15:55:24.234 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 -- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb370077800 msgr2=0x7fb370079cc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.234 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb370077800 0x7fb370079cc0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fb3a4030e10 tx=0x7fb3a403c040 comp rx=0 tx=0).stop 2026-03-09T15:55:24.234 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 -- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3a00aa1d0 msgr2=0x7fb3a013c510 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.234 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3a00aa1d0 0x7fb3a013c510 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7fb39c00b520 tx=0x7fb39c00b9f0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.235 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 -- 192.168.123.101:0/2463022338 shutdown_connections 2026-03-09T15:55:24.235 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb370077800 0x7fb370079cc0 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.235 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3a00aa1d0 0x7fb3a013c510 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.235 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 --2- 192.168.123.101:0/2463022338 >> 
[v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3a00a5a50 0x7fb3a01418b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.235 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 --2- 192.168.123.101:0/2463022338 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3a00a50a0 0x7fb3a0141370 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.235 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 -- 192.168.123.101:0/2463022338 >> 192.168.123.101:0/2463022338 conn(0x7fb3a001ab90 msgr2=0x7fb3a00a29f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:24.235 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 -- 192.168.123.101:0/2463022338 shutdown_connections 2026-03-09T15:55:24.235 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.231+0000 7fb394ff9640 1 -- 192.168.123.101:0/2463022338 wait complete. 2026-03-09T15:55:24.291 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.287+0000 7f99177fe640 1 -- 192.168.123.101:0/2650669516 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 7} v 0) -- 0x7f990c005740 con 0x7f994410a6d0 2026-03-09T15:55:24.302 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.295+0000 7f9941ffb640 1 -- 192.168.123.101:0/2650669516 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 7}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7f993006c970 con 0x7f994410a6d0 2026-03-09T15:55:24.311 INFO:teuthology.orchestra.run.vm01.stdout:214748364819 2026-03-09T15:55:24.328 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.323+0000 7f994ae14640 1 -- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f99180777d0 msgr2=0x7f9918079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.328 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.323+0000 7f994ae14640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f99180777d0 0x7f9918079c90 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f99441a2150 tx=0x7f993400f040 comp rx=0 tx=0).stop 2026-03-09T15:55:24.328 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.323+0000 7f994ae14640 1 -- 192.168.123.101:0/2650669516 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f994410a6d0 msgr2=0x7f99441a0990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.328 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.323+0000 7f994ae14640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f994410a6d0 0x7f99441a0990 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f9930009860 tx=0x7f9930009d30 comp rx=0 tx=0).stop 2026-03-09T15:55:24.328 INFO:tasks.cephadm.ceph_manager.ceph:need seq 184683593753 got 184683593754 for osd.6 2026-03-09T15:55:24.328 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.327+0000 7f994ae14640 1 -- 192.168.123.101:0/2650669516 shutdown_connections 2026-03-09T15:55:24.328 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.327+0000 7f994ae14640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] 
conn(0x7f99180777d0 0x7f9918079c90 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.328 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.327+0000 7f994ae14640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f99441a1410 0x7f99441be000 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.328 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.327+0000 7f994ae14640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f994410b080 0x7f99441a0ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.328 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.327+0000 7f994ae14640 1 --2- 192.168.123.101:0/2650669516 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f994410a6d0 0x7f99441a0990 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.328 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.327+0000 7f994ae14640 1 -- 192.168.123.101:0/2650669516 >> 192.168.123.101:0/2650669516 conn(0x7f994406dfc0 msgr2=0x7f994410d650 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:24.328 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.327+0000 7f994ae14640 1 -- 192.168.123.101:0/2650669516 shutdown_connections 2026-03-09T15:55:24.329 DEBUG:teuthology.parallel:result is None 2026-03-09T15:55:24.329 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.327+0000 7f994ae14640 1 -- 192.168.123.101:0/2650669516 wait complete. 2026-03-09T15:55:24.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.339+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/1406944844 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) -- 0x7fe798005470 con 0x7fe7c00a5a60 2026-03-09T15:55:24.340 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.339+0000 7fe7b67fc640 1 -- 192.168.123.101:0/1406944844 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7fe7b8002e20 con 0x7fe7c00a5a60 2026-03-09T15:55:24.340 INFO:teuthology.orchestra.run.vm01.stdout:34359738435 2026-03-09T15:55:24.350 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fe7a00777d0 msgr2=0x7fe7a0079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.350 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fe7a00777d0 0x7fe7a0079c90 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fe7bc030420 tx=0x7fe7bc032040 comp rx=0 tx=0).stop 2026-03-09T15:55:24.350 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7c00a5a60 msgr2=0x7fe7c013cbb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.350 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 --2- 192.168.123.101:0/1406944844 >> 
[v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7c00a5a60 0x7fe7c013cbb0 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7fe7b800e9f0 tx=0x7fe7b800eec0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.351 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/1406944844 shutdown_connections 2026-03-09T15:55:24.351 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fe7a00777d0 0x7fe7a0079c90 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.351 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fe7c00aa1e0 0x7fe7c01597c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.351 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fe7c00a5a60 0x7fe7c013cbb0 unknown :-1 s=CLOSED pgs=168 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.351 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 --2- 192.168.123.101:0/1406944844 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7c00a50b0 0x7fe7c013c670 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.351 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/1406944844 >> 192.168.123.101:0/1406944844 conn(0x7fe7c001abc0 msgr2=0x7fe7c00a2990 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:24.351 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/1406944844 shutdown_connections 2026-03-09T15:55:24.351 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.347+0000 7fe7ce7fd640 1 -- 192.168.123.101:0/1406944844 wait complete. 2026-03-09T15:55:24.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:23 vm09 bash[22983]: cluster 2026-03-09T15:55:22.645164+0000 mgr.y (mgr.14520) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:24.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:23 vm09 bash[22983]: cluster 2026-03-09T15:55:22.645164+0000 mgr.y (mgr.14520) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:24.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:23 vm09 bash[22983]: audit 2026-03-09T15:55:23.898473+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 192.168.123.101:0/220951136' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T15:55:24.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:23 vm09 bash[22983]: audit 2026-03-09T15:55:23.898473+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 
192.168.123.101:0/220951136' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T15:55:24.407 INFO:teuthology.orchestra.run.vm01.stdout:133143986218 2026-03-09T15:55:24.407 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.399+0000 7f4d4cf95640 1 -- 192.168.123.101:0/2404491854 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 4} v 0) -- 0x7f4d14005470 con 0x7f4d4810a850 2026-03-09T15:55:24.407 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.403+0000 7f4d44ff9640 1 -- 192.168.123.101:0/2404491854 <== mon.2 v2:192.168.123.101:3301/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 4}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7f4d30066520 con 0x7f4d4810a850 2026-03-09T15:55:24.422 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.419+0000 7f4d267fc640 1 -- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f4d200777d0 msgr2=0x7f4d20079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.423 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.419+0000 7f4d267fc640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f4d200777d0 0x7f4d20079c90 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f4d3c005f10 tx=0x7f4d3c005ea0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.423 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.419+0000 7f4d267fc640 1 -- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d4810a850 msgr2=0x7f4d4811a060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.423 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.419+0000 7f4d267fc640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d4810a850 0x7f4d4811a060 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f4d3000cce0 tx=0x7f4d30007590 comp rx=0 tx=0).stop 2026-03-09T15:55:24.423 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.411+0000 7fb3c0b10640 1 -- 192.168.123.101:0/1541889008 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 5} v 0) -- 0x7fb384005740 con 0x7fb3bc10aff0 2026-03-09T15:55:24.423 INFO:teuthology.orchestra.run.vm01.stdout:158913789985 2026-03-09T15:55:24.423 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.411+0000 7fb3b8ff9640 1 -- 192.168.123.101:0/1541889008 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 5}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7fb3ac0669b0 con 0x7fb3bc10aff0 2026-03-09T15:55:24.428 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7f4d267fc640 1 -- 192.168.123.101:0/2404491854 shutdown_connections 2026-03-09T15:55:24.428 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7f4d267fc640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f4d200777d0 0x7f4d20079c90 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.428 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7f4d267fc640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f4d4811c780 0x7f4d48112bd0 unknown :-1 s=CLOSED 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.428 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7f4d267fc640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f4d4810a850 0x7f4d4811a060 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.428 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7f4d267fc640 1 --2- 192.168.123.101:0/2404491854 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4d4810a470 0x7f4d48119b20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.428 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7f4d267fc640 1 -- 192.168.123.101:0/2404491854 >> 192.168.123.101:0/2404491854 conn(0x7f4d4806d9c0 msgr2=0x7f4d4811cc60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:24.430 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7f4d267fc640 1 -- 192.168.123.101:0/2404491854 shutdown_connections 2026-03-09T15:55:24.431 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7f4d267fc640 1 -- 192.168.123.101:0/2404491854 wait complete. 2026-03-09T15:55:24.431 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7fb39a7fc640 1 -- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb380077750 msgr2=0x7fb380079c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.431 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7fb39a7fc640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb380077750 0x7fb380079c10 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fb3a40046d0 tx=0x7fb3a4004050 comp rx=0 tx=0).stop 2026-03-09T15:55:24.431 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7fb39a7fc640 1 -- 192.168.123.101:0/1541889008 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3bc10aff0 msgr2=0x7fb3bc1a5b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:24.431 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.427+0000 7fb39a7fc640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3bc10aff0 0x7fb3bc1a5b90 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fb3ac00bd70 tx=0x7fb3ac00be70 comp rx=0 tx=0).stop 2026-03-09T15:55:24.431 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574907 got 55834574908 for osd.1 2026-03-09T15:55:24.431 DEBUG:teuthology.parallel:result is None 2026-03-09T15:55:24.432 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.431+0000 7fb39a7fc640 1 -- 192.168.123.101:0/1541889008 shutdown_connections 2026-03-09T15:55:24.432 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.431+0000 7fb39a7fc640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fb380077750 0x7fb380079c10 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.432 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.431+0000 7fb39a7fc640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fb3bc1a0770 0x7fb3bc1a0c00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-09T15:55:24.432 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.431+0000 7fb39a7fc640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb3bc10aff0 0x7fb3bc1a5b90 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.432 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.431+0000 7fb39a7fc640 1 --2- 192.168.123.101:0/1541889008 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fb3bc10a6d0 0x7fb3bc1a5650 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:24.432 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.431+0000 7fb39a7fc640 1 -- 192.168.123.101:0/1541889008 >> 192.168.123.101:0/1541889008 conn(0x7fb3bc06ee30 msgr2=0x7fb3bc0733a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:24.433 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.431+0000 7fb39a7fc640 1 -- 192.168.123.101:0/1541889008 shutdown_connections 2026-03-09T15:55:24.434 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:24.431+0000 7fb39a7fc640 1 -- 192.168.123.101:0/1541889008 wait complete. 2026-03-09T15:55:24.545 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738434 got 34359738435 for osd.0 2026-03-09T15:55:24.546 DEBUG:teuthology.parallel:result is None 2026-03-09T15:55:24.547 INFO:tasks.cephadm.ceph_manager.ceph:need seq 214748364818 got 214748364819 for osd.7 2026-03-09T15:55:24.547 DEBUG:teuthology.parallel:result is None 2026-03-09T15:55:24.579 INFO:tasks.cephadm.ceph_manager.ceph:need seq 158913789984 got 158913789985 for osd.5 2026-03-09T15:55:24.579 DEBUG:teuthology.parallel:result is None 2026-03-09T15:55:24.595 INFO:tasks.cephadm.ceph_manager.ceph:need seq 133143986217 got 133143986218 for osd.4 2026-03-09T15:55:24.596 DEBUG:teuthology.parallel:result is None 2026-03-09T15:55:24.596 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T15:55:24.596 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph pg dump --format=json 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:23.976885+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.101:0/2925051888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:23.976885+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.101:0/2925051888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.194831+0000 mon.a (mon.0) 858 : audit [DBG] from='client.? 192.168.123.101:0/3828871474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.194831+0000 mon.a (mon.0) 858 : audit [DBG] from='client.? 
192.168.123.101:0/3828871474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.221388+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 192.168.123.101:0/2463022338' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.221388+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 192.168.123.101:0/2463022338' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.292636+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 192.168.123.101:0/2650669516' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.292636+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 192.168.123.101:0/2650669516' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.342114+0000 mon.a (mon.0) 859 : audit [DBG] from='client.? 192.168.123.101:0/1406944844' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.342114+0000 mon.a (mon.0) 859 : audit [DBG] from='client.? 192.168.123.101:0/1406944844' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.402765+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 192.168.123.101:0/2404491854' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.402765+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 192.168.123.101:0/2404491854' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.412324+0000 mon.b (mon.1) 32 : audit [DBG] from='client.? 192.168.123.101:0/1541889008' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:24 vm01 bash[28152]: audit 2026-03-09T15:55:24.412324+0000 mon.b (mon.1) 32 : audit [DBG] from='client.? 192.168.123.101:0/1541889008' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:23.976885+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.101:0/2925051888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:23.976885+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 
192.168.123.101:0/2925051888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.194831+0000 mon.a (mon.0) 858 : audit [DBG] from='client.? 192.168.123.101:0/3828871474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.194831+0000 mon.a (mon.0) 858 : audit [DBG] from='client.? 192.168.123.101:0/3828871474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.221388+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 192.168.123.101:0/2463022338' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.221388+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 192.168.123.101:0/2463022338' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T15:55:25.195 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.292636+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 192.168.123.101:0/2650669516' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T15:55:25.196 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.292636+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 192.168.123.101:0/2650669516' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T15:55:25.196 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.342114+0000 mon.a (mon.0) 859 : audit [DBG] from='client.? 192.168.123.101:0/1406944844' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T15:55:25.196 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.342114+0000 mon.a (mon.0) 859 : audit [DBG] from='client.? 192.168.123.101:0/1406944844' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T15:55:25.196 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.402765+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 192.168.123.101:0/2404491854' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T15:55:25.196 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.402765+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 192.168.123.101:0/2404491854' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T15:55:25.196 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.412324+0000 mon.b (mon.1) 32 : audit [DBG] from='client.? 192.168.123.101:0/1541889008' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T15:55:25.196 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:24 vm01 bash[20728]: audit 2026-03-09T15:55:24.412324+0000 mon.b (mon.1) 32 : audit [DBG] from='client.? 
192.168.123.101:0/1541889008' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:23.976885+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.101:0/2925051888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:23.976885+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.101:0/2925051888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.194831+0000 mon.a (mon.0) 858 : audit [DBG] from='client.? 192.168.123.101:0/3828871474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.194831+0000 mon.a (mon.0) 858 : audit [DBG] from='client.? 192.168.123.101:0/3828871474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.221388+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 192.168.123.101:0/2463022338' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.221388+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 192.168.123.101:0/2463022338' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.292636+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 192.168.123.101:0/2650669516' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.292636+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 192.168.123.101:0/2650669516' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.342114+0000 mon.a (mon.0) 859 : audit [DBG] from='client.? 192.168.123.101:0/1406944844' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.342114+0000 mon.a (mon.0) 859 : audit [DBG] from='client.? 192.168.123.101:0/1406944844' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.402765+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 192.168.123.101:0/2404491854' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.402765+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 
192.168.123.101:0/2404491854' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.412324+0000 mon.b (mon.1) 32 : audit [DBG] from='client.? 192.168.123.101:0/1541889008' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T15:55:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:24 vm09 bash[22983]: audit 2026-03-09T15:55:24.412324+0000 mon.b (mon.1) 32 : audit [DBG] from='client.? 192.168.123.101:0/1541889008' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T15:55:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:25 vm09 bash[22983]: cluster 2026-03-09T15:55:24.645797+0000 mgr.y (mgr.14520) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:25 vm09 bash[22983]: cluster 2026-03-09T15:55:24.645797+0000 mgr.y (mgr.14520) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:55:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:55:26.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:25 vm01 bash[28152]: cluster 2026-03-09T15:55:24.645797+0000 mgr.y (mgr.14520) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:26.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:25 vm01 bash[28152]: cluster 2026-03-09T15:55:24.645797+0000 mgr.y (mgr.14520) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:26.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:25 vm01 bash[20728]: cluster 2026-03-09T15:55:24.645797+0000 mgr.y (mgr.14520) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:26.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:25 vm01 bash[20728]: cluster 2026-03-09T15:55:24.645797+0000 mgr.y (mgr.14520) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:28 vm09 bash[22983]: audit 2026-03-09T15:55:26.177217+0000 mgr.y (mgr.14520) 60 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:28 vm09 bash[22983]: audit 2026-03-09T15:55:26.177217+0000 mgr.y (mgr.14520) 60 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:28 vm09 bash[22983]: cluster 2026-03-09T15:55:26.646244+0000 mgr.y (mgr.14520) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:28 vm09 bash[22983]: cluster 2026-03-09T15:55:26.646244+0000 mgr.y 
(mgr.14520) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:28 vm09 bash[22983]: audit 2026-03-09T15:55:27.742689+0000 mon.a (mon.0) 860 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:28 vm09 bash[22983]: audit 2026-03-09T15:55:27.742689+0000 mon.a (mon.0) 860 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:28 vm01 bash[28152]: audit 2026-03-09T15:55:26.177217+0000 mgr.y (mgr.14520) 60 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:28 vm01 bash[28152]: audit 2026-03-09T15:55:26.177217+0000 mgr.y (mgr.14520) 60 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:28 vm01 bash[28152]: cluster 2026-03-09T15:55:26.646244+0000 mgr.y (mgr.14520) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:28 vm01 bash[28152]: cluster 2026-03-09T15:55:26.646244+0000 mgr.y (mgr.14520) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:28 vm01 bash[28152]: audit 2026-03-09T15:55:27.742689+0000 mon.a (mon.0) 860 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:28 vm01 bash[28152]: audit 2026-03-09T15:55:27.742689+0000 mon.a (mon.0) 860 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:28 vm01 bash[20728]: audit 2026-03-09T15:55:26.177217+0000 mgr.y (mgr.14520) 60 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:28 vm01 bash[20728]: audit 2026-03-09T15:55:26.177217+0000 mgr.y (mgr.14520) 60 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:28 vm01 bash[20728]: cluster 2026-03-09T15:55:26.646244+0000 mgr.y (mgr.14520) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:28 vm01 bash[20728]: cluster 2026-03-09T15:55:26.646244+0000 mgr.y (mgr.14520) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:28 vm01 bash[20728]: audit 2026-03-09T15:55:27.742689+0000 mon.a (mon.0) 860 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:28.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:28 vm01 bash[20728]: audit 2026-03-09T15:55:27.742689+0000 mon.a (mon.0) 860 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:29.298 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:29.347 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:29 vm01 bash[20728]: cluster 2026-03-09T15:55:28.646816+0000 mgr.y (mgr.14520) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:29.347 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:29 vm01 bash[20728]: cluster 2026-03-09T15:55:28.646816+0000 mgr.y (mgr.14520) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:29.347 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:29 vm01 bash[28152]: cluster 2026-03-09T15:55:28.646816+0000 mgr.y (mgr.14520) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:29.347 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:29 vm01 bash[28152]: cluster 2026-03-09T15:55:28.646816+0000 mgr.y (mgr.14520) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:29 vm09 bash[22983]: cluster 2026-03-09T15:55:28.646816+0000 mgr.y (mgr.14520) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:29 vm09 bash[22983]: cluster 2026-03-09T15:55:28.646816+0000 mgr.y (mgr.14520) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.455+0000 7eff65394640 1 -- 192.168.123.101:0/2167325067 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7eff60113b80 msgr2=0x7eff60115f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.455+0000 7eff65394640 1 --2- 192.168.123.101:0/2167325067 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7eff60113b80 0x7eff60115f70 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7eff5000b0a0 tx=0x7eff5002f470 comp rx=0 tx=0).stop 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.455+0000 7eff65394640 1 -- 192.168.123.101:0/2167325067 shutdown_connections 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.455+0000 7eff65394640 1 --2- 192.168.123.101:0/2167325067 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7eff60113b80 0x7eff60115f70 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.455+0000 7eff65394640 1 --2- 192.168.123.101:0/2167325067 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7eff60077f40 0x7eff60113640 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.455+0000 7eff65394640 1 --2- 192.168.123.101:0/2167325067 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff60077620 0x7eff60077a00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.455+0000 7eff65394640 1 -- 192.168.123.101:0/2167325067 >> 192.168.123.101:0/2167325067 conn(0x7eff601009e0 msgr2=0x7eff60102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.455+0000 7eff65394640 1 -- 192.168.123.101:0/2167325067 shutdown_connections 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff65394640 1 -- 192.168.123.101:0/2167325067 wait complete. 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff65394640 1 Processor -- start 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff65394640 1 -- start start 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff65394640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7eff60077620 0x7eff601a0d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:29.459 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5effd640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7eff60077620 0x7eff601a0d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5effd640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7eff60077620 0x7eff601a0d70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:37024/0 (socket says 192.168.123.101:37024) 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff65394640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7eff60077f40 0x7eff601a12b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff65394640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff60113b80 0x7eff601a5640 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff65394640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7eff601187d0 con 0x7eff60077f40 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5effd640 1 -- 192.168.123.101:0/1134356714 learned_addr learned my addr 192.168.123.101:0/1134356714 
(peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7eff60118650 con 0x7eff60113b80 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7eff60118950 con 0x7eff60077620 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5f7fe640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff60113b80 0x7eff601a5640 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5e7fc640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7eff60077f40 0x7eff601a12b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5e7fc640 1 -- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7eff60077620 msgr2=0x7eff601a0d70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5e7fc640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7eff60077620 0x7eff601a0d70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:29.460 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5e7fc640 1 -- 192.168.123.101:0/1134356714 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff60113b80 msgr2=0x7eff601a5640 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:29.461 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5e7fc640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff60113b80 0x7eff601a5640 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:29.461 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5e7fc640 1 -- 192.168.123.101:0/1134356714 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7eff601a5dc0 con 0x7eff60077f40 2026-03-09T15:55:29.461 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5e7fc640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7eff60077f40 0x7eff601a12b0 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7eff4c00da20 tx=0x7eff4c00def0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:29.461 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff5effd640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7eff60077620 0x7eff601a0d70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).send_auth_request state changed! 2026-03-09T15:55:29.461 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff43fff640 1 -- 192.168.123.101:0/1134356714 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7eff4c014070 con 0x7eff60077f40 2026-03-09T15:55:29.461 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7eff601a60b0 con 0x7eff60077f40 2026-03-09T15:55:29.463 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7eff601ad8f0 con 0x7eff60077f40 2026-03-09T15:55:29.463 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff43fff640 1 -- 192.168.123.101:0/1134356714 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7eff4c004510 con 0x7eff60077f40 2026-03-09T15:55:29.463 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.459+0000 7eff43fff640 1 -- 192.168.123.101:0/1134356714 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7eff4c004e50 con 0x7eff60077f40 2026-03-09T15:55:29.463 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.463+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7eff2c005180 con 0x7eff60077f40 2026-03-09T15:55:29.467 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.463+0000 7eff43fff640 1 -- 192.168.123.101:0/1134356714 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7eff4c0040d0 con 0x7eff60077f40 2026-03-09T15:55:29.467 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.463+0000 7eff43fff640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7eff28077970 0x7eff28079e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:29.467 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.463+0000 7eff43fff640 1 -- 192.168.123.101:0/1134356714 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7eff4c0999e0 con 0x7eff60077f40 2026-03-09T15:55:29.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.467+0000 7eff5effd640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7eff28077970 0x7eff28079e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:29.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.467+0000 7eff5effd640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7eff28077970 0x7eff28079e30 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7eff540059c0 tx=0x7eff5400a2b0 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:29.468 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.467+0000 7eff43fff640 1 -- 192.168.123.101:0/1134356714 <== mon.0 v2:192.168.123.101:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7eff4c066380 con 0x7eff60077f40 2026-03-09T15:55:29.569 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.567+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 --> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7eff2c002bf0 con 0x7eff28077970 2026-03-09T15:55:29.575 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.575+0000 7eff43fff640 1 -- 192.168.123.101:0/1134356714 <== mgr.14520 v2:192.168.123.101:6800/123914266 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+346481 (secure 0 0 0) 0x7eff2c002bf0 con 0x7eff28077970 2026-03-09T15:55:29.576 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:55:29.577 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7eff28077970 msgr2=0x7eff28079e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7eff28077970 0x7eff28079e30 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7eff540059c0 tx=0x7eff5400a2b0 comp rx=0 tx=0).stop 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7eff60077f40 msgr2=0x7eff601a12b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7eff60077f40 0x7eff601a12b0 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7eff4c00da20 tx=0x7eff4c00def0 comp rx=0 tx=0).stop 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 shutdown_connections 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7eff28077970 0x7eff28079e30 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff60113b80 0x7eff601a5640 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 --2- 192.168.123.101:0/1134356714 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7eff60077f40 0x7eff601a12b0 unknown :-1 s=CLOSED pgs=169 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 --2- 192.168.123.101:0/1134356714 >> 
[v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7eff60077620 0x7eff601a0d70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 >> 192.168.123.101:0/1134356714 conn(0x7eff601009e0 msgr2=0x7eff60102dd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 shutdown_connections 2026-03-09T15:55:29.581 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:29.579+0000 7eff65394640 1 -- 192.168.123.101:0/1134356714 wait complete. 2026-03-09T15:55:29.654 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":26,"stamp":"2026-03-09T15:55:28.646534+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":907,"num_read_kb":766,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221648,"kb_used_data":6940,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167517744,"statfs":{"total":171765137408,"available":171538169856,"internally_reserved":0,"allocated":7106560,"data_stored":3715513,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12709,"internal_metadata":219663963},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote"
:0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002976"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.608185+0000","last_change":"2026-03-09T15:54:17.381614+0000","last_active":"2026-03-09T15:54:42.608185+0000","last_peered":"2026-03-09T15:54:42.608185+0000","last_clean":"2026-03-09T15:54:42.608185+0000","last_became_active":"2026-03-09T15:54:17.381426+0000","last_became_peered":"2026-03-09T15:54:17.381426+0000","last_unstale":"2026-03-09T15:54:42.608185+0000","last_undegraded":"2026-03-09T15:54:42.608185+0000","last_fullsized":"2026-03-09T15:54:42.608185+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:21:13.560580+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071608+0000","last_change":"2026-03-09T15:54:11.362363+0000","last_active":"2026-03-09T15:54:43.071608+0000","last_peered":"2026-03-09T15:54:43.071608+0000","last_clean":"2026-03-09T15:54:43.071608+0000","last_became_active":"2026-03-09T15:54:11.361461+0000","last_became_peered":"2026-03-09T15:54:11.361461+0000","last_unstale":"2026-03-09T15:54:43.071608+0000","last_undegraded":"2026-03-09T15:54:43.071608+0000","last_fullsized":"2026-03-09T15:54:43.071608+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:14:16.946358+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"62'10","reported_seq":42,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.608226+0000","last_change":"2026-03-09T15:54:13.355864+0000","last_active":"2026-03-09T15:54:42.608226+0000","last_peered":"2026-03-09T15:54:42.608226+0000","last_clean":"2026-03-09T15:54:42.608226+0000","last_became_active":"2026-03-09T15:54:13.355604+0000","last_became_peered":"2026-03-09T15:54:13.355604+0000","last_unstale":"2026-03-09T15:54:42.608226+0000","last_undegraded":"2026-03-09T15:54:42.608226+0000","last_fullsized":"2026-03-09T15:54:42.608226+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:57:26.744585+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725528+0000","last_change":"2026-03-09T15:54:15.366318+0000","last_active":"2026-03-09T15:54:42.725528+0000","last_peered":"2026-03-09T15:54:42.725528+0000","last_clean":"2026-03-09T15:54:42.725528+0000","last_became_active":"2026-03-09T15:54:15.366170+0000","last_became_peered":"2026-03-09T15:54:15.366170+0000","last_unstale":"2026-03-09T15:54:42.725528+0000","last_undegraded":"2026-03-09T15:54:42.725528+0000","last_fullsized":"2026-03-09T15:54:42.725528+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:58:08.472942+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1e","version":"54'1","reported_seq":39,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607269+0000","last_change":"2026-03-09T15:54:11.344598+0000","last_active":"2026-03-09T15:54:42.607269+0000","last_peered":"2026-03-09T15:54:42.607269+0000","last_clean":"2026-03-09T15:54:42.607269+0000","last_became_active":"2026-03-09T15:54:11.344446+0000","last_became_peered":"2026-03-09T15:54:11.344446+0000","last_unstale":"2026-03-09T15:54:42.607269+0000","last_undegraded":"2026-03-09T15:54:42.607269+0000","last_fullsized":"2026-03-09T15:54:42.607269+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:56:45.531791+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"62'11","reported_seq":46,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071959+0000","last_change":"2026-03-09T15:54:13.360634+0000","last_active":"2026-03-09T15:54:43.071959+0000","last_peered":"2026-03-09T15:54:43.071959+0000","last_clean":"2026-03-09T15:54:43.071959+0000","last_became_active":"2026-03-09T15:54:13.354798+0000","last_became_peered":"2026-03-09T15:54:13.354798+0000","last_unstale":"2026-03-09T15:54:43.071959+0000","last_undegraded":"2026-03-09T15:54:43.071959+0000","last_fullsized":"2026-03-09T15:54:43.071959+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:34:21.064892+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610484+0000","last_change":"2026-03-09T15:54:15.369094+0000","last_active":"2026-03-09T15:54:42.610484+0000","last_peered":"2026-03-09T15:54:42.610484+0000","last_clean":"2026-03-09T15:54:42.610484+0000","last_became_active":"2026-03-09T15:54:15.368996+0000","last_became_peered":"2026-03-09T15:54:15.368996+0000","last_unstale":"2026-03-09T15:54:42.610484+0000","last_undegraded":"2026-03-09T15:54:42.610484+0000","last_fullsized":"2026-03-09T15:54:42.610484+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:53:29.393338+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.724858+0000","last_change":"2026-03-09T15:54:17.388558+0000","last_active":"2026-03-09T15:54:42.724858+0000","last_peered":"2026-03-09T15:54:42.724858+0000","last_clean":"2026-03-09T15:54:42.724858+0000","last_became_active":"2026-03-09T15:54:17.385514+0000","last_became_peered":"2026-03-09T15:54:17.385514+0000","last_unstale":"2026-03-09T15:54:42.724858+0000","last_undegraded":"2026-03-09T15:54:42.724858+0000","last_fullsized":"2026-03-09T15:54:42.724858+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:57:18.231700+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070785+0000","last_change":"2026-03-09T15:54:11.366913+0000","last_active":"2026-03-09T15:54:43.070785+0000","last_peered":"2026-03-09T15:54:43.070785+0000","last_clean":"2026-03-09T15:54:43.070785+0000","last_became_active":"2026-03-09T15:54:11.366719+0000","last_became_peered":"2026-03-09T15:54:11.366719+0000","last_unstale":"2026-03-09T15:54:43.070785+0000","last_undegraded":"2026-03-09T15:54:43.070785+0000","last_fullsized":"2026-03-09T15:54:43.070785+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:48:15.023241+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"61'15","reported_seq":52,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612587+0000","last_change":"2026-03-09T15:54:13.351676+0000","last_active":"2026-03-09T15:54:42.612587+0000","last_peered":"2026-03-09T15:54:42.612587+0000","last_clean":"2026-03-09T15:54:42.612587+0000","last_became_active":"2026-03-09T15:54:13.351510+0000","last_became_peered":"2026-03-09T15:54:13.351510+0000","last_unstale":"2026-03-09T15:54:42.612587+0000","last_undegraded":"2026-03-09T15:54:42.612587+0000","last_fullsized":"2026-03-09T15:54:42.612587+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:43:52.051889+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070873+0000","last_change":"2026-03-09T15:54:15.379526+0000","last_active":"2026-03-09T15:54:43.070873+0000","last_peered":"2026-03-09T15:54:43.070873+0000","last_clean":"2026-03-09T15:54:43.070873+0000","last_became_active":"2026-03-09T15:54:15.377552+0000","last_became_peered":"2026-03-09T15:54:15.377552+0000","last_unstale":"2026-03-09T15:54:43.070873+0000","last_undegraded":"2026-03-09T15:54:43.070873+0000","last_fullsized":"2026-03-09T15:54:43.070873+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:52:35.402533+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612480+0000","last_change":"2026-03-09T15:54:17.376486+0000","last_active":"2026-03-09T15:54:42.612480+0000","last_peered":"2026-03-09T15:54:42.612480+0000","last_clean":"2026-03-09T15:54:42.612480+0000","last_became_active":"2026-03-09T15:54:17.376386+0000","last_became_peered":"2026-03-09T15:54:17.376386+0000","last_unstale":"2026-03-09T15:54:42.612480+0000","last_undegraded":"2026-03-09T15:54:42.612480+0000","last_fullsized":"2026-03-09T15:54:42.612480+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:29:24.059321+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070685+0000","last_change":"2026-03-09T15:54:11.360504+0000","last_active":"2026-03-09T15:54:43.070685+0000","last_peered":"2026-03-09T15:54:43.070685+0000","last_clean":"2026-03-09T15:54:43.070685+0000","last_became_active":"2026-03-09T15:54:11.360411+0000","last_became_peered":"2026-03-09T15:54:11.360411+0000","last_unstale":"2026-03-09T15:54:43.070685+0000","last_undegraded":"2026-03-09T15:54:43.070685+0000","last_fullsized":"2026-03-09T15:54:43.070685+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:47:59.369509+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","version":"62'12","reported_seq":50,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611421+0000","last_change":"2026-03-09T15:54:13.354473+0000","last_active":"2026-03-09T15:54:42.611421+0000","last_peered":"2026-03-09T15:54:42.611421+0000","last_clean":"2026-03-09T15:54:42.611421+0000","last_became_active":"2026-03-09T15:54:13.354400+0000","last_became_peered":"2026-03-09T15:54:13.354400+0000","last_unstale":"2026-03-09T15:54:42.611421+0000","last_undegraded":"2026-03-09T15:54:42.611421+0000","last_fullsized":"2026-03-09T15:54:42.611421+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:37:49.942670+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611349+0000","last_change":"2026-03-09T15:54:15.383541+0000","last_active":"2026-03-09T15:54:42.611349+0000","last_peered":"2026-03-09T15:54:42.611349+0000","last_clean":"2026-03-09T15:54:42.611349+0000","last_became_active":"2026-03-09T15:54:15.383409+0000","last_became_peered":"2026-03-09T15:54:15.383409+0000","last_unstale":"2026-03-09T15:54:42.611349+0000","last_undegraded":"2026-03-09T15:54:42.611349+0000","last_fullsized":"2026-03-09T15:54:42.611349+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:22:42.222598+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070974+0000","last_change":"2026-03-09T15:54:17.376725+0000","last_active":"2026-03-09T15:54:43.070974+0000","last_peered":"2026-03-09T15:54:43.070974+0000","last_clean":"2026-03-09T15:54:43.070974+0000","last_became_active":"2026-03-09T15:54:17.376498+0000","last_became_peered":"2026-03-09T15:54:17.376498+0000","last_unstale":"2026-03-09T15:54:43.070974+0000","last_undegraded":"2026-03-09T15:54:43.070974+0000","last_fullsized":"2026-03-09T15:54:43.070974+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:12:13.199965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a","version":"62'19","reported_seq":58,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072871+0000","last_change":"2026-03-09T15:54:13.359646+0000","last_active":"2026-03-09T15:54:43.072871+0000","last_peered":"2026-03-09T15:54:43.072871+0000","last_clean":"2026-03-09T15:54:43.072871+0000","last_became_active":"2026-03-09T15:54:13.359412+0000","last_became_peered":"2026-03-09T15:54:13.359412+0000","last_unstale":"2026-03-09T15:54:43.072871+0000","last_undegraded":"2026-03-09T15:54:43.072871+0000","last_fullsized":"2026-03-09T15:54:43.072871+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:18:47.639834+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070525+0000","last_change":"2026-03-09T15:54:11.360161+0000","last_active":"2026-03-09T15:54:43.070525+0000","last_peered":"2026-03-09T15:54:43.070525+0000","last_clean":"2026-03-09T15:54:43.070525+0000","last_became_active":"2026-03-09T15:54:11.360032+0000","last_became_peered":"2026-03-09T15:54:11.360032+0000","last_unstale":"2026-03-09T15:54:43.070525+0000","last_undegraded":"2026-03-09T15:54:43.070525+0000","last_fullsized":"2026-03-09T15:54:43.070525+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:05:25.991121+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610432+0000","last_change":"2026-03-09T15:54:15.367415+0000","last_active":"2026-03-09T15:54:42.610432+0000","last_peered":"2026-03-09T15:54:42.610432+0000","last_clean":"2026-03-09T15:54:42.610432+0000","last_became_active":"2026-03-09T15:54:15.366623+0000","last_became_peered":"2026-03-09T15:54:15.366623+0000","last_unstale":"2026-03-09T15:54:42.610432+0000","last_undegraded":"2026-03-09T15:54:42.610432+0000","last_fullsized":"2026-03-09T15:54:42.610432+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:03:29.130787+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073979+0000","last_change":"2026-03-09T15:54:17.389455+0000","last_active":"2026-03-09T15:54:43.073979+0000","last_peered":"2026-03-09T15:54:43.073979+0000","last_clean":"2026-03-09T15:54:43.073979+0000","last_became_active":"2026-03-09T15:54:17.387802+0000","last_became_peered":"2026-03-09T15:54:17.387802+0000","last_unstale":"2026-03-09T15:54:43.073979+0000","last_undegraded":"2026-03-09T15:54:43.073979+0000","last_fullsized":"2026-03-09T15:54:43.073979+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:54:13.560516+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607324+0000","last_change":"2026-03-09T15:54:13.357286+0000","last_active":"2026-03-09T15:54:42.607324+0000","last_peered":"2026-03-09T15:54:42.607324+0000","last_clean":"2026-03-09T15:54:42.607324+0000","last_became_active":"2026-03-09T15:54:13.356206+0000","last_became_peered":"2026-03-09T15:54:13.356206+0000","last_unstale":"2026-03-09T15:54:42.607324+0000","last_undegraded":"2026-03-09T15:54:42.607324+0000","last_fullsized":"2026-03-09T15:54:42.607324+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:15:34.380881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609851+0000","last_change":"2026-03-09T15:54:11.364404+0000","last_active":"2026-03-09T15:54:42.609851+0000","last_peered":"2026-03-09T15:54:42.609851+0000","last_clean":"2026-03-09T15:54:42.609851+0000","last_became_active":"2026-03-09T15:54:11.363935+0000","last_became_peered":"2026-03-09T15:54:11.363935+0000","last_unstale":"2026-03-09T15:54:42.609851+0000","last_undegraded":"2026-03-09T15:54:42.609851+0000","last_fullsized":"2026-03-09T15:54:42.609851+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:33:04.895728+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d","version":"62'11","reported_seq":48,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:16.413959+0000","last_change":"2026-03-09T15:54:15.378921+0000","last_active":"2026-03-09T15:55:16.413959+0000","last_peered":"2026-03-09T15:55:16.413959+0000","last_clean":"2026-03-09T15:55:16.413959+0000","last_became_active":"2026-03-09T15:54:15.378285+0000","last_became_peered":"2026-03-09T15:54:15.378285+0000","last_unstale":"2026-03-09T15:55:16.413959+0000","last_undegraded":"2026-03-09T15:55:16.413959+0000","last_fullsized":"2026-03-09T15:55:16.413959+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:04:15.316324+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726576+0000","last_change":"2026-03-09T15:54:17.381082+0000","last_active":"2026-03-09T15:54:42.726576+0000","last_peered":"2026-03-09T15:54:42.726576+0000","last_clean":"2026-03-09T15:54:42.726576+0000","last_became_active":"2026-03-09T15:54:17.380847+0000","last_became_peered":"2026-03-09T15:54:17.380847+0000","last_unstale":"2026-03-09T15:54:42.726576+0000","last_undegraded":"2026-03-09T15:54:42.726576+0000","last_fullsized":"2026-03-09T15:54:42.726576+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:08:20.888387+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"61'15","reported_seq":52,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607386+0000","last_change":"2026-03-09T15:54:13.357355+0000","last_active":"2026-03-09T15:54:42.607386+0000","last_peered":"2026-03-09T15:54:42.607386+0000","last_clean":"2026-03-09T15:54:42.607386+0000","last_became_active":"2026-03-09T15:54:13.356303+0000","last_became_peered":"2026-03-09T15:54:13.356303+0000","last_unstale":"2026-03-09T15:54:42.607386+0000","last_undegraded":"2026-03-09T15:54:42.607386+0000","last_fullsized":"2026-03-09T15:54:42.607386+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:14:41.902134+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.9","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609898+0000","last_change":"2026-03-09T15:54:11.364112+0000","last_active":"2026-03-09T15:54:42.609898+0000","last_peered":"2026-03-09T15:54:42.609898+0000","last_clean":"2026-03-09T15:54:42.609898+0000","last_became_active":"2026-03-09T15:54:11.363963+0000","last_became_peered":"2026-03-09T15:54:11.363963+0000","last_unstale":"2026-03-09T15:54:42.609898+0000","last_undegraded":"2026-03-09T15:54:42.609898+0000","last_fullsized":"2026-03-09T15:54:42.609898+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:53:10.209701+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"62'11","reported_seq":48,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:16.411914+0000","last_change":"2026-03-09T15:54:15.391051+0000","last_active":"2026-03-09T15:55:16.411914+0000","last_peered":"2026-03-09T15:55:16.411914+0000","last_clean":"2026-03-09T15:55:16.411914+0000","last_became_active":"2026-03-09T15:54:15.390780+0000","last_became_peered":"2026-03-09T15:54:15.390780+0000","last_unstale":"2026-03-09T15:55:16.411914+0000","last_undegraded":"2026-03-09T15:55:16.411914+0000","last_fullsized":"2026-03-09T15:55:16.411914+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:48:14.737603+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611122+0000","last_change":"2026-03-09T15:54:17.383942+0000","last_active":"2026-03-09T15:54:42.611122+0000","last_peered":"2026-03-09T15:54:42.611122+0000","last_clean":"2026-03-09T15:54:42.611122+0000","last_became_active":"2026-03-09T15:54:17.377541+0000","last_became_peered":"2026-03-09T15:54:17.377541+0000","last_unstale":"2026-03-09T15:54:42.611122+0000","last_undegraded":"2026-03-09T15:54:42.611122+0000","last_fullsized":"2026-03-09T15:54:42.611122+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:11:47.679015+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","version":"61'12","reported_seq":50,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726140+0000","last_change":"2026-03-09T15:54:13.353669+0000","last_active":"2026-03-09T15:54:42.726140+0000","last_peered":"2026-03-09T15:54:42.726140+0000","last_clean":"2026-03-09T15:54:42.726140+0000","last_became_active":"2026-03-09T15:54:13.353571+0000","last_became_peered":"2026-03-09T15:54:13.353571+0000","last_unstale":"2026-03-09T15:54:42.726140+0000","last_undegraded":"2026-03-09T15:54:42.726140+0000","last_fullsized":"2026-03-09T15:54:42.726140+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:10:20.134042+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070447+0000","last_change":"2026-03-09T15:54:11.365353+0000","last_active":"2026-03-09T15:54:43.070447+0000","last_peered":"2026-03-09T15:54:43.070447+0000","last_clean":"2026-03-09T15:54:43.070447+0000","last_became_active":"2026-03-09T15:54:11.365023+0000","last_became_peered":"2026-03-09T15:54:11.365023+0000","last_unstale":"2026-03-09T15:54:43.070447+0000","last_undegraded":"2026-03-09T15:54:43.070447+0000","last_fullsized":"2026-03-09T15:54:43.070447+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:04:10.825623+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612917+0000","last_change":"2026-03-09T15:54:15.379001+0000","last_active":"2026-03-09T15:54:42.612917+0000","last_peered":"2026-03-09T15:54:42.612917+0000","last_clean":"2026-03-09T15:54:42.612917+0000","last_became_active":"2026-03-09T15:54:15.378796+0000","last_became_peered":"2026-03-09T15:54:15.378796+0000","last_unstale":"2026-03-09T15:54:42.612917+0000","last_undegraded":"2026-03-09T15:54:42.612917+0000","last_fullsized":"2026-03-09T15:54:42.612917+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:08:15.930760+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606449+0000","last_change":"2026-03-09T15:54:17.365054+0000","last_active":"2026-03-09T15:54:42.606449+0000","last_peered":"2026-03-09T15:54:42.606449+0000","last_clean":"2026-03-09T15:54:42.606449+0000","last_became_active":"2026-03-09T15:54:17.364925+0000","last_became_peered":"2026-03-09T15:54:17.364925+0000","last_unstale":"2026-03-09T15:54:42.606449+0000","last_undegraded":"2026-03-09T15:54:42.606449+0000","last_fullsized":"2026-03-09T15:54:42.606449+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:03:16.498916+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"61'12","reported_seq":45,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072345+0000","last_change":"2026-03-09T15:54:13.363234+0000","last_active":"2026-03-09T15:54:43.072345+0000","last_peered":"2026-03-09T15:54:43.072345+0000","last_clean":"2026-03-09T15:54:43.072345+0000","last_became_active":"2026-03-09T15:54:13.355094+0000","last_became_peered":"2026-03-09T15:54:13.355094+0000","last_unstale":"2026-03-09T15:54:43.072345+0000","last_undegraded":"2026-03-09T15:54:43.072345+0000","last_fullsized":"2026-03-09T15:54:43.072345+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:15:36.140709+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.075100+0000","last_change":"2026-03-09T15:54:11.360325+0000","last_active":"2026-03-09T15:54:43.075100+0000","last_peered":"2026-03-09T15:54:43.075100+0000","last_clean":"2026-03-09T15:54:43.075100+0000","last_became_active":"2026-03-09T15:54:11.360100+0000","last_became_peered":"2026-03-09T15:54:11.360100+0000","last_unstale":"2026-03-09T15:54:43.075100+0000","last_undegraded":"2026-03-09T15:54:43.075100+0000","last_fullsized":"2026-03-09T15:54:43.075100+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:11:35.282349+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1","version":"60'1","reported_seq":35,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726496+0000","last_change":"2026-03-09T15:54:19.477060+0000","last_active":"2026-03-09T15:54:42.726496+0000","last_peered":"2026-03-09T15:54:42.726496+0000","last_clean":"2026-03-09T15:54:42.726496+0000","last_became_active":"2026-03-09T15:54:13.352621+0000","last_became_peered":"2026-03-09T15:54:13.352621+0000","last_unstale":"2026-03-09T15:54:42.726496+0000","last_undegraded":"2026-03-09T15:54:42.726496+0000","last_fullsized":"2026-03-09T15:54:42.726496+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:57:19.014710+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00018030699999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"62'11","reported_seq":48,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:16.413855+0000","last_change":"2026-03-09T15:54:15.366416+0000","last_active":"2026-03-09T15:55:16.413855+0000","last_peered":"2026-03-09T15:55:16.413855+0000","last_clean":"2026-03-09T15:55:16.413855+0000","last_became_active":"2026-03-09T15:54:15.366261+0000","last_became_peered":"2026-03-09T15:54:15.366261+0000","last_unstale":"2026-03-09T15:55:16.413855+0000","last_undegraded":"2026-03-09T15:55:16.413855+0000","last_fullsized":"2026-03-09T15:55:16.413855+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:57:53.622224+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070139+0000","last_change":"2026-03-09T15:54:17.379918+0000","last_active":"2026-03-09T15:54:43.070139+0000","last_peered":"2026-03-09T15:54:43.070139+0000","last_clean":"2026-03-09T15:54:43.070139+0000","last_became_active":"2026-03-09T15:54:17.379722+0000","last_became_peered":"2026-03-09T15:54:17.379722+0000","last_unstale":"2026-03-09T15:54:43.070139+0000","last_undegraded":"2026-03-09T15:54:43.070139+0000","last_fullsized":"2026-03-09T15:54:43.070139+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:04:27.911032+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.7","version":"61'13","reported_seq":54,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.608110+0000","last_change":"2026-03-09T15:54:13.357614+0000","last_active":"2026-03-09T15:54:42.608110+0000","last_peered":"2026-03-09T15:54:42.608110+0000","last_clean":"2026-03-09T15:54:42.608110+0000","last_became_active":"2026-03-09T15:54:13.357443+0000","last_became_peered":"2026-03-09T15:54:13.357443+0000","last_unstale":"2026-03-09T15:54:42.608110+0000","last_undegraded":"2026-03-09T15:54:42.608110+0000","last_fullsized":"2026-03-09T15:54:42.608110+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:19:17.150066+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"54'1","reported_seq":32,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609944+0000","last_change":"2026-03-09T15:54:11.347738+0000","last_active":"2026-03-09T15:54:42.609944+0000","last_peered":"2026-03-09T15:54:42.609944+0000","last_clean":"2026-03-09T15:54:42.609944+0000","last_became_active":"2026-03-09T15:54:11.347617+0000","last_became_peered":"2026-03-09T15:54:11.347617+0000","last_unstale":"2026-03-09T15:54:42.609944+0000","last_undegraded":"2026-03-09T15:54:42.609944+0000","last_fullsized":"2026-03-09T15:54:42.609944+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:48:01.615624+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"62'5","reported_seq":100,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:22.194510+0000","last_change":"2026-03-09T15:54:19.476432+0000","last_active":"2026-03-09T15:55:22.194510+0000","last_peered":"2026-03-09T15:55:22.194510+0000","last_clean":"2026-03-09T15:55:22.194510+0000","last_became_active":"2026-03-09T15:54:13.356955+0000","last_became_peered":"2026-03-09T15:54:13.356955+0000","last_unstale":"2026-03-09T15:55:22.194510+0000","last_undegraded":"2026-03-09T15:55:22.194510+0000","last_fullsized":"2026-03-09T15:55:22.194510+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:36:51.751332+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000456298,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":63,"num_read_kb":58,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725688+0000","last_change":"2026-03-09T15:54:15.369299+0000","last_active":"2026-03-09T15:54:42.725688+0000","last_peered":"2026-03-09T15:54:42.725688+0000","last_clean":"2026-03-09T15:54:42.725688+0000","last_became_active":"2026-03-09T15:54:15.369181+0000","last_became_peered":"2026-03-09T15:54:15.369181+0000","last_unstale":"2026-03-09T15:54:42.725688+0000","last_undegraded":"2026-03-09T15:54:42.725688+0000","last_fullsized":"2026-03-09T15:54:42.725688+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:07:18.415359+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725678+0000","last_change":"2026-03-09T15:54:17.374330+0000","last_active":"2026-03-09T15:54:42.725678+0000","last_peered":"2026-03-09T15:54:42.725678+0000","last_clean":"2026-03-09T15:54:42.725678+0000","last_became_active":"2026-03-09T15:54:17.374141+0000","last_became_peered":"2026-03-09T15:54:17.374141+0000","last_unstale":"2026-03-09T15:54:42.725678+0000","last_undegraded":"2026-03-09T15:54:42.725678+0000","last_fullsized":"2026-03-09T15:54:42.725678+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:02:48.437540+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"62'30","reported_seq":92,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:16.413560+0000","last_change":"2026-03-09T15:54:13.362622+0000","last_active":"2026-03-09T15:55:16.413560+0000","last_peered":"2026-03-09T15:55:16.413560+0000","last_clean":"2026-03-09T15:55:16.413560+0000","last_became_active":"2026-03-09T15:54:13.362110+0000","last_became_peered":"2026-03-09T15:54:13.362110+0000","last_unstale":"2026-03-09T15:55:16.413560+0000","last_undegraded":"2026-03-09T15:55:16.413560+0000","last_fullsized":"2026-03-09T15:55:16.413560+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:40:55.419542+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.5","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070407+0000","last_change":"2026-03-09T15:54:11.362001+0000","last_active":"2026-03-09T15:54:43.070407+0000","last_peered":"2026-03-09T15:54:43.070407+0000","last_clean":"2026-03-09T15:54:43.070407+0000","last_became_active":"2026-03-09T15:54:11.361837+0000","last_became_peered":"2026-03-09T15:54:11.361837+0000","last_unstale":"2026-03-09T15:54:43.070407+0000","last_undegraded":"2026-03-09T15:54:43.070407+0000","last_fullsized":"2026-03-09T15:54:43.070407+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:02:01.367370+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.075346+0000","last_change":"2026-03-09T15:54:15.363894+0000","last_active":"2026-03-09T15:54:43.075346+0000","last_peered":"2026-03-09T15:54:43.075346+0000","last_clean":"2026-03-09T15:54:43.075346+0000","last_became_active":"2026-03-09T15:54:15.363784+0000","last_became_peered":"2026-03-09T15:54:15.363784+0000","last_unstale":"2026-03-09T15:54:43.075346+0000","last_undegraded":"2026-03-09T15:54:43.075346+0000","last_fullsized":"2026-03-09T15:54:43.075346+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:46:06.356956+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609298+0000","last_change":"2026-03-09T15:54:17.377576+0000","last_active":"2026-03-09T15:54:42.609298+0000","last_peered":"2026-03-09T15:54:42.609298+0000","last_clean":"2026-03-09T15:54:42.609298+0000","last_became_active":"2026-03-09T15:54:17.377475+0000","last_became_peered":"2026-03-09T15:54:17.377475+0000","last_unstale":"2026-03-09T15:54:42.609298+0000","last_undegraded":"2026-03-09T15:54:42.609298+0000","last_fullsized":"2026-03-09T15:54:42.609298+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:17:00.658582+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"61'16","reported_seq":64,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:16.412330+0000","last_change":"2026-03-09T15:54:13.357688+0000","last_active":"2026-03-09T15:55:16.412330+0000","last_peered":"2026-03-09T15:55:16.412330+0000","last_clean":"2026-03-09T15:55:16.412330+0000","last_became_active":"2026-03-09T15:54:13.357617+0000","last_became_peered":"2026-03-09T15:54:13.357617+0000","last_unstale":"2026-03-09T15:55:16.412330+0000","last_undegraded":"2026-03-09T15:55:16.412330+0000","last_fullsized":"2026-03-09T15:55:16.412330+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:34:47.013985+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610019+0000","last_change":"2026-03-09T15:54:11.364488+0000","last_active":"2026-03-09T15:54:42.610019+0000","last_peered":"2026-03-09T15:54:42.610019+0000","last_clean":"2026-03-09T15:54:42.610019+0000","last_became_active":"2026-03-09T15:54:11.363805+0000","last_became_peered":"2026-03-09T15:54:11.363805+0000","last_unstale":"2026-03-09T15:54:42.610019+0000","last_undegraded":"2026-03-09T15:54:42.610019+0000","last_fullsized":"2026-03-09T15:54:42.610019+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:11:37.607532+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"62'2","reported_seq":36,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610084+0000","last_change":"2026-03-09T15:54:19.473850+0000","last_active":"2026-03-09T15:54:42.610084+0000","last_peered":"2026-03-09T15:54:42.610084+0000","last_clean":"2026-03-09T15:54:42.610084+0000","last_became_active":"2026-03-09T15:54:13.358993+0000","last_became_peered":"2026-03-09T15:54:13.358993+0000","last_unstale":"2026-03-09T15:54:42.610084+0000","last_undegraded":"2026-03-09T15:54:42.610084+0000","last_fullsized":"2026-03-09T15:54:42.610084+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:25:54.693820+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00059841400000000002,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.3","version":"62'11","reported_seq":48,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:16.413506+0000","last_change":"2026-03-09T15:54:15.383386+0000","last_active":"2026-03-09T15:55:16.413506+0000","last_peered":"2026-03-09T15:55:16.413506+0000","last_clean":"2026-03-09T15:55:16.413506+0000","last_became_active":"2026-03-09T15:54:15.382963+0000","last_became_peered":"2026-03-09T15:54:15.382963+0000","last_unstale":"2026-03-09T15:55:16.413506+0000","last_undegraded":"2026-03-09T15:55:16.413506+0000","last_fullsized":"2026-03-09T15:55:16.413506+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:35:57.637932+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071487+0000","last_change":"2026-03-09T15:54:17.384926+0000","last_active":"2026-03-09T15:54:43.071487+0000","last_peered":"2026-03-09T15:54:43.071487+0000","last_clean":"2026-03-09T15:54:43.071487+0000","last_became_active":"2026-03-09T15:54:17.384399+0000","last_became_peered":"2026-03-09T15:54:17.384399+0000","last_unstale":"2026-03-09T15:54:43.071487+0000","last_undegraded":"2026-03-09T15:54:43.071487+0000","last_fullsized":"2026-03-09T15:54:43.071487+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:26:38.376356+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"61'19","reported_seq":63,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725458+0000","last_change":"2026-03-09T15:54:13.361579+0000","last_active":"2026-03-09T15:54:42.725458+0000","last_peered":"2026-03-09T15:54:42.725458+0000","last_clean":"2026-03-09T15:54:42.725458+0000","last_became_active":"2026-03-09T15:54:13.361434+0000","last_became_peered":"2026-03-09T15:54:13.361434+0000","last_unstale":"2026-03-09T15:54:42.725458+0000","last_undegraded":"2026-03-09T15:54:42.725458+0000","last_fullsized":"2026-03-09T15:54:42.725458+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:24:22.376482+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612390+0000","last_change":"2026-03-09T15:54:11.349770+0000","last_active":"2026-03-09T15:54:42.612390+0000","last_peered":"2026-03-09T15:54:42.612390+0000","last_clean":"2026-03-09T15:54:42.612390+0000","last_became_active":"2026-03-09T15:54:11.349631+0000","last_became_peered":"2026-03-09T15:54:11.349631+0000","last_unstale":"2026-03-09T15:54:42.612390+0000","last_undegraded":"2026-03-09T15:54:42.612390+0000","last_fullsized":"2026-03-09T15:54:42.612390+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:39:36.701559+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071651+0000","last_change":"2026-03-09T15:54:15.383483+0000","last_active":"2026-03-09T15:54:43.071651+0000","last_peered":"2026-03-09T15:54:43.071651+0000","last_clean":"2026-03-09T15:54:43.071651+0000","last_became_active":"2026-03-09T15:54:15.383127+0000","last_became_peered":"2026-03-09T15:54:15.383127+0000","last_unstale":"2026-03-09T15:54:43.071651+0000","last_undegraded":"2026-03-09T15:54:43.071651+0000","last_fullsized":"2026-03-09T15:54:43.071651+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:54:07.441803+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606614+0000","last_change":"2026-03-09T15:54:17.385586+0000","last_active":"2026-03-09T15:54:42.606614+0000","last_peered":"2026-03-09T15:54:42.606614+0000","last_clean":"2026-03-09T15:54:42.606614+0000","last_became_active":"2026-03-09T15:54:17.379501+0000","last_became_peered":"2026-03-09T15:54:17.379501+0000","last_unstale":"2026-03-09T15:54:42.606614+0000","last_undegraded":"2026-03-09T15:54:42.606614+0000","last_fullsized":"2026-03-09T15:54:42.606614+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:16:22.811449+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","version":"61'18","reported_seq":59,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609404+0000","last_change":"2026-03-09T15:54:13.362741+0000","last_active":"2026-03-09T15:54:42.609404+0000","last_peered":"2026-03-09T15:54:42.609404+0000","last_clean":"2026-03-09T15:54:42.609404+0000","last_became_active":"2026-03-09T15:54:13.362250+0000","last_became_peered":"2026-03-09T15:54:13.362250+0000","last_unstale":"2026-03-09T15:54:42.609404+0000","last_undegraded":"2026-03-09T15:54:42.609404+0000","last_fullsized":"2026-03-09T15:54:42.609404+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:58:53.163207+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073422+0000","last_change":"2026-03-09T15:54:11.346346+0000","last_active":"2026-03-09T15:54:43.073422+0000","last_peered":"2026-03-09T15:54:43.073422+0000","last_clean":"2026-03-09T15:54:43.073422+0000","last_became_active":"2026-03-09T15:54:11.346081+0000","last_became_peered":"2026-03-09T15:54:11.346081+0000","last_unstale":"2026-03-09T15:54:43.073422+0000","last_undegraded":"2026-03-09T15:54:43.073422+0000","last_fullsized":"2026-03-09T15:54:43.073422+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:38:59.379815+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073504+0000","last_change":"2026-03-09T15:54:15.387229+0000","last_active":"2026-03-09T15:54:43.073504+0000","last_peered":"2026-03-09T15:54:43.073504+0000","last_clean":"2026-03-09T15:54:43.073504+0000","last_became_active":"2026-03-09T15:54:15.377757+0000","last_became_peered":"2026-03-09T15:54:15.377757+0000","last_unstale":"2026-03-09T15:54:43.073504+0000","last_undegraded":"2026-03-09T15:54:43.073504+0000","last_fullsized":"2026-03-09T15:54:43.073504+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:54:46.791680+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071621+0000","last_change":"2026-03-09T15:54:17.389850+0000","last_active":"2026-03-09T15:54:43.071621+0000","last_peered":"2026-03-09T15:54:43.071621+0000","last_clean":"2026-03-09T15:54:43.071621+0000","last_became_active":"2026-03-09T15:54:17.389614+0000","last_became_peered":"2026-03-09T15:54:17.389614+0000","last_unstale":"2026-03-09T15:54:43.071621+0000","last_undegraded":"2026-03-09T15:54:43.071621+0000","last_fullsized":"2026-03-09T15:54:43.071621+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:29:45.220106+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"62'14","reported_seq":48,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072243+0000","last_change":"2026-03-09T15:54:13.362624+0000","last_active":"2026-03-09T15:54:43.072243+0000","last_peered":"2026-03-09T15:54:43.072243+0000","last_clean":"2026-03-09T15:54:43.072243+0000","last_became_active":"2026-03-09T15:54:13.354383+0000","last_became_peered":"2026-03-09T15:54:13.354383+0000","last_unstale":"2026-03-09T15:54:43.072243+0000","last_undegraded":"2026-03-09T15:54:43.072243+0000","last_fullsized":"2026-03-09T15:54:43.072243+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:04:11.930205+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070780+0000","last_change":"2026-03-09T15:54:11.365192+0000","last_active":"2026-03-09T15:54:43.070780+0000","last_peered":"2026-03-09T15:54:43.070780+0000","last_clean":"2026-03-09T15:54:43.070780+0000","last_became_active":"2026-03-09T15:54:11.362397+0000","last_became_peered":"2026-03-09T15:54:11.362397+0000","last_unstale":"2026-03-09T15:54:43.070780+0000","last_undegraded":"2026-03-09T15:54:43.070780+0000","last_fullsized":"2026-03-09T15:54:43.070780+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:58:01.912069+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612683+0000","last_change":"2026-03-09T15:54:15.383701+0000","last_active":"2026-03-09T15:54:42.612683+0000","last_peered":"2026-03-09T15:54:42.612683+0000","last_clean":"2026-03-09T15:54:42.612683+0000","last_became_active":"2026-03-09T15:54:15.383597+0000","last_became_peered":"2026-03-09T15:54:15.383597+0000","last_unstale":"2026-03-09T15:54:42.612683+0000","last_undegraded":"2026-03-09T15:54:42.612683+0000","last_fullsized":"2026-03-09T15:54:42.612683+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:36:56.046748+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609737+0000","last_change":"2026-03-09T15:54:17.391377+0000","last_active":"2026-03-09T15:54:42.609737+0000","last_peered":"2026-03-09T15:54:42.609737+0000","last_clean":"2026-03-09T15:54:42.609737+0000","last_became_active":"2026-03-09T15:54:17.391273+0000","last_became_peered":"2026-03-09T15:54:17.391273+0000","last_unstale":"2026-03-09T15:54:42.609737+0000","last_undegraded":"2026-03-09T15:54:42.609737+0000","last_fullsized":"2026-03-09T15:54:42.609737+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:12:34.558535+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"61'10","reported_seq":42,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.608028+0000","last_change":"2026-03-09T15:54:13.357209+0000","last_active":"2026-03-09T15:54:42.608028+0000","last_peered":"2026-03-09T15:54:42.608028+0000","last_clean":"2026-03-09T15:54:42.608028+0000","last_became_active":"2026-03-09T15:54:13.356177+0000","last_became_peered":"2026-03-09T15:54:13.356177+0000","last_unstale":"2026-03-09T15:54:42.608028+0000","last_undegraded":"2026-03-09T15:54:42.608028+0000","last_fullsized":"2026-03-09T15:54:42.608028+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:32:54.942043+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611562+0000","last_change":"2026-03-09T15:54:11.363282+0000","last_active":"2026-03-09T15:54:42.611562+0000","last_peered":"2026-03-09T15:54:42.611562+0000","last_clean":"2026-03-09T15:54:42.611562+0000","last_became_active":"2026-03-09T15:54:11.363009+0000","last_became_peered":"2026-03-09T15:54:11.363009+0000","last_unstale":"2026-03-09T15:54:42.611562+0000","last_undegraded":"2026-03-09T15:54:42.611562+0000","last_fullsized":"2026-03-09T15:54:42.611562+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:26:38.451615+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"64'39","reported_seq":66,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:44.722666+0000","last_change":"2026-03-09T15:53:51.503795+0000","last_active":"2026-03-09T15:54:44.722666+0000","last_peered":"2026-03-09T15:54:44.722666+0000","last_clean":"2026-03-09T15:54:44.722666+0000","last_became_active":"2026-03-09T15:53:51.497232+0000","last_became_peered":"2026-03-09T15:53:51.497232+0000","last_unstale":"2026-03-09T15:54:44.722666+0000","last_undegraded":"2026-03-09T15:54:44.722666+0000","last_fullsized":"2026-03-09T15:54:44.722666+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:51:01.337312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:51:01.337312+0000","last_clean_scrub_stamp":"2026-03-09T15:51:01.337312+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:13:46.716016+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070724+0000","last_change":"2026-03-09T15:54:15.386706+0000","last_active":"2026-03-09T15:54:43.070724+0000","last_peered":"2026-03-09T15:54:43.070724+0000","last_clean":"2026-03-09T15:54:43.070724+0000","last_became_active":"2026-03-09T15:54:15.386260+0000","last_became_peered":"2026-03-09T15:54:15.386260+0000","last_unstale":"2026-03-09T15:54:43.070724+0000","last_undegraded":"2026-03-09T15:54:43.070724+0000","last_fullsized":"2026-03-09T15:54:43.070724+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:18:52.112469+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611485+0000","last_change":"2026-03-09T15:54:17.383779+0000","last_active":"2026-03-09T15:54:42.611485+0000","last_peered":"2026-03-09T15:54:42.611485+0000","last_clean":"2026-03-09T15:54:42.611485+0000","last_became_active":"2026-03-09T15:54:17.377406+0000","last_became_peered":"2026-03-09T15:54:17.377406+0000","last_unstale":"2026-03-09T15:54:42.611485+0000","last_undegraded":"2026-03-09T15:54:42.611485+0000","last_fullsized":"2026-03-09T15:54:42.611485+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:01:05.220739+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"61'17","reported_seq":55,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.069888+0000","last_change":"2026-03-09T15:54:13.354702+0000","last_active":"2026-03-09T15:54:43.069888+0000","last_peered":"2026-03-09T15:54:43.069888+0000","last_clean":"2026-03-09T15:54:43.069888+0000","last_became_active":"2026-03-09T15:54:13.354434+0000","last_became_peered":"2026-03-09T15:54:13.354434+0000","last_unstale":"2026-03-09T15:54:43.069888+0000","last_undegraded":"2026-03-09T15:54:43.069888+0000","last_fullsized":"2026-03-09T15:54:43.069888+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:26:03.884511+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073470+0000","last_change":"2026-03-09T15:54:11.345850+0000","last_active":"2026-03-09T15:54:43.073470+0000","last_peered":"2026-03-09T15:54:43.073470+0000","last_clean":"2026-03-09T15:54:43.073470+0000","last_became_active":"2026-03-09T15:54:11.345595+0000","last_became_peered":"2026-03-09T15:54:11.345595+0000","last_unstale":"2026-03-09T15:54:43.073470+0000","last_undegraded":"2026-03-09T15:54:43.073470+0000","last_fullsized":"2026-03-09T15:54:43.073470+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:55:46.131966+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073394+0000","last_change":"2026-03-09T15:54:15.378750+0000","last_active":"2026-03-09T15:54:43.073394+0000","last_peered":"2026-03-09T15:54:43.073394+0000","last_clean":"2026-03-09T15:54:43.073394+0000","last_became_active":"2026-03-09T15:54:15.378016+0000","last_became_peered":"2026-03-09T15:54:15.378016+0000","last_unstale":"2026-03-09T15:54:43.073394+0000","last_undegraded":"2026-03-09T15:54:43.073394+0000","last_fullsized":"2026-03-09T15:54:43.073394+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:47:00.475914+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.069916+0000","last_change":"2026-03-09T15:54:17.384669+0000","last_active":"2026-03-09T15:54:43.069916+0000","last_peered":"2026-03-09T15:54:43.069916+0000","last_clean":"2026-03-09T15:54:43.069916+0000","last_became_active":"2026-03-09T15:54:17.384573+0000","last_became_peered":"2026-03-09T15:54:17.384573+0000","last_unstale":"2026-03-09T15:54:43.069916+0000","last_undegraded":"2026-03-09T15:54:43.069916+0000","last_fullsized":"2026-03-09T15:54:43.069916+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:17:01.831418+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"61'10","reported_seq":42,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612431+0000","last_change":"2026-03-09T15:54:13.357440+0000","last_active":"2026-03-09T15:54:42.612431+0000","last_peered":"2026-03-09T15:54:42.612431+0000","last_clean":"2026-03-09T15:54:42.612431+0000","last_became_active":"2026-03-09T15:54:13.357356+0000","last_became_peered":"2026-03-09T15:54:13.357356+0000","last_unstale":"2026-03-09T15:54:42.612431+0000","last_undegraded":"2026-03-09T15:54:42.612431+0000","last_fullsized":"2026-03-09T15:54:42.612431+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:01:58.187263+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609805+0000","last_change":"2026-03-09T15:54:11.342596+0000","last_active":"2026-03-09T15:54:42.609805+0000","last_peered":"2026-03-09T15:54:42.609805+0000","last_clean":"2026-03-09T15:54:42.609805+0000","last_became_active":"2026-03-09T15:54:11.341331+0000","last_became_peered":"2026-03-09T15:54:11.341331+0000","last_unstale":"2026-03-09T15:54:42.609805+0000","last_undegraded":"2026-03-09T15:54:42.609805+0000","last_fullsized":"2026-03-09T15:54:42.609805+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:57:09.711591+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073832+0000","last_change":"2026-03-09T15:54:15.380894+0000","last_active":"2026-03-09T15:54:43.073832+0000","last_peered":"2026-03-09T15:54:43.073832+0000","last_clean":"2026-03-09T15:54:43.073832+0000","last_became_active":"2026-03-09T15:54:15.380635+0000","last_became_peered":"2026-03-09T15:54:15.380635+0000","last_unstale":"2026-03-09T15:54:43.073832+0000","last_undegraded":"2026-03-09T15:54:43.073832+0000","last_fullsized":"2026-03-09T15:54:43.073832+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:41:01.786993+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070955+0000","last_change":"2026-03-09T15:54:17.376658+0000","last_active":"2026-03-09T15:54:43.070955+0000","last_peered":"2026-03-09T15:54:43.070955+0000","last_clean":"2026-03-09T15:54:43.070955+0000","last_became_active":"2026-03-09T15:54:17.376340+0000","last_became_peered":"2026-03-09T15:54:17.376340+0000","last_unstale":"2026-03-09T15:54:43.070955+0000","last_undegraded":"2026-03-09T15:54:43.070955+0000","last_fullsized":"2026-03-09T15:54:43.070955+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:06:31.736886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","version":"62'15","reported_seq":52,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070861+0000","last_change":"2026-03-09T15:54:13.354792+0000","last_active":"2026-03-09T15:54:43.070861+0000","last_peered":"2026-03-09T15:54:43.070861+0000","last_clean":"2026-03-09T15:54:43.070861+0000","last_became_active":"2026-03-09T15:54:13.354551+0000","last_became_peered":"2026-03-09T15:54:13.354551+0000","last_unstale":"2026-03-09T15:54:43.070861+0000","last_undegraded":"2026-03-09T15:54:43.070861+0000","last_fullsized":"2026-03-09T15:54:43.070861+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:23:28.330477+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073333+0000","last_change":"2026-03-09T15:54:11.359936+0000","last_active":"2026-03-09T15:54:43.073333+0000","last_peered":"2026-03-09T15:54:43.073333+0000","last_clean":"2026-03-09T15:54:43.073333+0000","last_became_active":"2026-03-09T15:54:11.359674+0000","last_became_peered":"2026-03-09T15:54:11.359674+0000","last_unstale":"2026-03-09T15:54:43.073333+0000","last_undegraded":"2026-03-09T15:54:43.073333+0000","last_fullsized":"2026-03-09T15:54:43.073333+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:14:35.621046+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"62'11","reported_seq":48,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:16.411921+0000","last_change":"2026-03-09T15:54:15.377490+0000","last_active":"2026-03-09T15:55:16.411921+0000","last_peered":"2026-03-09T15:55:16.411921+0000","last_clean":"2026-03-09T15:55:16.411921+0000","last_became_active":"2026-03-09T15:54:15.376887+0000","last_became_peered":"2026-03-09T15:54:15.376887+0000","last_unstale":"2026-03-09T15:55:16.411921+0000","last_undegraded":"2026-03-09T15:55:16.411921+0000","last_fullsized":"2026-03-09T15:55:16.411921+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:53:19.135979+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609820+0000","last_change":"2026-03-09T15:54:17.384058+0000","last_active":"2026-03-09T15:54:42.609820+0000","last_peered":"2026-03-09T15:54:42.609820+0000","last_clean":"2026-03-09T15:54:42.609820+0000","last_became_active":"2026-03-09T15:54:17.377722+0000","last_became_peered":"2026-03-09T15:54:17.377722+0000","last_unstale":"2026-03-09T15:54:42.609820+0000","last_undegraded":"2026-03-09T15:54:42.609820+0000","last_fullsized":"2026-03-09T15:54:42.609820+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:22:47.509069+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"62'11","reported_seq":46,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070203+0000","last_change":"2026-03-09T15:54:13.351089+0000","last_active":"2026-03-09T15:54:43.070203+0000","last_peered":"2026-03-09T15:54:43.070203+0000","last_clean":"2026-03-09T15:54:43.070203+0000","last_became_active":"2026-03-09T15:54:13.350994+0000","last_became_peered":"2026-03-09T15:54:13.350994+0000","last_unstale":"2026-03-09T15:54:43.070203+0000","last_undegraded":"2026-03-09T15:54:43.070203+0000","last_fullsized":"2026-03-09T15:54:43.070203+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:10:39.839271+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"54'3","reported_seq":55,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725776+0000","last_change":"2026-03-09T15:54:11.357983+0000","last_active":"2026-03-09T15:54:42.725776+0000","last_peered":"2026-03-09T15:54:42.725776+0000","last_clean":"2026-03-09T15:54:42.725776+0000","last_became_active":"2026-03-09T15:54:11.357812+0000","last_became_peered":"2026-03-09T15:54:11.357812+0000","last_unstale":"2026-03-09T15:54:42.725776+0000","last_undegraded":"2026-03-09T15:54:42.725776+0000","last_fullsized":"2026-03-09T15:54:42.725776+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":3,"log_dups_size":0,"ondisk_log_size":3,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:53:07.452340+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":1085,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":24,"num_read_kb":24,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073717+0000","last_change":"2026-03-09T15:54:15.380796+0000","last_active":"2026-03-09T15:54:43.073717+0000","last_peered":"2026-03-09T15:54:43.073717+0000","last_clean":"2026-03-09T15:54:43.073717+0000","last_became_active":"2026-03-09T15:54:15.380402+0000","last_became_peered":"2026-03-09T15:54:15.380402+0000","last_unstale":"2026-03-09T15:54:43.073717+0000","last_undegraded":"2026-03-09T15:54:43.073717+0000","last_fullsized":"2026-03-09T15:54:43.073717+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:17:26.492706+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606518+0000","last_change":"2026-03-09T15:54:17.386081+0000","last_active":"2026-03-09T15:54:42.606518+0000","last_peered":"2026-03-09T15:54:42.606518+0000","last_clean":"2026-03-09T15:54:42.606518+0000","last_became_active":"2026-03-09T15:54:17.385979+0000","last_became_peered":"2026-03-09T15:54:17.385979+0000","last_unstale":"2026-03-09T15:54:42.606518+0000","last_undegraded":"2026-03-09T15:54:42.606518+0000","last_fullsized":"2026-03-09T15:54:42.606518+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:41:44.332789+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"62'11","reported_seq":46,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.069831+0000","last_change":"2026-03-09T15:54:13.354701+0000","last_active":"2026-03-09T15:54:43.069831+0000","last_peered":"2026-03-09T15:54:43.069831+0000","last_clean":"2026-03-09T15:54:43.069831+0000","last_became_active":"2026-03-09T15:54:13.354275+0000","last_became_peered":"2026-03-09T15:54:13.354275+0000","last_unstale":"2026-03-09T15:54:43.069831+0000","last_undegraded":"2026-03-09T15:54:43.069831+0000","last_fullsized":"2026-03-09T15:54:43.069831+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:46:13.709201+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.10","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073248+0000","last_change":"2026-03-09T15:54:11.346351+0000","last_active":"2026-03-09T15:54:43.073248+0000","last_peered":"2026-03-09T15:54:43.073248+0000","last_clean":"2026-03-09T15:54:43.073248+0000","last_became_active":"2026-03-09T15:54:11.345897+0000","last_became_peered":"2026-03-09T15:54:11.345897+0000","last_unstale":"2026-03-09T15:54:43.073248+0000","last_undegraded":"2026-03-09T15:54:43.073248+0000","last_fullsized":"2026-03-09T15:54:43.073248+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:12:18.097678+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606737+0000","last_change":"2026-03-09T15:54:15.372775+0000","last_active":"2026-03-09T15:54:42.606737+0000","last_peered":"2026-03-09T15:54:42.606737+0000","last_clean":"2026-03-09T15:54:42.606737+0000","last_became_active":"2026-03-09T15:54:15.372480+0000","last_became_peered":"2026-03-09T15:54:15.372480+0000","last_unstale":"2026-03-09T15:54:42.606737+0000","last_undegraded":"2026-03-09T15:54:42.606737+0000","last_fullsized":"2026-03-09T15:54:42.606737+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:42:23.783965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073279+0000","last_change":"2026-03-09T15:54:17.382329+0000","last_active":"2026-03-09T15:54:43.073279+0000","last_peered":"2026-03-09T15:54:43.073279+0000","last_clean":"2026-03-09T15:54:43.073279+0000","last_became_active":"2026-03-09T15:54:17.382204+0000","last_became_peered":"2026-03-09T15:54:17.382204+0000","last_unstale":"2026-03-09T15:54:43.073279+0000","last_undegraded":"2026-03-09T15:54:43.073279+0000","last_fullsized":"2026-03-09T15:54:43.073279+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:53:02.263771+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","version":"61'4","reported_seq":33,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.075048+0000","last_change":"2026-03-09T15:54:13.372099+0000","last_active":"2026-03-09T15:54:43.075048+0000","last_peered":"2026-03-09T15:54:43.075048+0000","last_clean":"2026-03-09T15:54:43.075048+0000","last_became_active":"2026-03-09T15:54:13.371765+0000","last_became_peered":"2026-03-09T15:54:13.371765+0000","last_unstale":"2026-03-09T15:54:43.075048+0000","last_undegraded":"2026-03-09T15:54:43.075048+0000","last_fullsized":"2026-03-09T15:54:43.075048+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:15:07.689265+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.075024+0000","last_change":"2026-03-09T15:54:11.347507+0000","last_active":"2026-03-09T15:54:43.075024+0000","last_peered":"2026-03-09T15:54:43.075024+0000","last_clean":"2026-03-09T15:54:43.075024+0000","last_became_active":"2026-03-09T15:54:11.347321+0000","last_became_peered":"2026-03-09T15:54:11.347321+0000","last_unstale":"2026-03-09T15:54:43.075024+0000","last_undegraded":"2026-03-09T15:54:43.075024+0000","last_fullsized":"2026-03-09T15:54:43.075024+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:50:28.524571+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612774+0000","last_change":"2026-03-09T15:54:15.373252+0000","last_active":"2026-03-09T15:54:42.612774+0000","last_peered":"2026-03-09T15:54:42.612774+0000","last_clean":"2026-03-09T15:54:42.612774+0000","last_became_active":"2026-03-09T15:54:15.373123+0000","last_became_peered":"2026-03-09T15:54:15.373123+0000","last_unstale":"2026-03-09T15:54:42.612774+0000","last_undegraded":"2026-03-09T15:54:42.612774+0000","last_fullsized":"2026-03-09T15:54:42.612774+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:42:00.869112+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071694+0000","last_change":"2026-03-09T15:54:17.392767+0000","last_active":"2026-03-09T15:54:43.071694+0000","last_peered":"2026-03-09T15:54:43.071694+0000","last_clean":"2026-03-09T15:54:43.071694+0000","last_became_active":"2026-03-09T15:54:17.391362+0000","last_became_peered":"2026-03-09T15:54:17.391362+0000","last_unstale":"2026-03-09T15:54:43.071694+0000","last_undegraded":"2026-03-09T15:54:43.071694+0000","last_fullsized":"2026-03-09T15:54:43.071694+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:53:11.991377+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"61'11","reported_seq":46,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071065+0000","last_change":"2026-03-09T15:54:13.345821+0000","last_active":"2026-03-09T15:54:43.071065+0000","last_peered":"2026-03-09T15:54:43.071065+0000","last_clean":"2026-03-09T15:54:43.071065+0000","last_became_active":"2026-03-09T15:54:13.345704+0000","last_became_peered":"2026-03-09T15:54:13.345704+0000","last_unstale":"2026-03-09T15:54:43.071065+0000","last_undegraded":"2026-03-09T15:54:43.071065+0000","last_fullsized":"2026-03-09T15:54:43.071065+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:55:41.783650+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612127+0000","last_change":"2026-03-09T15:54:11.363414+0000","last_active":"2026-03-09T15:54:42.612127+0000","last_peered":"2026-03-09T15:54:42.612127+0000","last_clean":"2026-03-09T15:54:42.612127+0000","last_became_active":"2026-03-09T15:54:11.362873+0000","last_became_peered":"2026-03-09T15:54:11.362873+0000","last_unstale":"2026-03-09T15:54:42.612127+0000","last_undegraded":"2026-03-09T15:54:42.612127+0000","last_fullsized":"2026-03-09T15:54:42.612127+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:00:46.857159+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.15","version":"62'11","reported_seq":48,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:16.412255+0000","last_change":"2026-03-09T15:54:15.385258+0000","last_active":"2026-03-09T15:55:16.412255+0000","last_peered":"2026-03-09T15:55:16.412255+0000","last_clean":"2026-03-09T15:55:16.412255+0000","last_became_active":"2026-03-09T15:54:15.385077+0000","last_became_peered":"2026-03-09T15:54:15.385077+0000","last_unstale":"2026-03-09T15:55:16.412255+0000","last_undegraded":"2026-03-09T15:55:16.412255+0000","last_fullsized":"2026-03-09T15:55:16.412255+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:16:17.940588+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071075+0000","last_change":"2026-03-09T15:54:17.384611+0000","last_active":"2026-03-09T15:54:43.071075+0000","last_peered":"2026-03-09T15:54:43.071075+0000","last_clean":"2026-03-09T15:54:43.071075+0000","last_became_active":"2026-03-09T15:54:17.384473+0000","last_became_peered":"2026-03-09T15:54:17.384473+0000","last_unstale":"2026-03-09T15:54:43.071075+0000","last_undegraded":"2026-03-09T15:54:43.071075+0000","last_fullsized":"2026-03-09T15:54:43.071075+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:10:41.697037+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071386+0000","last_change":"2026-03-09T15:54:13.360501+0000","last_active":"2026-03-09T15:54:43.071386+0000","last_peered":"2026-03-09T15:54:43.071386+0000","last_clean":"2026-03-09T15:54:43.071386+0000","last_became_active":"2026-03-09T15:54:13.354566+0000","last_became_peered":"2026-03-09T15:54:13.354566+0000","last_unstale":"2026-03-09T15:54:43.071386+0000","last_undegraded":"2026-03-09T15:54:43.071386+0000","last_fullsized":"2026-03-09T15:54:43.071386+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:28:10.760258+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.13","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071336+0000","last_change":"2026-03-09T15:54:11.351023+0000","last_active":"2026-03-09T15:54:43.071336+0000","last_peered":"2026-03-09T15:54:43.071336+0000","last_clean":"2026-03-09T15:54:43.071336+0000","last_became_active":"2026-03-09T15:54:11.350784+0000","last_became_peered":"2026-03-09T15:54:11.350784+0000","last_unstale":"2026-03-09T15:54:43.071336+0000","last_undegraded":"2026-03-09T15:54:43.071336+0000","last_fullsized":"2026-03-09T15:54:43.071336+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:05:15.491466+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"62'11","reported_seq":51,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:16.413952+0000","last_change":"2026-03-09T15:54:15.372812+0000","last_active":"2026-03-09T15:55:16.413952+0000","last_peered":"2026-03-09T15:55:16.413952+0000","last_clean":"2026-03-09T15:55:16.413952+0000","last_became_active":"2026-03-09T15:54:15.372568+0000","last_became_peered":"2026-03-09T15:54:15.372568+0000","last_unstale":"2026-03-09T15:55:16.413952+0000","last_undegraded":"2026-03-09T15:55:16.413952+0000","last_fullsized":"2026-03-09T15:55:16.413952+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:38:50.990948+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725622+0000","last_change":"2026-03-09T15:54:17.390758+0000","last_active":"2026-03-09T15:54:42.725622+0000","last_peered":"2026-03-09T15:54:42.725622+0000","last_clean":"2026-03-09T15:54:42.725622+0000","last_became_active":"2026-03-09T15:54:17.388401+0000","last_became_peered":"2026-03-09T15:54:17.388401+0000","last_unstale":"2026-03-09T15:54:42.725622+0000","last_undegraded":"2026-03-09T15:54:42.725622+0000","last_fullsized":"2026-03-09T15:54:42.725622+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:09:23.742975+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070090+0000","last_change":"2026-03-09T15:54:13.346205+0000","last_active":"2026-03-09T15:54:43.070090+0000","last_peered":"2026-03-09T15:54:43.070090+0000","last_clean":"2026-03-09T15:54:43.070090+0000","last_became_active":"2026-03-09T15:54:13.346112+0000","last_became_peered":"2026-03-09T15:54:13.346112+0000","last_unstale":"2026-03-09T15:54:43.070090+0000","last_undegraded":"2026-03-09T15:54:43.070090+0000","last_fullsized":"2026-03-09T15:54:43.070090+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:32:03.527436+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072944+0000","last_change":"2026-03-09T15:54:11.355245+0000","last_active":"2026-03-09T15:54:43.072944+0000","last_peered":"2026-03-09T15:54:43.072944+0000","last_clean":"2026-03-09T15:54:43.072944+0000","last_became_active":"2026-03-09T15:54:11.355030+0000","last_became_peered":"2026-03-09T15:54:11.355030+0000","last_unstale":"2026-03-09T15:54:43.072944+0000","last_undegraded":"2026-03-09T15:54:43.072944+0000","last_fullsized":"2026-03-09T15:54:43.072944+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:34:03.269455+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607585+0000","last_change":"2026-03-09T15:54:15.363680+0000","last_active":"2026-03-09T15:54:42.607585+0000","last_peered":"2026-03-09T15:54:42.607585+0000","last_clean":"2026-03-09T15:54:42.607585+0000","last_became_active":"2026-03-09T15:54:15.363338+0000","last_became_peered":"2026-03-09T15:54:15.363338+0000","last_unstale":"2026-03-09T15:54:42.607585+0000","last_undegraded":"2026-03-09T15:54:42.607585+0000","last_fullsized":"2026-03-09T15:54:42.607585+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:28:47.491881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"61'1","reported_seq":20,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071199+0000","last_change":"2026-03-09T15:54:17.370138+0000","last_active":"2026-03-09T15:54:43.071199+0000","last_peered":"2026-03-09T15:54:43.071199+0000","last_clean":"2026-03-09T15:54:43.071199+0000","last_became_active":"2026-03-09T15:54:17.370019+0000","last_became_peered":"2026-03-09T15:54:17.370019+0000","last_unstale":"2026-03-09T15:54:43.071199+0000","last_undegraded":"2026-03-09T15:54:43.071199+0000","last_fullsized":"2026-03-09T15:54:43.071199+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:23:08.190448+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"61'10","reported_seq":42,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726199+0000","last_change":"2026-03-09T15:54:13.353249+0000","last_active":"2026-03-09T15:54:42.726199+0000","last_peered":"2026-03-09T15:54:42.726199+0000","last_clean":"2026-03-09T15:54:42.726199+0000","last_became_active":"2026-03-09T15:54:13.352935+0000","last_became_peered":"2026-03-09T15:54:13.352935+0000","last_unstale":"2026-03-09T15:54:42.726199+0000","last_undegraded":"2026-03-09T15:54:42.726199+0000","last_fullsized":"2026-03-09T15:54:42.726199+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:47:44.870051+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609545+0000","last_change":"2026-03-09T15:54:11.343303+0000","last_active":"2026-03-09T15:54:42.609545+0000","last_peered":"2026-03-09T15:54:42.609545+0000","last_clean":"2026-03-09T15:54:42.609545+0000","last_became_active":"2026-03-09T15:54:11.342666+0000","last_became_peered":"2026-03-09T15:54:11.342666+0000","last_unstale":"2026-03-09T15:54:42.609545+0000","last_undegraded":"2026-03-09T15:54:42.609545+0000","last_fullsized":"2026-03-09T15:54:42.609545+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:07:34.463689+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609574+0000","last_change":"2026-03-09T15:54:15.367270+0000","last_active":"2026-03-09T15:54:42.609574+0000","last_peered":"2026-03-09T15:54:42.609574+0000","last_clean":"2026-03-09T15:54:42.609574+0000","last_became_active":"2026-03-09T15:54:15.366936+0000","last_became_peered":"2026-03-09T15:54:15.366936+0000","last_unstale":"2026-03-09T15:54:42.609574+0000","last_undegraded":"2026-03-09T15:54:42.609574+0000","last_fullsized":"2026-03-09T15:54:42.609574+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:00:10.712160+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606937+0000","last_change":"2026-03-09T15:54:17.375349+0000","last_active":"2026-03-09T15:54:42.606937+0000","last_peered":"2026-03-09T15:54:42.606937+0000","last_clean":"2026-03-09T15:54:42.606937+0000","last_became_active":"2026-03-09T15:54:17.370883+0000","last_became_peered":"2026-03-09T15:54:17.370883+0000","last_unstale":"2026-03-09T15:54:42.606937+0000","last_undegraded":"2026-03-09T15:54:42.606937+0000","last_fullsized":"2026-03-09T15:54:42.606937+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:51:05.623976+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"61'6","reported_seq":36,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072381+0000","last_change":"2026-03-09T15:54:13.360796+0000","last_active":"2026-03-09T15:54:43.072381+0000","last_peered":"2026-03-09T15:54:43.072381+0000","last_clean":"2026-03-09T15:54:43.072381+0000","last_became_active":"2026-03-09T15:54:13.354911+0000","last_became_peered":"2026-03-09T15:54:13.354911+0000","last_unstale":"2026-03-09T15:54:43.072381+0000","last_undegraded":"2026-03-09T15:54:43.072381+0000","last_fullsized":"2026-03-09T15:54:43.072381+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:41:05.489552+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612044+0000","last_change":"2026-03-09T15:54:11.352011+0000","last_active":"2026-03-09T15:54:42.612044+0000","last_peered":"2026-03-09T15:54:42.612044+0000","last_clean":"2026-03-09T15:54:42.612044+0000","last_became_active":"2026-03-09T15:54:11.351402+0000","last_became_peered":"2026-03-09T15:54:11.351402+0000","last_unstale":"2026-03-09T15:54:42.612044+0000","last_undegraded":"2026-03-09T15:54:42.612044+0000","last_fullsized":"2026-03-09T15:54:42.612044+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:03:58.213655+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072526+0000","last_change":"2026-03-09T15:54:15.370067+0000","last_active":"2026-03-09T15:54:43.072526+0000","last_peered":"2026-03-09T15:54:43.072526+0000","last_clean":"2026-03-09T15:54:43.072526+0000","last_became_active":"2026-03-09T15:54:15.369839+0000","last_became_peered":"2026-03-09T15:54:15.369839+0000","last_unstale":"2026-03-09T15:54:43.072526+0000","last_undegraded":"2026-03-09T15:54:43.072526+0000","last_fullsized":"2026-03-09T15:54:43.072526+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:47:24.327604+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071550+0000","last_change":"2026-03-09T15:54:17.391254+0000","last_active":"2026-03-09T15:54:43.071550+0000","last_peered":"2026-03-09T15:54:43.071550+0000","last_clean":"2026-03-09T15:54:43.071550+0000","last_became_active":"2026-03-09T15:54:17.391053+0000","last_became_peered":"2026-03-09T15:54:17.391053+0000","last_unstale":"2026-03-09T15:54:43.071550+0000","last_undegraded":"2026-03-09T15:54:43.071550+0000","last_fullsized":"2026-03-09T15:54:43.071550+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:33:26.486858+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611668+0000","last_change":"2026-03-09T15:54:13.351604+0000","last_active":"2026-03-09T15:54:42.611668+0000","last_peered":"2026-03-09T15:54:42.611668+0000","last_clean":"2026-03-09T15:54:42.611668+0000","last_became_active":"2026-03-09T15:54:13.351393+0000","last_became_peered":"2026-03-09T15:54:13.351393+0000","last_unstale":"2026-03-09T15:54:42.611668+0000","last_undegraded":"2026-03-09T15:54:42.611668+0000","last_fullsized":"2026-03-09T15:54:42.611668+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:05:09.857590+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072827+0000","last_change":"2026-03-09T15:54:11.355305+0000","last_active":"2026-03-09T15:54:43.072827+0000","last_peered":"2026-03-09T15:54:43.072827+0000","last_clean":"2026-03-09T15:54:43.072827+0000","last_became_active":"2026-03-09T15:54:11.355148+0000","last_became_peered":"2026-03-09T15:54:11.355148+0000","last_unstale":"2026-03-09T15:54:43.072827+0000","last_undegraded":"2026-03-09T15:54:43.072827+0000","last_fullsized":"2026-03-09T15:54:43.072827+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:57:20.859145+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071337+0000","last_change":"2026-03-09T15:54:15.378053+0000","last_active":"2026-03-09T15:54:43.071337+0000","last_peered":"2026-03-09T15:54:43.071337+0000","last_clean":"2026-03-09T15:54:43.071337+0000","last_became_active":"2026-03-09T15:54:15.377687+0000","last_became_peered":"2026-03-09T15:54:15.377687+0000","last_unstale":"2026-03-09T15:54:43.071337+0000","last_undegraded":"2026-03-09T15:54:43.071337+0000","last_fullsized":"2026-03-09T15:54:43.071337+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:53:18.443143+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606459+0000","last_change":"2026-03-09T15:54:17.388336+0000","last_active":"2026-03-09T15:54:42.606459+0000","last_peered":"2026-03-09T15:54:42.606459+0000","last_clean":"2026-03-09T15:54:42.606459+0000","last_became_active":"2026-03-09T15:54:17.383020+0000","last_became_peered":"2026-03-09T15:54:17.383020+0000","last_unstale":"2026-03-09T15:54:42.606459+0000","last_undegraded":"2026-03-09T15:54:42.606459+0000","last_fullsized":"2026-03-09T15:54:42.606459+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:50:27.652230+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"61'1","reported_seq":21,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071045+0000","last_change":"2026-03-09T15:54:17.385058+0000","last_active":"2026-03-09T15:54:43.071045+0000","last_peered":"2026-03-09T15:54:43.071045+0000","last_clean":"2026-03-09T15:54:43.071045+0000","last_became_active":"2026-03-09T15:54:17.384902+0000","last_became_peered":"2026-03-09T15:54:17.384902+0000","last_unstale":"2026-03-09T15:54:43.071045+0000","last_undegraded":"2026-03-09T15:54:43.071045+0000","last_fullsized":"2026-03-09T15:54:43.071045+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:11:59.164498+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"62'15","reported_seq":52,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610123+0000","last_change":"2026-03-09T15:54:13.369280+0000","last_active":"2026-03-09T15:54:42.610123+0000","last_peered":"2026-03-09T15:54:42.610123+0000","last_clean":"2026-03-09T15:54:42.610123+0000","last_became_active":"2026-03-09T15:54:13.364512+0000","last_became_peered":"2026-03-09T15:54:13.364512+0000","last_unstale":"2026-03-09T15:54:42.610123+0000","last_undegraded":"2026-03-09T15:54:42.610123+0000","last_fullsized":"2026-03-09T15:54:42.610123+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:27:21.917673+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.18","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612345+0000","last_change":"2026-03-09T15:54:11.363351+0000","last_active":"2026-03-09T15:54:42.612345+0000","last_peered":"2026-03-09T15:54:42.612345+0000","last_clean":"2026-03-09T15:54:42.612345+0000","last_became_active":"2026-03-09T15:54:11.363125+0000","last_became_peered":"2026-03-09T15:54:11.363125+0000","last_unstale":"2026-03-09T15:54:42.612345+0000","last_undegraded":"2026-03-09T15:54:42.612345+0000","last_fullsized":"2026-03-09T15:54:42.612345+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:38:01.325428+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"62'11","reported_seq":51,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:16.412340+0000","last_change":"2026-03-09T15:54:15.370076+0000","last_active":"2026-03-09T15:55:16.412340+0000","last_peered":"2026-03-09T15:55:16.412340+0000","last_clean":"2026-03-09T15:55:16.412340+0000","last_became_active":"2026-03-09T15:54:15.369913+0000","last_became_peered":"2026-03-09T15:54:15.369913+0000","last_unstale":"2026-03-09T15:55:16.412340+0000","last_undegraded":"2026-03-09T15:55:16.412340+0000","last_fullsized":"2026-03-09T15:55:16.412340+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:56:21.155553+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610309+0000","last_change":"2026-03-09T15:54:17.386468+0000","last_active":"2026-03-09T15:54:42.610309+0000","last_peered":"2026-03-09T15:54:42.610309+0000","last_clean":"2026-03-09T15:54:42.610309+0000","last_became_active":"2026-03-09T15:54:17.386370+0000","last_became_peered":"2026-03-09T15:54:17.386370+0000","last_unstale":"2026-03-09T15:54:42.610309+0000","last_undegraded":"2026-03-09T15:54:42.610309+0000","last_fullsized":"2026-03-09T15:54:42.610309+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:20:00.063265+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607130+0000","last_change":"2026-03-09T15:54:13.357798+0000","last_active":"2026-03-09T15:54:42.607130+0000","last_peered":"2026-03-09T15:54:42.607130+0000","last_clean":"2026-03-09T15:54:42.607130+0000","last_became_active":"2026-03-09T15:54:13.357043+0000","last_became_peered":"2026-03-09T15:54:13.357043+0000","last_unstale":"2026-03-09T15:54:42.607130+0000","last_undegraded":"2026-03-09T15:54:42.607130+0000","last_fullsized":"2026-03-09T15:54:42.607130+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:28:56.952662+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"54'1","reported_seq":32,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607538+0000","last_change":"2026-03-09T15:54:11.347981+0000","last_active":"2026-03-09T15:54:42.607538+0000","last_peered":"2026-03-09T15:54:42.607538+0000","last_clean":"2026-03-09T15:54:42.607538+0000","last_became_active":"2026-03-09T15:54:11.347853+0000","last_became_peered":"2026-03-09T15:54:11.347853+0000","last_unstale":"2026-03-09T15:54:42.607538+0000","last_undegraded":"2026-03-09T15:54:42.607538+0000","last_fullsized":"2026-03-09T15:54:42.607538+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:47:57.685605+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071749+0000","last_change":"2026-03-09T15:54:15.383247+0000","last_active":"2026-03-09T15:54:43.071749+0000","last_peered":"2026-03-09T15:54:43.071749+0000","last_clean":"2026-03-09T15:54:43.071749+0000","last_became_active":"2026-03-09T15:54:15.382500+0000","last_became_peered":"2026-03-09T15:54:15.382500+0000","last_unstale":"2026-03-09T15:54:43.071749+0000","last_undegraded":"2026-03-09T15:54:43.071749+0000","last_fullsized":"2026-03-09T15:54:43.071749+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:03:41.748520+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725252+0000","last_change":"2026-03-09T15:54:17.390336+0000","last_active":"2026-03-09T15:54:42.725252+0000","last_peered":"2026-03-09T15:54:42.725252+0000","last_clean":"2026-03-09T15:54:42.725252+0000","last_became_active":"2026-03-09T15:54:17.385695+0000","last_became_peered":"2026-03-09T15:54:17.385695+0000","last_unstale":"2026-03-09T15:54:42.725252+0000","last_undegraded":"2026-03-09T15:54:42.725252+0000","last_fullsized":"2026-03-09T15:54:42.725252+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:40:29.688303+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072731+0000","last_change":"2026-03-09T15:54:11.360391+0000","last_active":"2026-03-09T15:54:43.072731+0000","last_peered":"2026-03-09T15:54:43.072731+0000","last_clean":"2026-03-09T15:54:43.072731+0000","last_became_active":"2026-03-09T15:54:11.360236+0000","last_became_peered":"2026-03-09T15:54:11.360236+0000","last_unstale":"2026-03-09T15:54:43.072731+0000","last_undegraded":"2026-03-09T15:54:43.072731+0000","last_fullsized":"2026-03-09T15:54:43.072731+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:25:47.079675+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"61'5","reported_seq":37,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072532+0000","last_change":"2026-03-09T15:54:13.359633+0000","last_active":"2026-03-09T15:54:43.072532+0000","last_peered":"2026-03-09T15:54:43.072532+0000","last_clean":"2026-03-09T15:54:43.072532+0000","last_became_active":"2026-03-09T15:54:13.354402+0000","last_became_peered":"2026-03-09T15:54:13.354402+0000","last_unstale":"2026-03-09T15:54:43.072532+0000","last_undegraded":"2026-03-09T15:54:43.072532+0000","last_fullsized":"2026-03-09T15:54:43.072532+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:35:33.389940+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610509+0000","last_change":"2026-03-09T15:54:15.366700+0000","last_active":"2026-03-09T15:54:42.610509+0000","last_peered":"2026-03-09T15:54:42.610509+0000","last_clean":"2026-03-09T15:54:42.610509+0000","last_became_active":"2026-03-09T15:54:15.366483+0000","last_became_peered":"2026-03-09T15:54:15.366483+0000","last_unstale":"2026-03-09T15:54:42.610509+0000","last_undegraded":"2026-03-09T15:54:42.610509+0000","last_fullsized":"2026-03-09T15:54:42.610509+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:21:02.939803+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.608822+0000","last_change":"2026-03-09T15:54:17.366825+0000","last_active":"2026-03-09T15:54:42.608822+0000","last_peered":"2026-03-09T15:54:42.608822+0000","last_clean":"2026-03-09T15:54:42.608822+0000","last_became_active":"2026-03-09T15:54:17.366716+0000","last_became_peered":"2026-03-09T15:54:17.366716+0000","last_unstale":"2026-03-09T15:54:42.608822+0000","last_undegraded":"2026-03-09T15:54:42.608822+0000","last_fullsized":"2026-03-09T15:54:42.608822+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:54:58.728633+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607745+0000","last_change":"2026-03-09T15:54:11.359534+0000","last_active":"2026-03-09T15:54:42.607745+0000","last_peered":"2026-03-09T15:54:42.607745+0000","last_clean":"2026-03-09T15:54:42.607745+0000","last_became_active":"2026-03-09T15:54:11.359221+0000","last_became_peered":"2026-03-09T15:54:11.359221+0000","last_unstale":"2026-03-09T15:54:42.607745+0000","last_undegraded":"2026-03-09T15:54:42.607745+0000","last_fullsized":"2026-03-09T15:54:42.607745+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:36:42.081838+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726418+0000","last_change":"2026-03-09T15:54:13.353354+0000","last_active":"2026-03-09T15:54:42.726418+0000","last_peered":"2026-03-09T15:54:42.726418+0000","last_clean":"2026-03-09T15:54:42.726418+0000","last_became_active":"2026-03-09T15:54:13.353066+0000","last_became_peered":"2026-03-09T15:54:13.353066+0000","last_unstale":"2026-03-09T15:54:42.726418+0000","last_undegraded":"2026-03-09T15:54:42.726418+0000","last_fullsized":"2026-03-09T15:54:42.726418+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:19:18.554788+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726381+0000","last_change":"2026-03-09T15:54:15.374457+0000","last_active":"2026-03-09T15:54:42.726381+0000","last_peered":"2026-03-09T15:54:42.726381+0000","last_clean":"2026-03-09T15:54:42.726381+0000","last_became_active":"2026-03-09T15:54:15.373468+0000","last_became_peered":"2026-03-09T15:54:15.373468+0000","last_unstale":"2026-03-09T15:54:42.726381+0000","last_undegraded":"2026-03-09T15:54:42.726381+0000","last_fullsized":"2026-03-09T15:54:42.726381+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:19:43.648567+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},
{"poolid":4,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":63,"num_read_kb":58,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"int
ernal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":50,"seq":214748364820,"num_pgs":60,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":28000,"kb_used_data":1164,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939424,"statfs":{"total":21470642176,"available":21441970176,"internally_reserved":0,"allocated":1191936,"data_stored":752714,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":43,"seq":184683593755,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27968,"kb_used_data":1132,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939456,"statfs":{"total":21470642176,"available":21442002944,"internally_reserved":0,"allocated":1159168,"data_stored":750379,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":37,"seq":158913789986,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27532,"kb_used_data":692,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939892,"statfs":{"total":21470642176,"available":21442449408,"internally_reserved":0,"allocated":708608,"data_stored":291950,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up
_from":31,"seq":133143986219,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27564,"kb_used_data":724,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939860,"statfs":{"total":21470642176,"available":21442416640,"internally_reserved":0,"allocated":741376,"data_stored":293037,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149744,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27524,"kb_used_data":684,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939900,"statfs":{"total":21470642176,"available":21442457600,"internally_reserved":0,"allocated":700416,"data_stored":291922,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411382,"num_pgs":38,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27524,"kb_used_data":684,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939900,"statfs":{"total":21470642176,"available":21442457600,"internally_reserved":0,"allocated":700416,"data_stored":291592,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574909,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27540,"kb_used_data":700,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939884,"statfs":{"total":21470642176,"available":21442441216,"internally_reserved":0,"allocated":716800,"data_stored":291443,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738436,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27996,"kb_used_data":1160,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939428,"statfs":{"total":21470642176,"available":21441974272,"internally_reserved":0,"allocated":1187840,"data_stored":752476,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1587,"internal_metadata":27457997},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_s
tat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":20480,"data_stored":1567,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":482,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1085,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"intern
ally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"in
ternally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T15:55:29.656 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph pg dump --format=json 2026-03-09T15:55:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:30 vm09 bash[22983]: audit 2026-03-09T15:55:29.572038+0000 mgr.y (mgr.14520) 63 : audit [DBG] from='client.14640 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:30 vm09 bash[22983]: audit 2026-03-09T15:55:29.572038+0000 mgr.y (mgr.14520) 63 : audit [DBG] from='client.14640 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:30.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:30 vm01 bash[28152]: audit 2026-03-09T15:55:29.572038+0000 mgr.y (mgr.14520) 63 : audit [DBG] from='client.14640 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:30.429 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:30 vm01 bash[28152]: audit 2026-03-09T15:55:29.572038+0000 mgr.y (mgr.14520) 63 : audit [DBG] from='client.14640 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:30.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:30 vm01 bash[20728]: audit 2026-03-09T15:55:29.572038+0000 mgr.y (mgr.14520) 63 : audit [DBG] from='client.14640 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:30.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:30 vm01 bash[20728]: audit 2026-03-09T15:55:29.572038+0000 mgr.y (mgr.14520) 63 : audit [DBG] from='client.14640 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:31 vm09 bash[22983]: cluster 2026-03-09T15:55:30.647373+0000 mgr.y (mgr.14520) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:31 vm09 bash[22983]: cluster 2026-03-09T15:55:30.647373+0000 mgr.y (mgr.14520) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:31.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:31 vm01 bash[28152]: cluster 2026-03-09T15:55:30.647373+0000 mgr.y (mgr.14520) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:31.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:31 vm01 bash[28152]: cluster 2026-03-09T15:55:30.647373+0000 mgr.y (mgr.14520) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:31 vm01 bash[20728]: cluster 2026-03-09T15:55:30.647373+0000 mgr.y (mgr.14520) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:31 vm01 bash[20728]: cluster 2026-03-09T15:55:30.647373+0000 mgr.y (mgr.14520) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:33.178 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:55:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:55:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:33 vm09 bash[22983]: cluster 2026-03-09T15:55:32.647662+0000 mgr.y (mgr.14520) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:33 vm09 bash[22983]: cluster 2026-03-09T15:55:32.647662+0000 mgr.y (mgr.14520) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:34.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:33 vm01 bash[28152]: cluster 2026-03-09T15:55:32.647662+0000 mgr.y (mgr.14520) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 
455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:34.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:33 vm01 bash[28152]: cluster 2026-03-09T15:55:32.647662+0000 mgr.y (mgr.14520) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:34.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:33 vm01 bash[20728]: cluster 2026-03-09T15:55:32.647662+0000 mgr.y (mgr.14520) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:34.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:33 vm01 bash[20728]: cluster 2026-03-09T15:55:32.647662+0000 mgr.y (mgr.14520) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:34.349 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:34.525 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.524+0000 7fea6402e640 1 -- 192.168.123.101:0/4052372456 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fea5c075950 msgr2=0x7fea5c075d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:34.525 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.524+0000 7fea6402e640 1 --2- 192.168.123.101:0/4052372456 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fea5c075950 0x7fea5c075d90 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7fea4c009a30 tx=0x7fea4c02f2b0 comp rx=0 tx=0).stop 2026-03-09T15:55:34.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.524+0000 7fea6402e640 1 -- 192.168.123.101:0/4052372456 shutdown_connections 2026-03-09T15:55:34.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.524+0000 7fea6402e640 1 --2- 192.168.123.101:0/4052372456 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea5c113830 0x7fea5c115c60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:34.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.524+0000 7fea6402e640 1 --2- 192.168.123.101:0/4052372456 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fea5c075950 0x7fea5c075d90 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:34.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.524+0000 7fea6402e640 1 --2- 192.168.123.101:0/4052372456 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fea5c106930 0x7fea5c075410 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:34.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.524+0000 7fea6402e640 1 -- 192.168.123.101:0/4052372456 >> 192.168.123.101:0/4052372456 conn(0x7fea5c0fe7a0 msgr2=0x7fea5c100bc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:34.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.524+0000 7fea6402e640 1 -- 192.168.123.101:0/4052372456 shutdown_connections 2026-03-09T15:55:34.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.524+0000 7fea6402e640 1 -- 192.168.123.101:0/4052372456 wait complete. 
2026-03-09T15:55:34.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.524+0000 7fea6402e640 1 Processor -- start 2026-03-09T15:55:34.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.524+0000 7fea6402e640 1 -- start start 2026-03-09T15:55:34.526 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea6402e640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fea5c075950 0x7fea5c1a0e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:34.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea6402e640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fea5c106930 0x7fea5c1a1380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:34.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea61da3640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fea5c075950 0x7fea5c1a0e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:34.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea615a2640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fea5c106930 0x7fea5c1a1380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:34.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea615a2640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fea5c106930 0x7fea5c1a1380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:55122/0 (socket says 192.168.123.101:55122) 2026-03-09T15:55:34.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea61da3640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fea5c075950 0x7fea5c1a0e40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:37042/0 (socket says 192.168.123.101:37042) 2026-03-09T15:55:34.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea6402e640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea5c113830 0x7fea5c1a5710 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:34.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea6402e640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fea5c1187f0 con 0x7fea5c106930 2026-03-09T15:55:34.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea6402e640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7fea5c118670 con 0x7fea5c113830 2026-03-09T15:55:34.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea6402e640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7fea5c118970 con 0x7fea5c075950 2026-03-09T15:55:34.527 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea615a2640 1 -- 192.168.123.101:0/2913660098 learned_addr learned my addr 192.168.123.101:0/2913660098 (peer_addr_for_me 
v2:192.168.123.101:0/0) 2026-03-09T15:55:34.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea625a4640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea5c113830 0x7fea5c1a5710 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:34.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea615a2640 1 -- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fea5c075950 msgr2=0x7fea5c1a0e40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:34.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea615a2640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fea5c075950 0x7fea5c1a0e40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:34.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea615a2640 1 -- 192.168.123.101:0/2913660098 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea5c113830 msgr2=0x7fea5c1a5710 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:34.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea615a2640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea5c113830 0x7fea5c1a5710 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:34.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea615a2640 1 -- 192.168.123.101:0/2913660098 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fea5c1a5e90 con 0x7fea5c106930 2026-03-09T15:55:34.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea61da3640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fea5c075950 0x7fea5c1a0e40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello state changed while learned_addr, mark_down or replacing must be happened just now 2026-03-09T15:55:34.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea615a2640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fea5c106930 0x7fea5c1a1380 secure :-1 s=READY pgs=170 cs=0 l=1 rev1=1 crypto rx=0x7fea4c02f7c0 tx=0x7fea4c02fd40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:34.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea52ffd640 1 -- 192.168.123.101:0/2913660098 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fea4c004240 con 0x7fea5c106930 2026-03-09T15:55:34.528 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea6402e640 1 -- 192.168.123.101:0/2913660098 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fea5c1a6120 con 0x7fea5c106930 2026-03-09T15:55:34.530 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea52ffd640 1 -- 192.168.123.101:0/2913660098 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fea4c0043e0 con 0x7fea5c106930 
2026-03-09T15:55:34.530 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea52ffd640 1 -- 192.168.123.101:0/2913660098 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fea4c042680 con 0x7fea5c106930 2026-03-09T15:55:34.530 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea6402e640 1 -- 192.168.123.101:0/2913660098 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fea5c1ad9c0 con 0x7fea5c106930 2026-03-09T15:55:34.531 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.528+0000 7fea50ff9640 1 -- 192.168.123.101:0/2913660098 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fea28005180 con 0x7fea5c106930 2026-03-09T15:55:34.535 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.532+0000 7fea52ffd640 1 -- 192.168.123.101:0/2913660098 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fea4c04a020 con 0x7fea5c106930 2026-03-09T15:55:34.535 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.536+0000 7fea52ffd640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fea34077700 0x7fea34079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:34.535 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.536+0000 7fea52ffd640 1 -- 192.168.123.101:0/2913660098 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7fea4c0be810 con 0x7fea5c106930 2026-03-09T15:55:34.535 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.536+0000 7fea52ffd640 1 -- 192.168.123.101:0/2913660098 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fea4c0eea90 con 0x7fea5c106930 2026-03-09T15:55:34.535 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.536+0000 7fea61da3640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fea34077700 0x7fea34079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:34.536 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.536+0000 7fea61da3640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fea34077700 0x7fea34079bc0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7fea44005e00 tx=0x7fea4400a600 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:34.641 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.640+0000 7fea50ff9640 1 -- 192.168.123.101:0/2913660098 --> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7fea28002bf0 con 0x7fea34077700 2026-03-09T15:55:34.647 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.648+0000 7fea52ffd640 1 -- 192.168.123.101:0/2913660098 <== mgr.14520 v2:192.168.123.101:6800/123914266 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+346481 (secure 0 0 0) 0x7fea28002bf0 con 0x7fea34077700 2026-03-09T15:55:34.647 INFO:teuthology.orchestra.run.vm01.stdout: 
2026-03-09T15:55:34.649 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-09T15:55:34.651 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 -- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fea34077700 msgr2=0x7fea34079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:34.651 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fea34077700 0x7fea34079bc0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7fea44005e00 tx=0x7fea4400a600 comp rx=0 tx=0).stop 2026-03-09T15:55:34.651 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 -- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fea5c106930 msgr2=0x7fea5c1a1380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:34.651 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fea5c106930 0x7fea5c1a1380 secure :-1 s=READY pgs=170 cs=0 l=1 rev1=1 crypto rx=0x7fea4c02f7c0 tx=0x7fea4c02fd40 comp rx=0 tx=0).stop 2026-03-09T15:55:34.652 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 -- 192.168.123.101:0/2913660098 shutdown_connections 2026-03-09T15:55:34.652 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fea34077700 0x7fea34079bc0 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:34.652 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea5c113830 0x7fea5c1a5710 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:34.652 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fea5c106930 0x7fea5c1a1380 unknown :-1 s=CLOSED pgs=170 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:34.652 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 --2- 192.168.123.101:0/2913660098 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fea5c075950 0x7fea5c1a0e40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:34.652 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 -- 192.168.123.101:0/2913660098 >> 192.168.123.101:0/2913660098 conn(0x7fea5c0fe7a0 msgr2=0x7fea5c0ff0a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:34.652 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 -- 192.168.123.101:0/2913660098 shutdown_connections 2026-03-09T15:55:34.652 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:34.652+0000 7fea50ff9640 1 -- 192.168.123.101:0/2913660098 wait complete. 
2026-03-09T15:55:34.724 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":28,"stamp":"2026-03-09T15:55:32.647524+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":912,"num_read_kb":771,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221648,"kb_used_data":6940,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167517744,"statfs":{"total":171765137408,"available":171538169856,"internally_reserved":0,"allocated":7106560,"data_stored":3715513,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12709,"internal_metadata":219663963},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.003031"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.608185+0000","last_change":"2026-03
-09T15:54:17.381614+0000","last_active":"2026-03-09T15:54:42.608185+0000","last_peered":"2026-03-09T15:54:42.608185+0000","last_clean":"2026-03-09T15:54:42.608185+0000","last_became_active":"2026-03-09T15:54:17.381426+0000","last_became_peered":"2026-03-09T15:54:17.381426+0000","last_unstale":"2026-03-09T15:54:42.608185+0000","last_undegraded":"2026-03-09T15:54:42.608185+0000","last_fullsized":"2026-03-09T15:54:42.608185+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:21:13.560580+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071608+0000","last_change":"2026-03-09T15:54:11.362363+0000","last_active":"2026-03-09T15:54:43.071608+0000","last_peered":"2026-03-09T15:54:43.071608+0000","last_clean":"2026-03-09T15:54:43.071608+0000","last_became_active":"2026-03-09T15:54:11.361461+0000","last_became_peered":"2026-03-09T15:54:11.361461+0000","last_unstale":"2026-03-09T15:54:43.071608+0000","last_undegraded":"2026-03-09T15:54:43.071608+0000","last_fullsized":"2026-03-09T15:54:43.071608+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub 
scheduled @ 2026-03-10T17:14:16.946358+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"62'10","reported_seq":42,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.608226+0000","last_change":"2026-03-09T15:54:13.355864+0000","last_active":"2026-03-09T15:54:42.608226+0000","last_peered":"2026-03-09T15:54:42.608226+0000","last_clean":"2026-03-09T15:54:42.608226+0000","last_became_active":"2026-03-09T15:54:13.355604+0000","last_became_peered":"2026-03-09T15:54:13.355604+0000","last_unstale":"2026-03-09T15:54:42.608226+0000","last_undegraded":"2026-03-09T15:54:42.608226+0000","last_fullsized":"2026-03-09T15:54:42.608226+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:57:26.744585+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725528+0000","last_change":"2026-03-09T15:54:15.366318+0000","last_active":"2026-03-09T15:54:42.725528+0000","last_peered":"2026-03-09T15:54:42.725528+0000","last_clean":"2026-03-09T15:54:42.725528+0000","last_became_active":"2026-03-09T15:54:15.366170+0000","last_became_peered":"2026-03-09T15:54:15.366170+0000","last_unstale":"2026-03-09T15:54:42.725528+0000","last_undegraded":"2026-03-09T15:54:42.725528+0000","last_fullsized":"2026-03-09T15:54:42.725528+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:58:08.472942+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1e","version":"54'1","reported_seq":39,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607269+0000","last_change":"2026-03-09T15:54:11.344598+0000","last_active":"2026-03-09T15:54:42.607269+0000","last_peered":"2026-03-09T15:54:42.607269+0000","last_clean":"2026-03-09T15:54:42.607269+0000","last_became_active":"2026-03-09T15:54:11.344446+0000","last_became_peered":"2026-03-09T15:54:11.344446+0000","last_unstale":"2026-03-09T15:54:42.607269+0000","last_undegraded":"2026-03-09T15:54:42.607269+0000","last_fullsized":"2026-03-09T15:54:42.607269+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:56:45.531791+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"62'11","reported_seq":46,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071959+0000","last_change":"2026-03-09T15:54:13.360634+0000","last_active":"2026-03-09T15:54:43.071959+0000","last_peered":"2026-03-09T15:54:43.071959+0000","last_clean":"2026-03-09T15:54:43.071959+0000","last_became_active":"2026-03-09T15:54:13.354798+0000","last_became_peered":"2026-03-09T15:54:13.354798+0000","last_unstale":"2026-03-09T15:54:43.071959+0000","last_undegraded":"2026-03-09T15:54:43.071959+0000","last_fullsized":"2026-03-09T15:54:43.071959+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:34:21.064892+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610484+0000","last_change":"2026-03-09T15:54:15.369094+0000","last_active":"2026-03-09T15:54:42.610484+0000","last_peered":"2026-03-09T15:54:42.610484+0000","last_clean":"2026-03-09T15:54:42.610484+0000","last_became_active":"2026-03-09T15:54:15.368996+0000","last_became_peered":"2026-03-09T15:54:15.368996+0000","last_unstale":"2026-03-09T15:54:42.610484+0000","last_undegraded":"2026-03-09T15:54:42.610484+0000","last_fullsized":"2026-03-09T15:54:42.610484+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:53:29.393338+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.724858+0000","last_change":"2026-03-09T15:54:17.388558+0000","last_active":"2026-03-09T15:54:42.724858+0000","last_peered":"2026-03-09T15:54:42.724858+0000","last_clean":"2026-03-09T15:54:42.724858+0000","last_became_active":"2026-03-09T15:54:17.385514+0000","last_became_peered":"2026-03-09T15:54:17.385514+0000","last_unstale":"2026-03-09T15:54:42.724858+0000","last_undegraded":"2026-03-09T15:54:42.724858+0000","last_fullsized":"2026-03-09T15:54:42.724858+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:57:18.231700+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070785+0000","last_change":"2026-03-09T15:54:11.366913+0000","last_active":"2026-03-09T15:54:43.070785+0000","last_peered":"2026-03-09T15:54:43.070785+0000","last_clean":"2026-03-09T15:54:43.070785+0000","last_became_active":"2026-03-09T15:54:11.366719+0000","last_became_peered":"2026-03-09T15:54:11.366719+0000","last_unstale":"2026-03-09T15:54:43.070785+0000","last_undegraded":"2026-03-09T15:54:43.070785+0000","last_fullsized":"2026-03-09T15:54:43.070785+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:48:15.023241+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"61'15","reported_seq":52,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612587+0000","last_change":"2026-03-09T15:54:13.351676+0000","last_active":"2026-03-09T15:54:42.612587+0000","last_peered":"2026-03-09T15:54:42.612587+0000","last_clean":"2026-03-09T15:54:42.612587+0000","last_became_active":"2026-03-09T15:54:13.351510+0000","last_became_peered":"2026-03-09T15:54:13.351510+0000","last_unstale":"2026-03-09T15:54:42.612587+0000","last_undegraded":"2026-03-09T15:54:42.612587+0000","last_fullsized":"2026-03-09T15:54:42.612587+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:43:52.051889+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070873+0000","last_change":"2026-03-09T15:54:15.379526+0000","last_active":"2026-03-09T15:54:43.070873+0000","last_peered":"2026-03-09T15:54:43.070873+0000","last_clean":"2026-03-09T15:54:43.070873+0000","last_became_active":"2026-03-09T15:54:15.377552+0000","last_became_peered":"2026-03-09T15:54:15.377552+0000","last_unstale":"2026-03-09T15:54:43.070873+0000","last_undegraded":"2026-03-09T15:54:43.070873+0000","last_fullsized":"2026-03-09T15:54:43.070873+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:52:35.402533+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612480+0000","last_change":"2026-03-09T15:54:17.376486+0000","last_active":"2026-03-09T15:54:42.612480+0000","last_peered":"2026-03-09T15:54:42.612480+0000","last_clean":"2026-03-09T15:54:42.612480+0000","last_became_active":"2026-03-09T15:54:17.376386+0000","last_became_peered":"2026-03-09T15:54:17.376386+0000","last_unstale":"2026-03-09T15:54:42.612480+0000","last_undegraded":"2026-03-09T15:54:42.612480+0000","last_fullsized":"2026-03-09T15:54:42.612480+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:29:24.059321+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070685+0000","last_change":"2026-03-09T15:54:11.360504+0000","last_active":"2026-03-09T15:54:43.070685+0000","last_peered":"2026-03-09T15:54:43.070685+0000","last_clean":"2026-03-09T15:54:43.070685+0000","last_became_active":"2026-03-09T15:54:11.360411+0000","last_became_peered":"2026-03-09T15:54:11.360411+0000","last_unstale":"2026-03-09T15:54:43.070685+0000","last_undegraded":"2026-03-09T15:54:43.070685+0000","last_fullsized":"2026-03-09T15:54:43.070685+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:47:59.369509+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","version":"62'12","reported_seq":50,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611421+0000","last_change":"2026-03-09T15:54:13.354473+0000","last_active":"2026-03-09T15:54:42.611421+0000","last_peered":"2026-03-09T15:54:42.611421+0000","last_clean":"2026-03-09T15:54:42.611421+0000","last_became_active":"2026-03-09T15:54:13.354400+0000","last_became_peered":"2026-03-09T15:54:13.354400+0000","last_unstale":"2026-03-09T15:54:42.611421+0000","last_undegraded":"2026-03-09T15:54:42.611421+0000","last_fullsized":"2026-03-09T15:54:42.611421+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:37:49.942670+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611349+0000","last_change":"2026-03-09T15:54:15.383541+0000","last_active":"2026-03-09T15:54:42.611349+0000","last_peered":"2026-03-09T15:54:42.611349+0000","last_clean":"2026-03-09T15:54:42.611349+0000","last_became_active":"2026-03-09T15:54:15.383409+0000","last_became_peered":"2026-03-09T15:54:15.383409+0000","last_unstale":"2026-03-09T15:54:42.611349+0000","last_undegraded":"2026-03-09T15:54:42.611349+0000","last_fullsized":"2026-03-09T15:54:42.611349+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:22:42.222598+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070974+0000","last_change":"2026-03-09T15:54:17.376725+0000","last_active":"2026-03-09T15:54:43.070974+0000","last_peered":"2026-03-09T15:54:43.070974+0000","last_clean":"2026-03-09T15:54:43.070974+0000","last_became_active":"2026-03-09T15:54:17.376498+0000","last_became_peered":"2026-03-09T15:54:17.376498+0000","last_unstale":"2026-03-09T15:54:43.070974+0000","last_undegraded":"2026-03-09T15:54:43.070974+0000","last_fullsized":"2026-03-09T15:54:43.070974+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:12:13.199965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a","version":"62'19","reported_seq":58,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072871+0000","last_change":"2026-03-09T15:54:13.359646+0000","last_active":"2026-03-09T15:54:43.072871+0000","last_peered":"2026-03-09T15:54:43.072871+0000","last_clean":"2026-03-09T15:54:43.072871+0000","last_became_active":"2026-03-09T15:54:13.359412+0000","last_became_peered":"2026-03-09T15:54:13.359412+0000","last_unstale":"2026-03-09T15:54:43.072871+0000","last_undegraded":"2026-03-09T15:54:43.072871+0000","last_fullsized":"2026-03-09T15:54:43.072871+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:18:47.639834+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070525+0000","last_change":"2026-03-09T15:54:11.360161+0000","last_active":"2026-03-09T15:54:43.070525+0000","last_peered":"2026-03-09T15:54:43.070525+0000","last_clean":"2026-03-09T15:54:43.070525+0000","last_became_active":"2026-03-09T15:54:11.360032+0000","last_became_peered":"2026-03-09T15:54:11.360032+0000","last_unstale":"2026-03-09T15:54:43.070525+0000","last_undegraded":"2026-03-09T15:54:43.070525+0000","last_fullsized":"2026-03-09T15:54:43.070525+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:05:25.991121+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610432+0000","last_change":"2026-03-09T15:54:15.367415+0000","last_active":"2026-03-09T15:54:42.610432+0000","last_peered":"2026-03-09T15:54:42.610432+0000","last_clean":"2026-03-09T15:54:42.610432+0000","last_became_active":"2026-03-09T15:54:15.366623+0000","last_became_peered":"2026-03-09T15:54:15.366623+0000","last_unstale":"2026-03-09T15:54:42.610432+0000","last_undegraded":"2026-03-09T15:54:42.610432+0000","last_fullsized":"2026-03-09T15:54:42.610432+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:03:29.130787+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073979+0000","last_change":"2026-03-09T15:54:17.389455+0000","last_active":"2026-03-09T15:54:43.073979+0000","last_peered":"2026-03-09T15:54:43.073979+0000","last_clean":"2026-03-09T15:54:43.073979+0000","last_became_active":"2026-03-09T15:54:17.387802+0000","last_became_peered":"2026-03-09T15:54:17.387802+0000","last_unstale":"2026-03-09T15:54:43.073979+0000","last_undegraded":"2026-03-09T15:54:43.073979+0000","last_fullsized":"2026-03-09T15:54:43.073979+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:54:13.560516+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607324+0000","last_change":"2026-03-09T15:54:13.357286+0000","last_active":"2026-03-09T15:54:42.607324+0000","last_peered":"2026-03-09T15:54:42.607324+0000","last_clean":"2026-03-09T15:54:42.607324+0000","last_became_active":"2026-03-09T15:54:13.356206+0000","last_became_peered":"2026-03-09T15:54:13.356206+0000","last_unstale":"2026-03-09T15:54:42.607324+0000","last_undegraded":"2026-03-09T15:54:42.607324+0000","last_fullsized":"2026-03-09T15:54:42.607324+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:15:34.380881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609851+0000","last_change":"2026-03-09T15:54:11.364404+0000","last_active":"2026-03-09T15:54:42.609851+0000","last_peered":"2026-03-09T15:54:42.609851+0000","last_clean":"2026-03-09T15:54:42.609851+0000","last_became_active":"2026-03-09T15:54:11.363935+0000","last_became_peered":"2026-03-09T15:54:11.363935+0000","last_unstale":"2026-03-09T15:54:42.609851+0000","last_undegraded":"2026-03-09T15:54:42.609851+0000","last_fullsized":"2026-03-09T15:54:42.609851+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:33:04.895728+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d","version":"62'11","reported_seq":49,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:21.409628+0000","last_change":"2026-03-09T15:54:15.378921+0000","last_active":"2026-03-09T15:55:21.409628+0000","last_peered":"2026-03-09T15:55:21.409628+0000","last_clean":"2026-03-09T15:55:21.409628+0000","last_became_active":"2026-03-09T15:54:15.378285+0000","last_became_peered":"2026-03-09T15:54:15.378285+0000","last_unstale":"2026-03-09T15:55:21.409628+0000","last_undegraded":"2026-03-09T15:55:21.409628+0000","last_fullsized":"2026-03-09T15:55:21.409628+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:04:15.316324+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726576+0000","last_change":"2026-03-09T15:54:17.381082+0000","last_active":"2026-03-09T15:54:42.726576+0000","last_peered":"2026-03-09T15:54:42.726576+0000","last_clean":"2026-03-09T15:54:42.726576+0000","last_became_active":"2026-03-09T15:54:17.380847+0000","last_became_peered":"2026-03-09T15:54:17.380847+0000","last_unstale":"2026-03-09T15:54:42.726576+0000","last_undegraded":"2026-03-09T15:54:42.726576+0000","last_fullsized":"2026-03-09T15:54:42.726576+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:08:20.888387+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"61'15","reported_seq":52,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607386+0000","last_change":"2026-03-09T15:54:13.357355+0000","last_active":"2026-03-09T15:54:42.607386+0000","last_peered":"2026-03-09T15:54:42.607386+0000","last_clean":"2026-03-09T15:54:42.607386+0000","last_became_active":"2026-03-09T15:54:13.356303+0000","last_became_peered":"2026-03-09T15:54:13.356303+0000","last_unstale":"2026-03-09T15:54:42.607386+0000","last_undegraded":"2026-03-09T15:54:42.607386+0000","last_fullsized":"2026-03-09T15:54:42.607386+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:14:41.902134+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.9","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609898+0000","last_change":"2026-03-09T15:54:11.364112+0000","last_active":"2026-03-09T15:54:42.609898+0000","last_peered":"2026-03-09T15:54:42.609898+0000","last_clean":"2026-03-09T15:54:42.609898+0000","last_became_active":"2026-03-09T15:54:11.363963+0000","last_became_peered":"2026-03-09T15:54:11.363963+0000","last_unstale":"2026-03-09T15:54:42.609898+0000","last_undegraded":"2026-03-09T15:54:42.609898+0000","last_fullsized":"2026-03-09T15:54:42.609898+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:53:10.209701+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"62'11","reported_seq":49,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:21.408614+0000","last_change":"2026-03-09T15:54:15.391051+0000","last_active":"2026-03-09T15:55:21.408614+0000","last_peered":"2026-03-09T15:55:21.408614+0000","last_clean":"2026-03-09T15:55:21.408614+0000","last_became_active":"2026-03-09T15:54:15.390780+0000","last_became_peered":"2026-03-09T15:54:15.390780+0000","last_unstale":"2026-03-09T15:55:21.408614+0000","last_undegraded":"2026-03-09T15:55:21.408614+0000","last_fullsized":"2026-03-09T15:55:21.408614+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:48:14.737603+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611122+0000","last_change":"2026-03-09T15:54:17.383942+0000","last_active":"2026-03-09T15:54:42.611122+0000","last_peered":"2026-03-09T15:54:42.611122+0000","last_clean":"2026-03-09T15:54:42.611122+0000","last_became_active":"2026-03-09T15:54:17.377541+0000","last_became_peered":"2026-03-09T15:54:17.377541+0000","last_unstale":"2026-03-09T15:54:42.611122+0000","last_undegraded":"2026-03-09T15:54:42.611122+0000","last_fullsized":"2026-03-09T15:54:42.611122+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:11:47.679015+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","version":"61'12","reported_seq":50,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726140+0000","last_change":"2026-03-09T15:54:13.353669+0000","last_active":"2026-03-09T15:54:42.726140+0000","last_peered":"2026-03-09T15:54:42.726140+0000","last_clean":"2026-03-09T15:54:42.726140+0000","last_became_active":"2026-03-09T15:54:13.353571+0000","last_became_peered":"2026-03-09T15:54:13.353571+0000","last_unstale":"2026-03-09T15:54:42.726140+0000","last_undegraded":"2026-03-09T15:54:42.726140+0000","last_fullsized":"2026-03-09T15:54:42.726140+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:10:20.134042+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070447+0000","last_change":"2026-03-09T15:54:11.365353+0000","last_active":"2026-03-09T15:54:43.070447+0000","last_peered":"2026-03-09T15:54:43.070447+0000","last_clean":"2026-03-09T15:54:43.070447+0000","last_became_active":"2026-03-09T15:54:11.365023+0000","last_became_peered":"2026-03-09T15:54:11.365023+0000","last_unstale":"2026-03-09T15:54:43.070447+0000","last_undegraded":"2026-03-09T15:54:43.070447+0000","last_fullsized":"2026-03-09T15:54:43.070447+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:04:10.825623+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612917+0000","last_change":"2026-03-09T15:54:15.379001+0000","last_active":"2026-03-09T15:54:42.612917+0000","last_peered":"2026-03-09T15:54:42.612917+0000","last_clean":"2026-03-09T15:54:42.612917+0000","last_became_active":"2026-03-09T15:54:15.378796+0000","last_became_peered":"2026-03-09T15:54:15.378796+0000","last_unstale":"2026-03-09T15:54:42.612917+0000","last_undegraded":"2026-03-09T15:54:42.612917+0000","last_fullsized":"2026-03-09T15:54:42.612917+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:08:15.930760+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606449+0000","last_change":"2026-03-09T15:54:17.365054+0000","last_active":"2026-03-09T15:54:42.606449+0000","last_peered":"2026-03-09T15:54:42.606449+0000","last_clean":"2026-03-09T15:54:42.606449+0000","last_became_active":"2026-03-09T15:54:17.364925+0000","last_became_peered":"2026-03-09T15:54:17.364925+0000","last_unstale":"2026-03-09T15:54:42.606449+0000","last_undegraded":"2026-03-09T15:54:42.606449+0000","last_fullsized":"2026-03-09T15:54:42.606449+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:03:16.498916+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"61'12","reported_seq":45,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072345+0000","last_change":"2026-03-09T15:54:13.363234+0000","last_active":"2026-03-09T15:54:43.072345+0000","last_peered":"2026-03-09T15:54:43.072345+0000","last_clean":"2026-03-09T15:54:43.072345+0000","last_became_active":"2026-03-09T15:54:13.355094+0000","last_became_peered":"2026-03-09T15:54:13.355094+0000","last_unstale":"2026-03-09T15:54:43.072345+0000","last_undegraded":"2026-03-09T15:54:43.072345+0000","last_fullsized":"2026-03-09T15:54:43.072345+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:15:36.140709+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.075100+0000","last_change":"2026-03-09T15:54:11.360325+0000","last_active":"2026-03-09T15:54:43.075100+0000","last_peered":"2026-03-09T15:54:43.075100+0000","last_clean":"2026-03-09T15:54:43.075100+0000","last_became_active":"2026-03-09T15:54:11.360100+0000","last_became_peered":"2026-03-09T15:54:11.360100+0000","last_unstale":"2026-03-09T15:54:43.075100+0000","last_undegraded":"2026-03-09T15:54:43.075100+0000","last_fullsized":"2026-03-09T15:54:43.075100+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:11:35.282349+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1","version":"60'1","reported_seq":35,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726496+0000","last_change":"2026-03-09T15:54:19.477060+0000","last_active":"2026-03-09T15:54:42.726496+0000","last_peered":"2026-03-09T15:54:42.726496+0000","last_clean":"2026-03-09T15:54:42.726496+0000","last_became_active":"2026-03-09T15:54:13.352621+0000","last_became_peered":"2026-03-09T15:54:13.352621+0000","last_unstale":"2026-03-09T15:54:42.726496+0000","last_undegraded":"2026-03-09T15:54:42.726496+0000","last_fullsized":"2026-03-09T15:54:42.726496+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:57:19.014710+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00018030699999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"62'11","reported_seq":49,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:21.409587+0000","last_change":"2026-03-09T15:54:15.366416+0000","last_active":"2026-03-09T15:55:21.409587+0000","last_peered":"2026-03-09T15:55:21.409587+0000","last_clean":"2026-03-09T15:55:21.409587+0000","last_became_active":"2026-03-09T15:54:15.366261+0000","last_became_peered":"2026-03-09T15:54:15.366261+0000","last_unstale":"2026-03-09T15:55:21.409587+0000","last_undegraded":"2026-03-09T15:55:21.409587+0000","last_fullsized":"2026-03-09T15:55:21.409587+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:57:53.622224+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070139+0000","last_change":"2026-03-09T15:54:17.379918+0000","last_active":"2026-03-09T15:54:43.070139+0000","last_peered":"2026-03-09T15:54:43.070139+0000","last_clean":"2026-03-09T15:54:43.070139+0000","last_became_active":"2026-03-09T15:54:17.379722+0000","last_became_peered":"2026-03-09T15:54:17.379722+0000","last_unstale":"2026-03-09T15:54:43.070139+0000","last_undegraded":"2026-03-09T15:54:43.070139+0000","last_fullsized":"2026-03-09T15:54:43.070139+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:04:27.911032+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.7","version":"61'13","reported_seq":54,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.608110+0000","last_change":"2026-03-09T15:54:13.357614+0000","last_active":"2026-03-09T15:54:42.608110+0000","last_peered":"2026-03-09T15:54:42.608110+0000","last_clean":"2026-03-09T15:54:42.608110+0000","last_became_active":"2026-03-09T15:54:13.357443+0000","last_became_peered":"2026-03-09T15:54:13.357443+0000","last_unstale":"2026-03-09T15:54:42.608110+0000","last_undegraded":"2026-03-09T15:54:42.608110+0000","last_fullsized":"2026-03-09T15:54:42.608110+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:19:17.150066+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"54'1","reported_seq":32,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609944+0000","last_change":"2026-03-09T15:54:11.347738+0000","last_active":"2026-03-09T15:54:42.609944+0000","last_peered":"2026-03-09T15:54:42.609944+0000","last_clean":"2026-03-09T15:54:42.609944+0000","last_became_active":"2026-03-09T15:54:11.347617+0000","last_became_peered":"2026-03-09T15:54:11.347617+0000","last_unstale":"2026-03-09T15:54:42.609944+0000","last_undegraded":"2026-03-09T15:54:42.609944+0000","last_fullsized":"2026-03-09T15:54:42.609944+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:48:01.615624+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"62'5","reported_seq":105,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:27.210793+0000","last_change":"2026-03-09T15:54:19.476432+0000","last_active":"2026-03-09T15:55:27.210793+0000","last_peered":"2026-03-09T15:55:27.210793+0000","last_clean":"2026-03-09T15:55:27.210793+0000","last_became_active":"2026-03-09T15:54:13.356955+0000","last_became_peered":"2026-03-09T15:54:13.356955+0000","last_unstale":"2026-03-09T15:55:27.210793+0000","last_undegraded":"2026-03-09T15:55:27.210793+0000","last_fullsized":"2026-03-09T15:55:27.210793+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:36:51.751332+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000456298,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":68,"num_read_kb":63,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725688+0000","last_change":"2026-03-09T15:54:15.369299+0000","last_active":"2026-03-09T15:54:42.725688+0000","last_peered":"2026-03-09T15:54:42.725688+0000","last_clean":"2026-03-09T15:54:42.725688+0000","last_became_active":"2026-03-09T15:54:15.369181+0000","last_became_peered":"2026-03-09T15:54:15.369181+0000","last_unstale":"2026-03-09T15:54:42.725688+0000","last_undegraded":"2026-03-09T15:54:42.725688+0000","last_fullsized":"2026-03-09T15:54:42.725688+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:07:18.415359+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725678+0000","last_change":"2026-03-09T15:54:17.374330+0000","last_active":"2026-03-09T15:54:42.725678+0000","last_peered":"2026-03-09T15:54:42.725678+0000","last_clean":"2026-03-09T15:54:42.725678+0000","last_became_active":"2026-03-09T15:54:17.374141+0000","last_became_peered":"2026-03-09T15:54:17.374141+0000","last_unstale":"2026-03-09T15:54:42.725678+0000","last_undegraded":"2026-03-09T15:54:42.725678+0000","last_fullsized":"2026-03-09T15:54:42.725678+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:02:48.437540+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"62'30","reported_seq":93,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:21.409929+0000","last_change":"2026-03-09T15:54:13.362622+0000","last_active":"2026-03-09T15:55:21.409929+0000","last_peered":"2026-03-09T15:55:21.409929+0000","last_clean":"2026-03-09T15:55:21.409929+0000","last_became_active":"2026-03-09T15:54:13.362110+0000","last_became_peered":"2026-03-09T15:54:13.362110+0000","last_unstale":"2026-03-09T15:55:21.409929+0000","last_undegraded":"2026-03-09T15:55:21.409929+0000","last_fullsized":"2026-03-09T15:55:21.409929+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:40:55.419542+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.5","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070407+0000","last_change":"2026-03-09T15:54:11.362001+0000","last_active":"2026-03-09T15:54:43.070407+0000","last_peered":"2026-03-09T15:54:43.070407+0000","last_clean":"2026-03-09T15:54:43.070407+0000","last_became_active":"2026-03-09T15:54:11.361837+0000","last_became_peered":"2026-03-09T15:54:11.361837+0000","last_unstale":"2026-03-09T15:54:43.070407+0000","last_undegraded":"2026-03-09T15:54:43.070407+0000","last_fullsized":"2026-03-09T15:54:43.070407+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:02:01.367370+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.075346+0000","last_change":"2026-03-09T15:54:15.363894+0000","last_active":"2026-03-09T15:54:43.075346+0000","last_peered":"2026-03-09T15:54:43.075346+0000","last_clean":"2026-03-09T15:54:43.075346+0000","last_became_active":"2026-03-09T15:54:15.363784+0000","last_became_peered":"2026-03-09T15:54:15.363784+0000","last_unstale":"2026-03-09T15:54:43.075346+0000","last_undegraded":"2026-03-09T15:54:43.075346+0000","last_fullsized":"2026-03-09T15:54:43.075346+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:46:06.356956+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609298+0000","last_change":"2026-03-09T15:54:17.377576+0000","last_active":"2026-03-09T15:54:42.609298+0000","last_peered":"2026-03-09T15:54:42.609298+0000","last_clean":"2026-03-09T15:54:42.609298+0000","last_became_active":"2026-03-09T15:54:17.377475+0000","last_became_peered":"2026-03-09T15:54:17.377475+0000","last_unstale":"2026-03-09T15:54:42.609298+0000","last_undegraded":"2026-03-09T15:54:42.609298+0000","last_fullsized":"2026-03-09T15:54:42.609298+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:17:00.658582+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"61'16","reported_seq":65,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:21.408237+0000","last_change":"2026-03-09T15:54:13.357688+0000","last_active":"2026-03-09T15:55:21.408237+0000","last_peered":"2026-03-09T15:55:21.408237+0000","last_clean":"2026-03-09T15:55:21.408237+0000","last_became_active":"2026-03-09T15:54:13.357617+0000","last_became_peered":"2026-03-09T15:54:13.357617+0000","last_unstale":"2026-03-09T15:55:21.408237+0000","last_undegraded":"2026-03-09T15:55:21.408237+0000","last_fullsized":"2026-03-09T15:55:21.408237+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:34:47.013985+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610019+0000","last_change":"2026-03-09T15:54:11.364488+0000","last_active":"2026-03-09T15:54:42.610019+0000","last_peered":"2026-03-09T15:54:42.610019+0000","last_clean":"2026-03-09T15:54:42.610019+0000","last_became_active":"2026-03-09T15:54:11.363805+0000","last_became_peered":"2026-03-09T15:54:11.363805+0000","last_unstale":"2026-03-09T15:54:42.610019+0000","last_undegraded":"2026-03-09T15:54:42.610019+0000","last_fullsized":"2026-03-09T15:54:42.610019+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:11:37.607532+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"62'2","reported_seq":36,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610084+0000","last_change":"2026-03-09T15:54:19.473850+0000","last_active":"2026-03-09T15:54:42.610084+0000","last_peered":"2026-03-09T15:54:42.610084+0000","last_clean":"2026-03-09T15:54:42.610084+0000","last_became_active":"2026-03-09T15:54:13.358993+0000","last_became_peered":"2026-03-09T15:54:13.358993+0000","last_unstale":"2026-03-09T15:54:42.610084+0000","last_undegraded":"2026-03-09T15:54:42.610084+0000","last_fullsized":"2026-03-09T15:54:42.610084+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:25:54.693820+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00059841400000000002,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.3","version":"62'11","reported_seq":49,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:21.409941+0000","last_change":"2026-03-09T15:54:15.383386+0000","last_active":"2026-03-09T15:55:21.409941+0000","last_peered":"2026-03-09T15:55:21.409941+0000","last_clean":"2026-03-09T15:55:21.409941+0000","last_became_active":"2026-03-09T15:54:15.382963+0000","last_became_peered":"2026-03-09T15:54:15.382963+0000","last_unstale":"2026-03-09T15:55:21.409941+0000","last_undegraded":"2026-03-09T15:55:21.409941+0000","last_fullsized":"2026-03-09T15:55:21.409941+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:35:57.637932+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071487+0000","last_change":"2026-03-09T15:54:17.384926+0000","last_active":"2026-03-09T15:54:43.071487+0000","last_peered":"2026-03-09T15:54:43.071487+0000","last_clean":"2026-03-09T15:54:43.071487+0000","last_became_active":"2026-03-09T15:54:17.384399+0000","last_became_peered":"2026-03-09T15:54:17.384399+0000","last_unstale":"2026-03-09T15:54:43.071487+0000","last_undegraded":"2026-03-09T15:54:43.071487+0000","last_fullsized":"2026-03-09T15:54:43.071487+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:26:38.376356+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"61'19","reported_seq":63,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725458+0000","last_change":"2026-03-09T15:54:13.361579+0000","last_active":"2026-03-09T15:54:42.725458+0000","last_peered":"2026-03-09T15:54:42.725458+0000","last_clean":"2026-03-09T15:54:42.725458+0000","last_became_active":"2026-03-09T15:54:13.361434+0000","last_became_peered":"2026-03-09T15:54:13.361434+0000","last_unstale":"2026-03-09T15:54:42.725458+0000","last_undegraded":"2026-03-09T15:54:42.725458+0000","last_fullsized":"2026-03-09T15:54:42.725458+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:24:22.376482+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612390+0000","last_change":"2026-03-09T15:54:11.349770+0000","last_active":"2026-03-09T15:54:42.612390+0000","last_peered":"2026-03-09T15:54:42.612390+0000","last_clean":"2026-03-09T15:54:42.612390+0000","last_became_active":"2026-03-09T15:54:11.349631+0000","last_became_peered":"2026-03-09T15:54:11.349631+0000","last_unstale":"2026-03-09T15:54:42.612390+0000","last_undegraded":"2026-03-09T15:54:42.612390+0000","last_fullsized":"2026-03-09T15:54:42.612390+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:39:36.701559+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071651+0000","last_change":"2026-03-09T15:54:15.383483+0000","last_active":"2026-03-09T15:54:43.071651+0000","last_peered":"2026-03-09T15:54:43.071651+0000","last_clean":"2026-03-09T15:54:43.071651+0000","last_became_active":"2026-03-09T15:54:15.383127+0000","last_became_peered":"2026-03-09T15:54:15.383127+0000","last_unstale":"2026-03-09T15:54:43.071651+0000","last_undegraded":"2026-03-09T15:54:43.071651+0000","last_fullsized":"2026-03-09T15:54:43.071651+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:54:07.441803+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606614+0000","last_change":"2026-03-09T15:54:17.385586+0000","last_active":"2026-03-09T15:54:42.606614+0000","last_peered":"2026-03-09T15:54:42.606614+0000","last_clean":"2026-03-09T15:54:42.606614+0000","last_became_active":"2026-03-09T15:54:17.379501+0000","last_became_peered":"2026-03-09T15:54:17.379501+0000","last_unstale":"2026-03-09T15:54:42.606614+0000","last_undegraded":"2026-03-09T15:54:42.606614+0000","last_fullsized":"2026-03-09T15:54:42.606614+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:16:22.811449+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","version":"61'18","reported_seq":59,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609404+0000","last_change":"2026-03-09T15:54:13.362741+0000","last_active":"2026-03-09T15:54:42.609404+0000","last_peered":"2026-03-09T15:54:42.609404+0000","last_clean":"2026-03-09T15:54:42.609404+0000","last_became_active":"2026-03-09T15:54:13.362250+0000","last_became_peered":"2026-03-09T15:54:13.362250+0000","last_unstale":"2026-03-09T15:54:42.609404+0000","last_undegraded":"2026-03-09T15:54:42.609404+0000","last_fullsized":"2026-03-09T15:54:42.609404+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:58:53.163207+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073422+0000","last_change":"2026-03-09T15:54:11.346346+0000","last_active":"2026-03-09T15:54:43.073422+0000","last_peered":"2026-03-09T15:54:43.073422+0000","last_clean":"2026-03-09T15:54:43.073422+0000","last_became_active":"2026-03-09T15:54:11.346081+0000","last_became_peered":"2026-03-09T15:54:11.346081+0000","last_unstale":"2026-03-09T15:54:43.073422+0000","last_undegraded":"2026-03-09T15:54:43.073422+0000","last_fullsized":"2026-03-09T15:54:43.073422+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:38:59.379815+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073504+0000","last_change":"2026-03-09T15:54:15.387229+0000","last_active":"2026-03-09T15:54:43.073504+0000","last_peered":"2026-03-09T15:54:43.073504+0000","last_clean":"2026-03-09T15:54:43.073504+0000","last_became_active":"2026-03-09T15:54:15.377757+0000","last_became_peered":"2026-03-09T15:54:15.377757+0000","last_unstale":"2026-03-09T15:54:43.073504+0000","last_undegraded":"2026-03-09T15:54:43.073504+0000","last_fullsized":"2026-03-09T15:54:43.073504+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:54:46.791680+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071621+0000","last_change":"2026-03-09T15:54:17.389850+0000","last_active":"2026-03-09T15:54:43.071621+0000","last_peered":"2026-03-09T15:54:43.071621+0000","last_clean":"2026-03-09T15:54:43.071621+0000","last_became_active":"2026-03-09T15:54:17.389614+0000","last_became_peered":"2026-03-09T15:54:17.389614+0000","last_unstale":"2026-03-09T15:54:43.071621+0000","last_undegraded":"2026-03-09T15:54:43.071621+0000","last_fullsized":"2026-03-09T15:54:43.071621+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:29:45.220106+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"62'14","reported_seq":48,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072243+0000","last_change":"2026-03-09T15:54:13.362624+0000","last_active":"2026-03-09T15:54:43.072243+0000","last_peered":"2026-03-09T15:54:43.072243+0000","last_clean":"2026-03-09T15:54:43.072243+0000","last_became_active":"2026-03-09T15:54:13.354383+0000","last_became_peered":"2026-03-09T15:54:13.354383+0000","last_unstale":"2026-03-09T15:54:43.072243+0000","last_undegraded":"2026-03-09T15:54:43.072243+0000","last_fullsized":"2026-03-09T15:54:43.072243+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:04:11.930205+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070780+0000","last_change":"2026-03-09T15:54:11.365192+0000","last_active":"2026-03-09T15:54:43.070780+0000","last_peered":"2026-03-09T15:54:43.070780+0000","last_clean":"2026-03-09T15:54:43.070780+0000","last_became_active":"2026-03-09T15:54:11.362397+0000","last_became_peered":"2026-03-09T15:54:11.362397+0000","last_unstale":"2026-03-09T15:54:43.070780+0000","last_undegraded":"2026-03-09T15:54:43.070780+0000","last_fullsized":"2026-03-09T15:54:43.070780+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:58:01.912069+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612683+0000","last_change":"2026-03-09T15:54:15.383701+0000","last_active":"2026-03-09T15:54:42.612683+0000","last_peered":"2026-03-09T15:54:42.612683+0000","last_clean":"2026-03-09T15:54:42.612683+0000","last_became_active":"2026-03-09T15:54:15.383597+0000","last_became_peered":"2026-03-09T15:54:15.383597+0000","last_unstale":"2026-03-09T15:54:42.612683+0000","last_undegraded":"2026-03-09T15:54:42.612683+0000","last_fullsized":"2026-03-09T15:54:42.612683+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:36:56.046748+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609737+0000","last_change":"2026-03-09T15:54:17.391377+0000","last_active":"2026-03-09T15:54:42.609737+0000","last_peered":"2026-03-09T15:54:42.609737+0000","last_clean":"2026-03-09T15:54:42.609737+0000","last_became_active":"2026-03-09T15:54:17.391273+0000","last_became_peered":"2026-03-09T15:54:17.391273+0000","last_unstale":"2026-03-09T15:54:42.609737+0000","last_undegraded":"2026-03-09T15:54:42.609737+0000","last_fullsized":"2026-03-09T15:54:42.609737+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:12:34.558535+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"61'10","reported_seq":42,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.608028+0000","last_change":"2026-03-09T15:54:13.357209+0000","last_active":"2026-03-09T15:54:42.608028+0000","last_peered":"2026-03-09T15:54:42.608028+0000","last_clean":"2026-03-09T15:54:42.608028+0000","last_became_active":"2026-03-09T15:54:13.356177+0000","last_became_peered":"2026-03-09T15:54:13.356177+0000","last_unstale":"2026-03-09T15:54:42.608028+0000","last_undegraded":"2026-03-09T15:54:42.608028+0000","last_fullsized":"2026-03-09T15:54:42.608028+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:32:54.942043+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611562+0000","last_change":"2026-03-09T15:54:11.363282+0000","last_active":"2026-03-09T15:54:42.611562+0000","last_peered":"2026-03-09T15:54:42.611562+0000","last_clean":"2026-03-09T15:54:42.611562+0000","last_became_active":"2026-03-09T15:54:11.363009+0000","last_became_peered":"2026-03-09T15:54:11.363009+0000","last_unstale":"2026-03-09T15:54:42.611562+0000","last_undegraded":"2026-03-09T15:54:42.611562+0000","last_fullsized":"2026-03-09T15:54:42.611562+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:26:38.451615+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"64'39","reported_seq":66,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:44.722666+0000","last_change":"2026-03-09T15:53:51.503795+0000","last_active":"2026-03-09T15:54:44.722666+0000","last_peered":"2026-03-09T15:54:44.722666+0000","last_clean":"2026-03-09T15:54:44.722666+0000","last_became_active":"2026-03-09T15:53:51.497232+0000","last_became_peered":"2026-03-09T15:53:51.497232+0000","last_unstale":"2026-03-09T15:54:44.722666+0000","last_undegraded":"2026-03-09T15:54:44.722666+0000","last_fullsized":"2026-03-09T15:54:44.722666+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:51:01.337312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:51:01.337312+0000","last_clean_scrub_stamp":"2026-03-09T15:51:01.337312+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:13:46.716016+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070724+0000","last_change":"2026-03-09T15:54:15.386706+0000","last_active":"2026-03-09T15:54:43.070724+0000","last_peered":"2026-03-09T15:54:43.070724+0000","last_clean":"2026-03-09T15:54:43.070724+0000","last_became_active":"2026-03-09T15:54:15.386260+0000","last_became_peered":"2026-03-09T15:54:15.386260+0000","last_unstale":"2026-03-09T15:54:43.070724+0000","last_undegraded":"2026-03-09T15:54:43.070724+0000","last_fullsized":"2026-03-09T15:54:43.070724+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:18:52.112469+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611485+0000","last_change":"2026-03-09T15:54:17.383779+0000","last_active":"2026-03-09T15:54:42.611485+0000","last_peered":"2026-03-09T15:54:42.611485+0000","last_clean":"2026-03-09T15:54:42.611485+0000","last_became_active":"2026-03-09T15:54:17.377406+0000","last_became_peered":"2026-03-09T15:54:17.377406+0000","last_unstale":"2026-03-09T15:54:42.611485+0000","last_undegraded":"2026-03-09T15:54:42.611485+0000","last_fullsized":"2026-03-09T15:54:42.611485+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:01:05.220739+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"61'17","reported_seq":55,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.069888+0000","last_change":"2026-03-09T15:54:13.354702+0000","last_active":"2026-03-09T15:54:43.069888+0000","last_peered":"2026-03-09T15:54:43.069888+0000","last_clean":"2026-03-09T15:54:43.069888+0000","last_became_active":"2026-03-09T15:54:13.354434+0000","last_became_peered":"2026-03-09T15:54:13.354434+0000","last_unstale":"2026-03-09T15:54:43.069888+0000","last_undegraded":"2026-03-09T15:54:43.069888+0000","last_fullsized":"2026-03-09T15:54:43.069888+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:26:03.884511+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073470+0000","last_change":"2026-03-09T15:54:11.345850+0000","last_active":"2026-03-09T15:54:43.073470+0000","last_peered":"2026-03-09T15:54:43.073470+0000","last_clean":"2026-03-09T15:54:43.073470+0000","last_became_active":"2026-03-09T15:54:11.345595+0000","last_became_peered":"2026-03-09T15:54:11.345595+0000","last_unstale":"2026-03-09T15:54:43.073470+0000","last_undegraded":"2026-03-09T15:54:43.073470+0000","last_fullsized":"2026-03-09T15:54:43.073470+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:55:46.131966+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073394+0000","last_change":"2026-03-09T15:54:15.378750+0000","last_active":"2026-03-09T15:54:43.073394+0000","last_peered":"2026-03-09T15:54:43.073394+0000","last_clean":"2026-03-09T15:54:43.073394+0000","last_became_active":"2026-03-09T15:54:15.378016+0000","last_became_peered":"2026-03-09T15:54:15.378016+0000","last_unstale":"2026-03-09T15:54:43.073394+0000","last_undegraded":"2026-03-09T15:54:43.073394+0000","last_fullsized":"2026-03-09T15:54:43.073394+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:47:00.475914+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.069916+0000","last_change":"2026-03-09T15:54:17.384669+0000","last_active":"2026-03-09T15:54:43.069916+0000","last_peered":"2026-03-09T15:54:43.069916+0000","last_clean":"2026-03-09T15:54:43.069916+0000","last_became_active":"2026-03-09T15:54:17.384573+0000","last_became_peered":"2026-03-09T15:54:17.384573+0000","last_unstale":"2026-03-09T15:54:43.069916+0000","last_undegraded":"2026-03-09T15:54:43.069916+0000","last_fullsized":"2026-03-09T15:54:43.069916+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:17:01.831418+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"61'10","reported_seq":42,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612431+0000","last_change":"2026-03-09T15:54:13.357440+0000","last_active":"2026-03-09T15:54:42.612431+0000","last_peered":"2026-03-09T15:54:42.612431+0000","last_clean":"2026-03-09T15:54:42.612431+0000","last_became_active":"2026-03-09T15:54:13.357356+0000","last_became_peered":"2026-03-09T15:54:13.357356+0000","last_unstale":"2026-03-09T15:54:42.612431+0000","last_undegraded":"2026-03-09T15:54:42.612431+0000","last_fullsized":"2026-03-09T15:54:42.612431+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:01:58.187263+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609805+0000","last_change":"2026-03-09T15:54:11.342596+0000","last_active":"2026-03-09T15:54:42.609805+0000","last_peered":"2026-03-09T15:54:42.609805+0000","last_clean":"2026-03-09T15:54:42.609805+0000","last_became_active":"2026-03-09T15:54:11.341331+0000","last_became_peered":"2026-03-09T15:54:11.341331+0000","last_unstale":"2026-03-09T15:54:42.609805+0000","last_undegraded":"2026-03-09T15:54:42.609805+0000","last_fullsized":"2026-03-09T15:54:42.609805+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:57:09.711591+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073832+0000","last_change":"2026-03-09T15:54:15.380894+0000","last_active":"2026-03-09T15:54:43.073832+0000","last_peered":"2026-03-09T15:54:43.073832+0000","last_clean":"2026-03-09T15:54:43.073832+0000","last_became_active":"2026-03-09T15:54:15.380635+0000","last_became_peered":"2026-03-09T15:54:15.380635+0000","last_unstale":"2026-03-09T15:54:43.073832+0000","last_undegraded":"2026-03-09T15:54:43.073832+0000","last_fullsized":"2026-03-09T15:54:43.073832+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:41:01.786993+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070955+0000","last_change":"2026-03-09T15:54:17.376658+0000","last_active":"2026-03-09T15:54:43.070955+0000","last_peered":"2026-03-09T15:54:43.070955+0000","last_clean":"2026-03-09T15:54:43.070955+0000","last_became_active":"2026-03-09T15:54:17.376340+0000","last_became_peered":"2026-03-09T15:54:17.376340+0000","last_unstale":"2026-03-09T15:54:43.070955+0000","last_undegraded":"2026-03-09T15:54:43.070955+0000","last_fullsized":"2026-03-09T15:54:43.070955+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:06:31.736886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","version":"62'15","reported_seq":52,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070861+0000","last_change":"2026-03-09T15:54:13.354792+0000","last_active":"2026-03-09T15:54:43.070861+0000","last_peered":"2026-03-09T15:54:43.070861+0000","last_clean":"2026-03-09T15:54:43.070861+0000","last_became_active":"2026-03-09T15:54:13.354551+0000","last_became_peered":"2026-03-09T15:54:13.354551+0000","last_unstale":"2026-03-09T15:54:43.070861+0000","last_undegraded":"2026-03-09T15:54:43.070861+0000","last_fullsized":"2026-03-09T15:54:43.070861+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:23:28.330477+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073333+0000","last_change":"2026-03-09T15:54:11.359936+0000","last_active":"2026-03-09T15:54:43.073333+0000","last_peered":"2026-03-09T15:54:43.073333+0000","last_clean":"2026-03-09T15:54:43.073333+0000","last_became_active":"2026-03-09T15:54:11.359674+0000","last_became_peered":"2026-03-09T15:54:11.359674+0000","last_unstale":"2026-03-09T15:54:43.073333+0000","last_undegraded":"2026-03-09T15:54:43.073333+0000","last_fullsized":"2026-03-09T15:54:43.073333+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:14:35.621046+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"62'11","reported_seq":49,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:21.408134+0000","last_change":"2026-03-09T15:54:15.377490+0000","last_active":"2026-03-09T15:55:21.408134+0000","last_peered":"2026-03-09T15:55:21.408134+0000","last_clean":"2026-03-09T15:55:21.408134+0000","last_became_active":"2026-03-09T15:54:15.376887+0000","last_became_peered":"2026-03-09T15:54:15.376887+0000","last_unstale":"2026-03-09T15:55:21.408134+0000","last_undegraded":"2026-03-09T15:55:21.408134+0000","last_fullsized":"2026-03-09T15:55:21.408134+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:53:19.135979+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609820+0000","last_change":"2026-03-09T15:54:17.384058+0000","last_active":"2026-03-09T15:54:42.609820+0000","last_peered":"2026-03-09T15:54:42.609820+0000","last_clean":"2026-03-09T15:54:42.609820+0000","last_became_active":"2026-03-09T15:54:17.377722+0000","last_became_peered":"2026-03-09T15:54:17.377722+0000","last_unstale":"2026-03-09T15:54:42.609820+0000","last_undegraded":"2026-03-09T15:54:42.609820+0000","last_fullsized":"2026-03-09T15:54:42.609820+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:22:47.509069+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"62'11","reported_seq":46,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070203+0000","last_change":"2026-03-09T15:54:13.351089+0000","last_active":"2026-03-09T15:54:43.070203+0000","last_peered":"2026-03-09T15:54:43.070203+0000","last_clean":"2026-03-09T15:54:43.070203+0000","last_became_active":"2026-03-09T15:54:13.350994+0000","last_became_peered":"2026-03-09T15:54:13.350994+0000","last_unstale":"2026-03-09T15:54:43.070203+0000","last_undegraded":"2026-03-09T15:54:43.070203+0000","last_fullsized":"2026-03-09T15:54:43.070203+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:10:39.839271+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"54'3","reported_seq":55,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725776+0000","last_change":"2026-03-09T15:54:11.357983+0000","last_active":"2026-03-09T15:54:42.725776+0000","last_peered":"2026-03-09T15:54:42.725776+0000","last_clean":"2026-03-09T15:54:42.725776+0000","last_became_active":"2026-03-09T15:54:11.357812+0000","last_became_peered":"2026-03-09T15:54:11.357812+0000","last_unstale":"2026-03-09T15:54:42.725776+0000","last_undegraded":"2026-03-09T15:54:42.725776+0000","last_fullsized":"2026-03-09T15:54:42.725776+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":3,"log_dups_size":0,"ondisk_log_size":3,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:53:07.452340+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":1085,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":24,"num_read_kb":24,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073717+0000","last_change":"2026-03-09T15:54:15.380796+0000","last_active":"2026-03-09T15:54:43.073717+0000","last_peered":"2026-03-09T15:54:43.073717+0000","last_clean":"2026-03-09T15:54:43.073717+0000","last_became_active":"2026-03-09T15:54:15.380402+0000","last_became_peered":"2026-03-09T15:54:15.380402+0000","last_unstale":"2026-03-09T15:54:43.073717+0000","last_undegraded":"2026-03-09T15:54:43.073717+0000","last_fullsized":"2026-03-09T15:54:43.073717+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:17:26.492706+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606518+0000","last_change":"2026-03-09T15:54:17.386081+0000","last_active":"2026-03-09T15:54:42.606518+0000","last_peered":"2026-03-09T15:54:42.606518+0000","last_clean":"2026-03-09T15:54:42.606518+0000","last_became_active":"2026-03-09T15:54:17.385979+0000","last_became_peered":"2026-03-09T15:54:17.385979+0000","last_unstale":"2026-03-09T15:54:42.606518+0000","last_undegraded":"2026-03-09T15:54:42.606518+0000","last_fullsized":"2026-03-09T15:54:42.606518+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:41:44.332789+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"62'11","reported_seq":46,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.069831+0000","last_change":"2026-03-09T15:54:13.354701+0000","last_active":"2026-03-09T15:54:43.069831+0000","last_peered":"2026-03-09T15:54:43.069831+0000","last_clean":"2026-03-09T15:54:43.069831+0000","last_became_active":"2026-03-09T15:54:13.354275+0000","last_became_peered":"2026-03-09T15:54:13.354275+0000","last_unstale":"2026-03-09T15:54:43.069831+0000","last_undegraded":"2026-03-09T15:54:43.069831+0000","last_fullsized":"2026-03-09T15:54:43.069831+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:46:13.709201+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.10","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073248+0000","last_change":"2026-03-09T15:54:11.346351+0000","last_active":"2026-03-09T15:54:43.073248+0000","last_peered":"2026-03-09T15:54:43.073248+0000","last_clean":"2026-03-09T15:54:43.073248+0000","last_became_active":"2026-03-09T15:54:11.345897+0000","last_became_peered":"2026-03-09T15:54:11.345897+0000","last_unstale":"2026-03-09T15:54:43.073248+0000","last_undegraded":"2026-03-09T15:54:43.073248+0000","last_fullsized":"2026-03-09T15:54:43.073248+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:12:18.097678+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606737+0000","last_change":"2026-03-09T15:54:15.372775+0000","last_active":"2026-03-09T15:54:42.606737+0000","last_peered":"2026-03-09T15:54:42.606737+0000","last_clean":"2026-03-09T15:54:42.606737+0000","last_became_active":"2026-03-09T15:54:15.372480+0000","last_became_peered":"2026-03-09T15:54:15.372480+0000","last_unstale":"2026-03-09T15:54:42.606737+0000","last_undegraded":"2026-03-09T15:54:42.606737+0000","last_fullsized":"2026-03-09T15:54:42.606737+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:42:23.783965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.073279+0000","last_change":"2026-03-09T15:54:17.382329+0000","last_active":"2026-03-09T15:54:43.073279+0000","last_peered":"2026-03-09T15:54:43.073279+0000","last_clean":"2026-03-09T15:54:43.073279+0000","last_became_active":"2026-03-09T15:54:17.382204+0000","last_became_peered":"2026-03-09T15:54:17.382204+0000","last_unstale":"2026-03-09T15:54:43.073279+0000","last_undegraded":"2026-03-09T15:54:43.073279+0000","last_fullsized":"2026-03-09T15:54:43.073279+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:53:02.263771+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","version":"61'4","reported_seq":33,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.075048+0000","last_change":"2026-03-09T15:54:13.372099+0000","last_active":"2026-03-09T15:54:43.075048+0000","last_peered":"2026-03-09T15:54:43.075048+0000","last_clean":"2026-03-09T15:54:43.075048+0000","last_became_active":"2026-03-09T15:54:13.371765+0000","last_became_peered":"2026-03-09T15:54:13.371765+0000","last_unstale":"2026-03-09T15:54:43.075048+0000","last_undegraded":"2026-03-09T15:54:43.075048+0000","last_fullsized":"2026-03-09T15:54:43.075048+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:15:07.689265+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.075024+0000","last_change":"2026-03-09T15:54:11.347507+0000","last_active":"2026-03-09T15:54:43.075024+0000","last_peered":"2026-03-09T15:54:43.075024+0000","last_clean":"2026-03-09T15:54:43.075024+0000","last_became_active":"2026-03-09T15:54:11.347321+0000","last_became_peered":"2026-03-09T15:54:11.347321+0000","last_unstale":"2026-03-09T15:54:43.075024+0000","last_undegraded":"2026-03-09T15:54:43.075024+0000","last_fullsized":"2026-03-09T15:54:43.075024+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:50:28.524571+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612774+0000","last_change":"2026-03-09T15:54:15.373252+0000","last_active":"2026-03-09T15:54:42.612774+0000","last_peered":"2026-03-09T15:54:42.612774+0000","last_clean":"2026-03-09T15:54:42.612774+0000","last_became_active":"2026-03-09T15:54:15.373123+0000","last_became_peered":"2026-03-09T15:54:15.373123+0000","last_unstale":"2026-03-09T15:54:42.612774+0000","last_undegraded":"2026-03-09T15:54:42.612774+0000","last_fullsized":"2026-03-09T15:54:42.612774+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:42:00.869112+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071694+0000","last_change":"2026-03-09T15:54:17.392767+0000","last_active":"2026-03-09T15:54:43.071694+0000","last_peered":"2026-03-09T15:54:43.071694+0000","last_clean":"2026-03-09T15:54:43.071694+0000","last_became_active":"2026-03-09T15:54:17.391362+0000","last_became_peered":"2026-03-09T15:54:17.391362+0000","last_unstale":"2026-03-09T15:54:43.071694+0000","last_undegraded":"2026-03-09T15:54:43.071694+0000","last_fullsized":"2026-03-09T15:54:43.071694+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:53:11.991377+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"61'11","reported_seq":46,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071065+0000","last_change":"2026-03-09T15:54:13.345821+0000","last_active":"2026-03-09T15:54:43.071065+0000","last_peered":"2026-03-09T15:54:43.071065+0000","last_clean":"2026-03-09T15:54:43.071065+0000","last_became_active":"2026-03-09T15:54:13.345704+0000","last_became_peered":"2026-03-09T15:54:13.345704+0000","last_unstale":"2026-03-09T15:54:43.071065+0000","last_undegraded":"2026-03-09T15:54:43.071065+0000","last_fullsized":"2026-03-09T15:54:43.071065+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:55:41.783650+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612127+0000","last_change":"2026-03-09T15:54:11.363414+0000","last_active":"2026-03-09T15:54:42.612127+0000","last_peered":"2026-03-09T15:54:42.612127+0000","last_clean":"2026-03-09T15:54:42.612127+0000","last_became_active":"2026-03-09T15:54:11.362873+0000","last_became_peered":"2026-03-09T15:54:11.362873+0000","last_unstale":"2026-03-09T15:54:42.612127+0000","last_undegraded":"2026-03-09T15:54:42.612127+0000","last_fullsized":"2026-03-09T15:54:42.612127+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:00:46.857159+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.15","version":"62'11","reported_seq":49,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:21.408133+0000","last_change":"2026-03-09T15:54:15.385258+0000","last_active":"2026-03-09T15:55:21.408133+0000","last_peered":"2026-03-09T15:55:21.408133+0000","last_clean":"2026-03-09T15:55:21.408133+0000","last_became_active":"2026-03-09T15:54:15.385077+0000","last_became_peered":"2026-03-09T15:54:15.385077+0000","last_unstale":"2026-03-09T15:55:21.408133+0000","last_undegraded":"2026-03-09T15:55:21.408133+0000","last_fullsized":"2026-03-09T15:55:21.408133+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:16:17.940588+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071075+0000","last_change":"2026-03-09T15:54:17.384611+0000","last_active":"2026-03-09T15:54:43.071075+0000","last_peered":"2026-03-09T15:54:43.071075+0000","last_clean":"2026-03-09T15:54:43.071075+0000","last_became_active":"2026-03-09T15:54:17.384473+0000","last_became_peered":"2026-03-09T15:54:17.384473+0000","last_unstale":"2026-03-09T15:54:43.071075+0000","last_undegraded":"2026-03-09T15:54:43.071075+0000","last_fullsized":"2026-03-09T15:54:43.071075+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:10:41.697037+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071386+0000","last_change":"2026-03-09T15:54:13.360501+0000","last_active":"2026-03-09T15:54:43.071386+0000","last_peered":"2026-03-09T15:54:43.071386+0000","last_clean":"2026-03-09T15:54:43.071386+0000","last_became_active":"2026-03-09T15:54:13.354566+0000","last_became_peered":"2026-03-09T15:54:13.354566+0000","last_unstale":"2026-03-09T15:54:43.071386+0000","last_undegraded":"2026-03-09T15:54:43.071386+0000","last_fullsized":"2026-03-09T15:54:43.071386+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:28:10.760258+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.13","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071336+0000","last_change":"2026-03-09T15:54:11.351023+0000","last_active":"2026-03-09T15:54:43.071336+0000","last_peered":"2026-03-09T15:54:43.071336+0000","last_clean":"2026-03-09T15:54:43.071336+0000","last_became_active":"2026-03-09T15:54:11.350784+0000","last_became_peered":"2026-03-09T15:54:11.350784+0000","last_unstale":"2026-03-09T15:54:43.071336+0000","last_undegraded":"2026-03-09T15:54:43.071336+0000","last_fullsized":"2026-03-09T15:54:43.071336+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:05:15.491466+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"62'11","reported_seq":52,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:21.409659+0000","last_change":"2026-03-09T15:54:15.372812+0000","last_active":"2026-03-09T15:55:21.409659+0000","last_peered":"2026-03-09T15:55:21.409659+0000","last_clean":"2026-03-09T15:55:21.409659+0000","last_became_active":"2026-03-09T15:54:15.372568+0000","last_became_peered":"2026-03-09T15:54:15.372568+0000","last_unstale":"2026-03-09T15:55:21.409659+0000","last_undegraded":"2026-03-09T15:55:21.409659+0000","last_fullsized":"2026-03-09T15:55:21.409659+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:38:50.990948+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725622+0000","last_change":"2026-03-09T15:54:17.390758+0000","last_active":"2026-03-09T15:54:42.725622+0000","last_peered":"2026-03-09T15:54:42.725622+0000","last_clean":"2026-03-09T15:54:42.725622+0000","last_became_active":"2026-03-09T15:54:17.388401+0000","last_became_peered":"2026-03-09T15:54:17.388401+0000","last_unstale":"2026-03-09T15:54:42.725622+0000","last_undegraded":"2026-03-09T15:54:42.725622+0000","last_fullsized":"2026-03-09T15:54:42.725622+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:09:23.742975+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.070090+0000","last_change":"2026-03-09T15:54:13.346205+0000","last_active":"2026-03-09T15:54:43.070090+0000","last_peered":"2026-03-09T15:54:43.070090+0000","last_clean":"2026-03-09T15:54:43.070090+0000","last_became_active":"2026-03-09T15:54:13.346112+0000","last_became_peered":"2026-03-09T15:54:13.346112+0000","last_unstale":"2026-03-09T15:54:43.070090+0000","last_undegraded":"2026-03-09T15:54:43.070090+0000","last_fullsized":"2026-03-09T15:54:43.070090+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:32:03.527436+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072944+0000","last_change":"2026-03-09T15:54:11.355245+0000","last_active":"2026-03-09T15:54:43.072944+0000","last_peered":"2026-03-09T15:54:43.072944+0000","last_clean":"2026-03-09T15:54:43.072944+0000","last_became_active":"2026-03-09T15:54:11.355030+0000","last_became_peered":"2026-03-09T15:54:11.355030+0000","last_unstale":"2026-03-09T15:54:43.072944+0000","last_undegraded":"2026-03-09T15:54:43.072944+0000","last_fullsized":"2026-03-09T15:54:43.072944+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:34:03.269455+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607585+0000","last_change":"2026-03-09T15:54:15.363680+0000","last_active":"2026-03-09T15:54:42.607585+0000","last_peered":"2026-03-09T15:54:42.607585+0000","last_clean":"2026-03-09T15:54:42.607585+0000","last_became_active":"2026-03-09T15:54:15.363338+0000","last_became_peered":"2026-03-09T15:54:15.363338+0000","last_unstale":"2026-03-09T15:54:42.607585+0000","last_undegraded":"2026-03-09T15:54:42.607585+0000","last_fullsized":"2026-03-09T15:54:42.607585+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:28:47.491881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"61'1","reported_seq":20,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071199+0000","last_change":"2026-03-09T15:54:17.370138+0000","last_active":"2026-03-09T15:54:43.071199+0000","last_peered":"2026-03-09T15:54:43.071199+0000","last_clean":"2026-03-09T15:54:43.071199+0000","last_became_active":"2026-03-09T15:54:17.370019+0000","last_became_peered":"2026-03-09T15:54:17.370019+0000","last_unstale":"2026-03-09T15:54:43.071199+0000","last_undegraded":"2026-03-09T15:54:43.071199+0000","last_fullsized":"2026-03-09T15:54:43.071199+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:23:08.190448+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"61'10","reported_seq":42,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726199+0000","last_change":"2026-03-09T15:54:13.353249+0000","last_active":"2026-03-09T15:54:42.726199+0000","last_peered":"2026-03-09T15:54:42.726199+0000","last_clean":"2026-03-09T15:54:42.726199+0000","last_became_active":"2026-03-09T15:54:13.352935+0000","last_became_peered":"2026-03-09T15:54:13.352935+0000","last_unstale":"2026-03-09T15:54:42.726199+0000","last_undegraded":"2026-03-09T15:54:42.726199+0000","last_fullsized":"2026-03-09T15:54:42.726199+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:47:44.870051+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609545+0000","last_change":"2026-03-09T15:54:11.343303+0000","last_active":"2026-03-09T15:54:42.609545+0000","last_peered":"2026-03-09T15:54:42.609545+0000","last_clean":"2026-03-09T15:54:42.609545+0000","last_became_active":"2026-03-09T15:54:11.342666+0000","last_became_peered":"2026-03-09T15:54:11.342666+0000","last_unstale":"2026-03-09T15:54:42.609545+0000","last_undegraded":"2026-03-09T15:54:42.609545+0000","last_fullsized":"2026-03-09T15:54:42.609545+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:07:34.463689+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.609574+0000","last_change":"2026-03-09T15:54:15.367270+0000","last_active":"2026-03-09T15:54:42.609574+0000","last_peered":"2026-03-09T15:54:42.609574+0000","last_clean":"2026-03-09T15:54:42.609574+0000","last_became_active":"2026-03-09T15:54:15.366936+0000","last_became_peered":"2026-03-09T15:54:15.366936+0000","last_unstale":"2026-03-09T15:54:42.609574+0000","last_undegraded":"2026-03-09T15:54:42.609574+0000","last_fullsized":"2026-03-09T15:54:42.609574+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:00:10.712160+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606937+0000","last_change":"2026-03-09T15:54:17.375349+0000","last_active":"2026-03-09T15:54:42.606937+0000","last_peered":"2026-03-09T15:54:42.606937+0000","last_clean":"2026-03-09T15:54:42.606937+0000","last_became_active":"2026-03-09T15:54:17.370883+0000","last_became_peered":"2026-03-09T15:54:17.370883+0000","last_unstale":"2026-03-09T15:54:42.606937+0000","last_undegraded":"2026-03-09T15:54:42.606937+0000","last_fullsized":"2026-03-09T15:54:42.606937+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:51:05.623976+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"61'6","reported_seq":36,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072381+0000","last_change":"2026-03-09T15:54:13.360796+0000","last_active":"2026-03-09T15:54:43.072381+0000","last_peered":"2026-03-09T15:54:43.072381+0000","last_clean":"2026-03-09T15:54:43.072381+0000","last_became_active":"2026-03-09T15:54:13.354911+0000","last_became_peered":"2026-03-09T15:54:13.354911+0000","last_unstale":"2026-03-09T15:54:43.072381+0000","last_undegraded":"2026-03-09T15:54:43.072381+0000","last_fullsized":"2026-03-09T15:54:43.072381+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:41:05.489552+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612044+0000","last_change":"2026-03-09T15:54:11.352011+0000","last_active":"2026-03-09T15:54:42.612044+0000","last_peered":"2026-03-09T15:54:42.612044+0000","last_clean":"2026-03-09T15:54:42.612044+0000","last_became_active":"2026-03-09T15:54:11.351402+0000","last_became_peered":"2026-03-09T15:54:11.351402+0000","last_unstale":"2026-03-09T15:54:42.612044+0000","last_undegraded":"2026-03-09T15:54:42.612044+0000","last_fullsized":"2026-03-09T15:54:42.612044+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:03:58.213655+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072526+0000","last_change":"2026-03-09T15:54:15.370067+0000","last_active":"2026-03-09T15:54:43.072526+0000","last_peered":"2026-03-09T15:54:43.072526+0000","last_clean":"2026-03-09T15:54:43.072526+0000","last_became_active":"2026-03-09T15:54:15.369839+0000","last_became_peered":"2026-03-09T15:54:15.369839+0000","last_unstale":"2026-03-09T15:54:43.072526+0000","last_undegraded":"2026-03-09T15:54:43.072526+0000","last_fullsized":"2026-03-09T15:54:43.072526+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:47:24.327604+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071550+0000","last_change":"2026-03-09T15:54:17.391254+0000","last_active":"2026-03-09T15:54:43.071550+0000","last_peered":"2026-03-09T15:54:43.071550+0000","last_clean":"2026-03-09T15:54:43.071550+0000","last_became_active":"2026-03-09T15:54:17.391053+0000","last_became_peered":"2026-03-09T15:54:17.391053+0000","last_unstale":"2026-03-09T15:54:43.071550+0000","last_undegraded":"2026-03-09T15:54:43.071550+0000","last_fullsized":"2026-03-09T15:54:43.071550+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:33:26.486858+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.611668+0000","last_change":"2026-03-09T15:54:13.351604+0000","last_active":"2026-03-09T15:54:42.611668+0000","last_peered":"2026-03-09T15:54:42.611668+0000","last_clean":"2026-03-09T15:54:42.611668+0000","last_became_active":"2026-03-09T15:54:13.351393+0000","last_became_peered":"2026-03-09T15:54:13.351393+0000","last_unstale":"2026-03-09T15:54:42.611668+0000","last_undegraded":"2026-03-09T15:54:42.611668+0000","last_fullsized":"2026-03-09T15:54:42.611668+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:05:09.857590+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072827+0000","last_change":"2026-03-09T15:54:11.355305+0000","last_active":"2026-03-09T15:54:43.072827+0000","last_peered":"2026-03-09T15:54:43.072827+0000","last_clean":"2026-03-09T15:54:43.072827+0000","last_became_active":"2026-03-09T15:54:11.355148+0000","last_became_peered":"2026-03-09T15:54:11.355148+0000","last_unstale":"2026-03-09T15:54:43.072827+0000","last_undegraded":"2026-03-09T15:54:43.072827+0000","last_fullsized":"2026-03-09T15:54:43.072827+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:57:20.859145+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071337+0000","last_change":"2026-03-09T15:54:15.378053+0000","last_active":"2026-03-09T15:54:43.071337+0000","last_peered":"2026-03-09T15:54:43.071337+0000","last_clean":"2026-03-09T15:54:43.071337+0000","last_became_active":"2026-03-09T15:54:15.377687+0000","last_became_peered":"2026-03-09T15:54:15.377687+0000","last_unstale":"2026-03-09T15:54:43.071337+0000","last_undegraded":"2026-03-09T15:54:43.071337+0000","last_fullsized":"2026-03-09T15:54:43.071337+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:53:18.443143+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.606459+0000","last_change":"2026-03-09T15:54:17.388336+0000","last_active":"2026-03-09T15:54:42.606459+0000","last_peered":"2026-03-09T15:54:42.606459+0000","last_clean":"2026-03-09T15:54:42.606459+0000","last_became_active":"2026-03-09T15:54:17.383020+0000","last_became_peered":"2026-03-09T15:54:17.383020+0000","last_unstale":"2026-03-09T15:54:42.606459+0000","last_undegraded":"2026-03-09T15:54:42.606459+0000","last_fullsized":"2026-03-09T15:54:42.606459+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:50:27.652230+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"61'1","reported_seq":21,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071045+0000","last_change":"2026-03-09T15:54:17.385058+0000","last_active":"2026-03-09T15:54:43.071045+0000","last_peered":"2026-03-09T15:54:43.071045+0000","last_clean":"2026-03-09T15:54:43.071045+0000","last_became_active":"2026-03-09T15:54:17.384902+0000","last_became_peered":"2026-03-09T15:54:17.384902+0000","last_unstale":"2026-03-09T15:54:43.071045+0000","last_undegraded":"2026-03-09T15:54:43.071045+0000","last_fullsized":"2026-03-09T15:54:43.071045+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:11:59.164498+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"62'15","reported_seq":52,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610123+0000","last_change":"2026-03-09T15:54:13.369280+0000","last_active":"2026-03-09T15:54:42.610123+0000","last_peered":"2026-03-09T15:54:42.610123+0000","last_clean":"2026-03-09T15:54:42.610123+0000","last_became_active":"2026-03-09T15:54:13.364512+0000","last_became_peered":"2026-03-09T15:54:13.364512+0000","last_unstale":"2026-03-09T15:54:42.610123+0000","last_undegraded":"2026-03-09T15:54:42.610123+0000","last_fullsized":"2026-03-09T15:54:42.610123+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:27:21.917673+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.18","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.612345+0000","last_change":"2026-03-09T15:54:11.363351+0000","last_active":"2026-03-09T15:54:42.612345+0000","last_peered":"2026-03-09T15:54:42.612345+0000","last_clean":"2026-03-09T15:54:42.612345+0000","last_became_active":"2026-03-09T15:54:11.363125+0000","last_became_peered":"2026-03-09T15:54:11.363125+0000","last_unstale":"2026-03-09T15:54:42.612345+0000","last_undegraded":"2026-03-09T15:54:42.612345+0000","last_fullsized":"2026-03-09T15:54:42.612345+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:38:01.325428+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"62'11","reported_seq":52,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:55:21.408638+0000","last_change":"2026-03-09T15:54:15.370076+0000","last_active":"2026-03-09T15:55:21.408638+0000","last_peered":"2026-03-09T15:55:21.408638+0000","last_clean":"2026-03-09T15:55:21.408638+0000","last_became_active":"2026-03-09T15:54:15.369913+0000","last_became_peered":"2026-03-09T15:54:15.369913+0000","last_unstale":"2026-03-09T15:55:21.408638+0000","last_undegraded":"2026-03-09T15:55:21.408638+0000","last_fullsized":"2026-03-09T15:55:21.408638+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:56:21.155553+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610309+0000","last_change":"2026-03-09T15:54:17.386468+0000","last_active":"2026-03-09T15:54:42.610309+0000","last_peered":"2026-03-09T15:54:42.610309+0000","last_clean":"2026-03-09T15:54:42.610309+0000","last_became_active":"2026-03-09T15:54:17.386370+0000","last_became_peered":"2026-03-09T15:54:17.386370+0000","last_unstale":"2026-03-09T15:54:42.610309+0000","last_undegraded":"2026-03-09T15:54:42.610309+0000","last_fullsized":"2026-03-09T15:54:42.610309+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:20:00.063265+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607130+0000","last_change":"2026-03-09T15:54:13.357798+0000","last_active":"2026-03-09T15:54:42.607130+0000","last_peered":"2026-03-09T15:54:42.607130+0000","last_clean":"2026-03-09T15:54:42.607130+0000","last_became_active":"2026-03-09T15:54:13.357043+0000","last_became_peered":"2026-03-09T15:54:13.357043+0000","last_unstale":"2026-03-09T15:54:42.607130+0000","last_undegraded":"2026-03-09T15:54:42.607130+0000","last_fullsized":"2026-03-09T15:54:42.607130+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:28:56.952662+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"54'1","reported_seq":32,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607538+0000","last_change":"2026-03-09T15:54:11.347981+0000","last_active":"2026-03-09T15:54:42.607538+0000","last_peered":"2026-03-09T15:54:42.607538+0000","last_clean":"2026-03-09T15:54:42.607538+0000","last_became_active":"2026-03-09T15:54:11.347853+0000","last_became_peered":"2026-03-09T15:54:11.347853+0000","last_unstale":"2026-03-09T15:54:42.607538+0000","last_undegraded":"2026-03-09T15:54:42.607538+0000","last_fullsized":"2026-03-09T15:54:42.607538+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:47:57.685605+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.071749+0000","last_change":"2026-03-09T15:54:15.383247+0000","last_active":"2026-03-09T15:54:43.071749+0000","last_peered":"2026-03-09T15:54:43.071749+0000","last_clean":"2026-03-09T15:54:43.071749+0000","last_became_active":"2026-03-09T15:54:15.382500+0000","last_became_peered":"2026-03-09T15:54:15.382500+0000","last_unstale":"2026-03-09T15:54:43.071749+0000","last_undegraded":"2026-03-09T15:54:43.071749+0000","last_fullsized":"2026-03-09T15:54:43.071749+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:03:41.748520+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.725252+0000","last_change":"2026-03-09T15:54:17.390336+0000","last_active":"2026-03-09T15:54:42.725252+0000","last_peered":"2026-03-09T15:54:42.725252+0000","last_clean":"2026-03-09T15:54:42.725252+0000","last_became_active":"2026-03-09T15:54:17.385695+0000","last_became_peered":"2026-03-09T15:54:17.385695+0000","last_unstale":"2026-03-09T15:54:42.725252+0000","last_undegraded":"2026-03-09T15:54:42.725252+0000","last_fullsized":"2026-03-09T15:54:42.725252+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:40:29.688303+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072731+0000","last_change":"2026-03-09T15:54:11.360391+0000","last_active":"2026-03-09T15:54:43.072731+0000","last_peered":"2026-03-09T15:54:43.072731+0000","last_clean":"2026-03-09T15:54:43.072731+0000","last_became_active":"2026-03-09T15:54:11.360236+0000","last_became_peered":"2026-03-09T15:54:11.360236+0000","last_unstale":"2026-03-09T15:54:43.072731+0000","last_undegraded":"2026-03-09T15:54:43.072731+0000","last_fullsized":"2026-03-09T15:54:43.072731+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:25:47.079675+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"61'5","reported_seq":37,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:43.072532+0000","last_change":"2026-03-09T15:54:13.359633+0000","last_active":"2026-03-09T15:54:43.072532+0000","last_peered":"2026-03-09T15:54:43.072532+0000","last_clean":"2026-03-09T15:54:43.072532+0000","last_became_active":"2026-03-09T15:54:13.354402+0000","last_became_peered":"2026-03-09T15:54:13.354402+0000","last_unstale":"2026-03-09T15:54:43.072532+0000","last_undegraded":"2026-03-09T15:54:43.072532+0000","last_fullsized":"2026-03-09T15:54:43.072532+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:35:33.389940+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.610509+0000","last_change":"2026-03-09T15:54:15.366700+0000","last_active":"2026-03-09T15:54:42.610509+0000","last_peered":"2026-03-09T15:54:42.610509+0000","last_clean":"2026-03-09T15:54:42.610509+0000","last_became_active":"2026-03-09T15:54:15.366483+0000","last_became_peered":"2026-03-09T15:54:15.366483+0000","last_unstale":"2026-03-09T15:54:42.610509+0000","last_undegraded":"2026-03-09T15:54:42.610509+0000","last_fullsized":"2026-03-09T15:54:42.610509+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:21:02.939803+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":19,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.608822+0000","last_change":"2026-03-09T15:54:17.366825+0000","last_active":"2026-03-09T15:54:42.608822+0000","last_peered":"2026-03-09T15:54:42.608822+0000","last_clean":"2026-03-09T15:54:42.608822+0000","last_became_active":"2026-03-09T15:54:17.366716+0000","last_became_peered":"2026-03-09T15:54:17.366716+0000","last_unstale":"2026-03-09T15:54:42.608822+0000","last_undegraded":"2026-03-09T15:54:42.608822+0000","last_fullsized":"2026-03-09T15:54:42.608822+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:16.342723+0000","last_clean_scrub_stamp":"2026-03-09T15:54:16.342723+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:54:58.728633+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":31,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.607745+0000","last_change":"2026-03-09T15:54:11.359534+0000","last_active":"2026-03-09T15:54:42.607745+0000","last_peered":"2026-03-09T15:54:42.607745+0000","last_clean":"2026-03-09T15:54:42.607745+0000","last_became_active":"2026-03-09T15:54:11.359221+0000","last_became_peered":"2026-03-09T15:54:11.359221+0000","last_unstale":"2026-03-09T15:54:42.607745+0000","last_undegraded":"2026-03-09T15:54:42.607745+0000","last_fullsized":"2026-03-09T15:54:42.607745+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:10.306907+0000","last_clean_scrub_stamp":"2026-03-09T15:54:10.306907+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:36:42.081838+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"61'9","reported_seq":43,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726418+0000","last_change":"2026-03-09T15:54:13.353354+0000","last_active":"2026-03-09T15:54:42.726418+0000","last_peered":"2026-03-09T15:54:42.726418+0000","last_clean":"2026-03-09T15:54:42.726418+0000","last_became_active":"2026-03-09T15:54:13.353066+0000","last_became_peered":"2026-03-09T15:54:13.353066+0000","last_unstale":"2026-03-09T15:54:42.726418+0000","last_undegraded":"2026-03-09T15:54:42.726418+0000","last_fullsized":"2026-03-09T15:54:42.726418+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:12.321206+0000","last_clean_scrub_stamp":"2026-03-09T15:54:12.321206+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:19:18.554788+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":23,"reported_epoch":64,"state":"active+clean","last_fresh":"2026-03-09T15:54:42.726381+0000","last_change":"2026-03-09T15:54:15.374457+0000","last_active":"2026-03-09T15:54:42.726381+0000","last_peered":"2026-03-09T15:54:42.726381+0000","last_clean":"2026-03-09T15:54:42.726381+0000","last_became_active":"2026-03-09T15:54:15.373468+0000","last_became_peered":"2026-03-09T15:54:15.373468+0000","last_unstale":"2026-03-09T15:54:42.726381+0000","last_undegraded":"2026-03-09T15:54:42.726381+0000","last_fullsized":"2026-03-09T15:54:42.726381+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T15:54:14.329526+0000","last_clean_scrub_stamp":"2026-03-09T15:54:14.329526+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:19:43.648567+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},
{"poolid":4,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":68,"num_read_kb":63,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"int
ernal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":50,"seq":214748364821,"num_pgs":60,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":28000,"kb_used_data":1164,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939424,"statfs":{"total":21470642176,"available":21441970176,"internally_reserved":0,"allocated":1191936,"data_stored":752714,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":43,"seq":184683593756,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27968,"kb_used_data":1132,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939456,"statfs":{"total":21470642176,"available":21442002944,"internally_reserved":0,"allocated":1159168,"data_stored":750379,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":37,"seq":158913789987,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27532,"kb_used_data":692,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939892,"statfs":{"total":21470642176,"available":21442449408,"internally_reserved":0,"allocated":708608,"data_stored":291950,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up
_from":31,"seq":133143986220,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27564,"kb_used_data":724,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939860,"statfs":{"total":21470642176,"available":21442416640,"internally_reserved":0,"allocated":741376,"data_stored":293037,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149745,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27524,"kb_used_data":684,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939900,"statfs":{"total":21470642176,"available":21442457600,"internally_reserved":0,"allocated":700416,"data_stored":291922,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411383,"num_pgs":38,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27524,"kb_used_data":684,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939900,"statfs":{"total":21470642176,"available":21442457600,"internally_reserved":0,"allocated":700416,"data_stored":291592,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574910,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27540,"kb_used_data":700,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939884,"statfs":{"total":21470642176,"available":21442441216,"internally_reserved":0,"allocated":716800,"data_stored":291443,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738437,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27996,"kb_used_data":1160,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939428,"statfs":{"total":21470642176,"available":21441974272,"internally_reserved":0,"allocated":1187840,"data_stored":752476,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1587,"internal_metadata":27457997},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_s
tat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":20480,"data_stored":1567,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":482,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1085,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"intern
ally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"in
ternally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T15:55:34.727 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T15:55:34.727 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
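The wait that follows polls cluster health by running `ceph health --format=json` inside a cephadm shell (the exact command appears in the next log entry) until the cluster reports HEALTH_OK, as seen later in the stdout line `{"status":"HEALTH_OK","checks":{},"mutes":[]}`. A minimal sketch of such a polling loop, reusing the fsid and image from this run; the timeout and interval values are illustrative assumptions and this is not teuthology's actual implementation:

# Hypothetical sketch (not teuthology's code): poll `ceph health --format=json`
# through a cephadm shell, as the log below does, until status is HEALTH_OK.
import json
import subprocess
import time

FSID = "397fadc0-1bcf-11f1-8481-edc1430c2c03"   # fsid taken from this run's log
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

def cluster_health() -> dict:
    # Same invocation as the logged command: cephadm shell wrapping `ceph health`.
    cmd = [
        "sudo", "cephadm", "--image", IMAGE,
        "shell", "--fsid", FSID, "--",
        "ceph", "health", "--format=json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return json.loads(out)   # e.g. {"status": "HEALTH_OK", "checks": {}, "mutes": []}

def wait_until_healthy(timeout: float = 300.0, interval: float = 5.0) -> None:
    # Assumed timeout/interval; raise if the cluster never reaches HEALTH_OK.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if cluster_health().get("status") == "HEALTH_OK":
            return
        time.sleep(interval)
    raise TimeoutError("cluster did not report HEALTH_OK within the timeout")

In this run the first poll already returns HEALTH_OK, so the task proceeds immediately to "Setup complete, yielding" and the workunit task.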
2026-03-09T15:55:34.727 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T15:55:34.727 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph health --format=json 2026-03-09T15:55:36.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:35 vm09 bash[22983]: audit 2026-03-09T15:55:34.643956+0000 mgr.y (mgr.14520) 66 : audit [DBG] from='client.14646 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:36.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:35 vm09 bash[22983]: audit 2026-03-09T15:55:34.643956+0000 mgr.y (mgr.14520) 66 : audit [DBG] from='client.14646 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:36.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:35 vm09 bash[22983]: cluster 2026-03-09T15:55:34.648742+0000 mgr.y (mgr.14520) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:36.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:35 vm09 bash[22983]: cluster 2026-03-09T15:55:34.648742+0000 mgr.y (mgr.14520) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:36.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:35 vm01 bash[28152]: audit 2026-03-09T15:55:34.643956+0000 mgr.y (mgr.14520) 66 : audit [DBG] from='client.14646 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:36.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:35 vm01 bash[28152]: audit 2026-03-09T15:55:34.643956+0000 mgr.y (mgr.14520) 66 : audit [DBG] from='client.14646 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:36.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:35 vm01 bash[28152]: cluster 2026-03-09T15:55:34.648742+0000 mgr.y (mgr.14520) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:36.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:35 vm01 bash[28152]: cluster 2026-03-09T15:55:34.648742+0000 mgr.y (mgr.14520) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:36.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:35 vm01 bash[20728]: audit 2026-03-09T15:55:34.643956+0000 mgr.y (mgr.14520) 66 : audit [DBG] from='client.14646 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:36.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:35 vm01 bash[20728]: audit 2026-03-09T15:55:34.643956+0000 mgr.y (mgr.14520) 66 : audit [DBG] from='client.14646 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T15:55:36.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:35 vm01 bash[20728]: cluster 2026-03-09T15:55:34.648742+0000 mgr.y (mgr.14520) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:36.179 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:35 vm01 bash[20728]: cluster 2026-03-09T15:55:34.648742+0000 mgr.y (mgr.14520) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:36.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:55:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:55:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:37 vm09 bash[22983]: audit 2026-03-09T15:55:36.185254+0000 mgr.y (mgr.14520) 68 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:37 vm09 bash[22983]: audit 2026-03-09T15:55:36.185254+0000 mgr.y (mgr.14520) 68 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:37 vm09 bash[22983]: cluster 2026-03-09T15:55:36.649105+0000 mgr.y (mgr.14520) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:37 vm09 bash[22983]: cluster 2026-03-09T15:55:36.649105+0000 mgr.y (mgr.14520) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:38.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:37 vm01 bash[28152]: audit 2026-03-09T15:55:36.185254+0000 mgr.y (mgr.14520) 68 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:38.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:37 vm01 bash[28152]: audit 2026-03-09T15:55:36.185254+0000 mgr.y (mgr.14520) 68 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:38.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:37 vm01 bash[28152]: cluster 2026-03-09T15:55:36.649105+0000 mgr.y (mgr.14520) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:38.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:37 vm01 bash[28152]: cluster 2026-03-09T15:55:36.649105+0000 mgr.y (mgr.14520) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:38.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:37 vm01 bash[20728]: audit 2026-03-09T15:55:36.185254+0000 mgr.y (mgr.14520) 68 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:38.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:37 vm01 bash[20728]: audit 2026-03-09T15:55:36.185254+0000 mgr.y (mgr.14520) 68 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:38.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:37 vm01 bash[20728]: cluster 2026-03-09T15:55:36.649105+0000 mgr.y (mgr.14520) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T15:55:38.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:37 vm01 bash[20728]: cluster 2026-03-09T15:55:36.649105+0000 mgr.y (mgr.14520) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:39.423 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T15:55:39.588 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 -- 192.168.123.101:0/851614583 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff658077f40 msgr2=0x7ff658113640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:39.588 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 --2- 192.168.123.101:0/851614583 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff658077f40 0x7ff658113640 secure :-1 s=READY pgs=171 cs=0 l=1 rev1=1 crypto rx=0x7ff640009a30 tx=0x7ff64002f240 comp rx=0 tx=0).stop 2026-03-09T15:55:39.588 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 -- 192.168.123.101:0/851614583 shutdown_connections 2026-03-09T15:55:39.588 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 --2- 192.168.123.101:0/851614583 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff658113b80 0x7ff658115f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:39.588 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 --2- 192.168.123.101:0/851614583 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff658077f40 0x7ff658113640 unknown :-1 s=CLOSED pgs=171 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:39.589 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 --2- 192.168.123.101:0/851614583 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff658077620 0x7ff658077a00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:39.589 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 -- 192.168.123.101:0/851614583 >> 192.168.123.101:0/851614583 conn(0x7ff6581009e0 msgr2=0x7ff658102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:39.589 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 -- 192.168.123.101:0/851614583 shutdown_connections 2026-03-09T15:55:39.589 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 -- 192.168.123.101:0/851614583 wait complete. 
2026-03-09T15:55:39.589 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 Processor -- start 2026-03-09T15:55:39.589 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 -- start start 2026-03-09T15:55:39.589 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff658077620 0x7ff6581a0f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:39.590 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff658077f40 0x7ff6581a1440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:39.590 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff658113b80 0x7ff6581a57d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:39.590 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7ff658118890 con 0x7ff658077620 2026-03-09T15:55:39.590 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7ff658118710 con 0x7ff658113b80 2026-03-09T15:55:39.590 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff6600a8640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7ff658118a10 con 0x7ff658077f40 2026-03-09T15:55:39.590 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff65de1d640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff658077620 0x7ff6581a0f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:39.590 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff65d61c640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff658077f40 0x7ff6581a1440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:39.590 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff65d61c640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff658077f40 0x7ff6581a1440 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:46538/0 (socket says 192.168.123.101:46538) 2026-03-09T15:55:39.590 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff65d61c640 1 -- 192.168.123.101:0/3951524390 learned_addr learned my addr 192.168.123.101:0/3951524390 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T15:55:39.590 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.588+0000 7ff65e61e640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff658113b80 0x7ff6581a57d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:39.591 
INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff65d61c640 1 -- 192.168.123.101:0/3951524390 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff658113b80 msgr2=0x7ff6581a57d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:39.591 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff65d61c640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff658113b80 0x7ff6581a57d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:39.591 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff65d61c640 1 -- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff658077620 msgr2=0x7ff6581a0f00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:39.591 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff65d61c640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff658077620 0x7ff6581a0f00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:39.591 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff65d61c640 1 -- 192.168.123.101:0/3951524390 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff6581a5f50 con 0x7ff658077f40 2026-03-09T15:55:39.591 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff65de1d640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff658077620 0x7ff6581a0f00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:55:39.591 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff65d61c640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff658077f40 0x7ff6581a1440 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7ff64002f750 tx=0x7ff64002fcd0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:39.591 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff64effd640 1 -- 192.168.123.101:0/3951524390 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff640004240 con 0x7ff658077f40 2026-03-09T15:55:39.592 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff64effd640 1 -- 192.168.123.101:0/3951524390 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ff6400043e0 con 0x7ff658077f40 2026-03-09T15:55:39.592 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff6600a8640 1 -- 192.168.123.101:0/3951524390 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff6581a61e0 con 0x7ff658077f40 2026-03-09T15:55:39.592 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff6600a8640 1 -- 192.168.123.101:0/3951524390 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7ff6581ada80 con 0x7ff658077f40 2026-03-09T15:55:39.593 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff64effd640 1 -- 192.168.123.101:0/3951524390 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff640038770 con 0x7ff658077f40 2026-03-09T15:55:39.594 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff6600a8640 1 -- 192.168.123.101:0/3951524390 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff620005180 con 0x7ff658077f40 2026-03-09T15:55:39.594 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff64effd640 1 -- 192.168.123.101:0/3951524390 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7ff640005020 con 0x7ff658077f40 2026-03-09T15:55:39.594 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff64effd640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7ff630077700 0x7ff630079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:55:39.594 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.592+0000 7ff64effd640 1 -- 192.168.123.101:0/3951524390 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(64..64 src has 1..64) ==== 7401+0+0 (secure 0 0 0) 0x7ff6400bdfa0 con 0x7ff658077f40 2026-03-09T15:55:39.597 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.596+0000 7ff65de1d640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7ff630077700 0x7ff630079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:55:39.597 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.596+0000 7ff65de1d640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7ff630077700 0x7ff630079bc0 secure :-1 
s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7ff648007920 tx=0x7ff648008040 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:55:39.597 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.596+0000 7ff64effd640 1 -- 192.168.123.101:0/3951524390 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff64003f2c0 con 0x7ff658077f40 2026-03-09T15:55:39.716 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.716+0000 7ff6600a8640 1 -- 192.168.123.101:0/3951524390 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "health", "format": "json"} v 0) -- 0x7ff620005470 con 0x7ff658077f40 2026-03-09T15:55:39.717 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.716+0000 7ff64effd640 1 -- 192.168.123.101:0/3951524390 <== mon.2 v2:192.168.123.101:3301/0 7 ==== mon_command_ack([{"prefix": "health", "format": "json"}]=0 v0) ==== 72+0+46 (secure 0 0 0) 0x7ff64008a840 con 0x7ff658077f40 2026-03-09T15:55:39.718 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T15:55:39.718 INFO:teuthology.orchestra.run.vm01.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T15:55:39.720 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 -- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7ff630077700 msgr2=0x7ff630079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:39.721 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7ff630077700 0x7ff630079bc0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7ff648007920 tx=0x7ff648008040 comp rx=0 tx=0).stop 2026-03-09T15:55:39.721 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 -- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff658077f40 msgr2=0x7ff6581a1440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:55:39.721 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff658077f40 0x7ff6581a1440 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7ff64002f750 tx=0x7ff64002fcd0 comp rx=0 tx=0).stop 2026-03-09T15:55:39.721 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 -- 192.168.123.101:0/3951524390 shutdown_connections 2026-03-09T15:55:39.721 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7ff630077700 0x7ff630079bc0 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:39.721 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff658113b80 0x7ff6581a57d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:39.721 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 --2- 192.168.123.101:0/3951524390 >> 
[v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7ff658077f40 0x7ff6581a1440 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:39.721 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 --2- 192.168.123.101:0/3951524390 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7ff658077620 0x7ff6581a0f00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:55:39.721 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 -- 192.168.123.101:0/3951524390 >> 192.168.123.101:0/3951524390 conn(0x7ff6581009e0 msgr2=0x7ff658102dd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:55:39.721 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 -- 192.168.123.101:0/3951524390 shutdown_connections 2026-03-09T15:55:39.721 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T15:55:39.720+0000 7ff6600a8640 1 -- 192.168.123.101:0/3951524390 wait complete. 2026-03-09T15:55:39.791 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T15:55:39.792 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T15:55:39.792 INFO:teuthology.run_tasks:Running task workunit... 2026-03-09T15:55:39.797 INFO:tasks.workunit:Pulling workunits from ref 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T15:55:39.797 INFO:tasks.workunit:Making a separate scratch dir for every client... 2026-03-09T15:55:39.797 DEBUG:teuthology.orchestra.run.vm01:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-09T15:55:39.801 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T15:55:39.802 INFO:teuthology.orchestra.run.vm01.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-09T15:55:39.802 DEBUG:teuthology.orchestra.run.vm01:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T15:55:39.847 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-09T15:55:39.847 DEBUG:teuthology.orchestra.run.vm01:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-09T15:55:39.896 INFO:tasks.workunit:timeout=3h 2026-03-09T15:55:39.896 INFO:tasks.workunit:cleanup=True 2026-03-09T15:55:39.896 DEBUG:teuthology.orchestra.run.vm01:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T15:55:39.944 INFO:tasks.workunit.client.0.vm01.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-09T15:55:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:39 vm09 bash[22983]: cluster 2026-03-09T15:55:38.649594+0000 mgr.y (mgr.14520) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:39 vm09 bash[22983]: cluster 2026-03-09T15:55:38.649594+0000 mgr.y (mgr.14520) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:39 vm09 bash[22983]: audit 2026-03-09T15:55:39.719422+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 
192.168.123.101:0/3951524390' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T15:55:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:39 vm09 bash[22983]: audit 2026-03-09T15:55:39.719422+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 192.168.123.101:0/3951524390' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T15:55:40.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:39 vm01 bash[28152]: cluster 2026-03-09T15:55:38.649594+0000 mgr.y (mgr.14520) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:40.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:39 vm01 bash[28152]: cluster 2026-03-09T15:55:38.649594+0000 mgr.y (mgr.14520) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:40.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:39 vm01 bash[28152]: audit 2026-03-09T15:55:39.719422+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 192.168.123.101:0/3951524390' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T15:55:40.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:39 vm01 bash[28152]: audit 2026-03-09T15:55:39.719422+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 192.168.123.101:0/3951524390' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T15:55:40.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:39 vm01 bash[20728]: cluster 2026-03-09T15:55:38.649594+0000 mgr.y (mgr.14520) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:40.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:39 vm01 bash[20728]: cluster 2026-03-09T15:55:38.649594+0000 mgr.y (mgr.14520) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:40.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:39 vm01 bash[20728]: audit 2026-03-09T15:55:39.719422+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 192.168.123.101:0/3951524390' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T15:55:40.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:39 vm01 bash[20728]: audit 2026-03-09T15:55:39.719422+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 
192.168.123.101:0/3951524390' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T15:55:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:41 vm09 bash[22983]: cluster 2026-03-09T15:55:40.650210+0000 mgr.y (mgr.14520) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:41 vm09 bash[22983]: cluster 2026-03-09T15:55:40.650210+0000 mgr.y (mgr.14520) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:42.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:41 vm01 bash[20728]: cluster 2026-03-09T15:55:40.650210+0000 mgr.y (mgr.14520) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:42.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:41 vm01 bash[20728]: cluster 2026-03-09T15:55:40.650210+0000 mgr.y (mgr.14520) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:42.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:41 vm01 bash[28152]: cluster 2026-03-09T15:55:40.650210+0000 mgr.y (mgr.14520) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:42.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:41 vm01 bash[28152]: cluster 2026-03-09T15:55:40.650210+0000 mgr.y (mgr.14520) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:42 vm09 bash[22983]: audit 2026-03-09T15:55:42.642500+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch 2026-03-09T15:55:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:42 vm09 bash[22983]: audit 2026-03-09T15:55:42.642500+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch 2026-03-09T15:55:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:42 vm01 bash[20728]: audit 2026-03-09T15:55:42.642500+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch 2026-03-09T15:55:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:42 vm01 bash[20728]: audit 2026-03-09T15:55:42.642500+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch 2026-03-09T15:55:43.178 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:55:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:55:43.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:42 vm01 bash[28152]: audit 2026-03-09T15:55:42.642500+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch 2026-03-09T15:55:43.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:42 vm01 bash[28152]: audit 2026-03-09T15:55:42.642500+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch 2026-03-09T15:55:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:43 vm09 bash[22983]: cluster 2026-03-09T15:55:42.650590+0000 mgr.y (mgr.14520) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:43 vm09 bash[22983]: cluster 2026-03-09T15:55:42.650590+0000 mgr.y (mgr.14520) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:43 vm09 bash[22983]: audit 2026-03-09T15:55:42.747271+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]': finished 2026-03-09T15:55:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:43 vm09 bash[22983]: audit 2026-03-09T15:55:42.747271+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]': finished 2026-03-09T15:55:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:43 vm09 bash[22983]: cluster 2026-03-09T15:55:42.756612+0000 mon.a (mon.0) 863 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T15:55:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:43 vm09 bash[22983]: cluster 2026-03-09T15:55:42.756612+0000 mon.a (mon.0) 863 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T15:55:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:43 vm09 bash[22983]: audit 2026-03-09T15:55:42.757532+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:43 vm09 bash[22983]: audit 2026-03-09T15:55:42.757532+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:44.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:43 vm01 bash[20728]: cluster 2026-03-09T15:55:42.650590+0000 mgr.y (mgr.14520) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:43 vm01 bash[20728]: cluster 2026-03-09T15:55:42.650590+0000 mgr.y (mgr.14520) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:43 vm01 bash[20728]: audit 2026-03-09T15:55:42.747271+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]': finished 2026-03-09T15:55:44.179 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:43 vm01 bash[20728]: audit 2026-03-09T15:55:42.747271+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]': finished 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:43 vm01 bash[20728]: cluster 2026-03-09T15:55:42.756612+0000 mon.a (mon.0) 863 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:43 vm01 bash[20728]: cluster 2026-03-09T15:55:42.756612+0000 mon.a (mon.0) 863 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:43 vm01 bash[20728]: audit 2026-03-09T15:55:42.757532+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:43 vm01 bash[20728]: audit 2026-03-09T15:55:42.757532+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:43 vm01 bash[28152]: cluster 2026-03-09T15:55:42.650590+0000 mgr.y (mgr.14520) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:43 vm01 bash[28152]: cluster 2026-03-09T15:55:42.650590+0000 mgr.y (mgr.14520) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:43 vm01 bash[28152]: audit 2026-03-09T15:55:42.747271+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]': finished 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:43 vm01 bash[28152]: audit 2026-03-09T15:55:42.747271+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]': finished 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:43 vm01 bash[28152]: cluster 2026-03-09T15:55:42.756612+0000 mon.a (mon.0) 863 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:43 vm01 bash[28152]: cluster 2026-03-09T15:55:42.756612+0000 mon.a (mon.0) 863 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:43 vm01 bash[28152]: audit 2026-03-09T15:55:42.757532+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:44.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:43 vm01 bash[28152]: audit 2026-03-09T15:55:42.757532+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:45.133 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:44 vm09 bash[22983]: cluster 2026-03-09T15:55:43.754896+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T15:55:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:44 vm09 bash[22983]: cluster 2026-03-09T15:55:43.754896+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T15:55:45.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:44 vm01 bash[20728]: cluster 2026-03-09T15:55:43.754896+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T15:55:45.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:44 vm01 bash[20728]: cluster 2026-03-09T15:55:43.754896+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T15:55:45.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:44 vm01 bash[28152]: cluster 2026-03-09T15:55:43.754896+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T15:55:45.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:44 vm01 bash[28152]: cluster 2026-03-09T15:55:43.754896+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T15:55:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:45 vm09 bash[22983]: cluster 2026-03-09T15:55:44.650999+0000 mgr.y (mgr.14520) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:45 vm09 bash[22983]: cluster 2026-03-09T15:55:44.650999+0000 mgr.y (mgr.14520) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:45 vm09 bash[22983]: cluster 2026-03-09T15:55:44.772520+0000 mon.a (mon.0) 866 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T15:55:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:45 vm09 bash[22983]: cluster 2026-03-09T15:55:44.772520+0000 mon.a (mon.0) 866 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T15:55:46.133 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 15:55:45 vm09 bash[50619]: logger=infra.usagestats t=2026-03-09T15:55:45.770203344Z level=info msg="Usage stats are ready to report" 2026-03-09T15:55:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:45 vm01 bash[20728]: cluster 2026-03-09T15:55:44.650999+0000 mgr.y (mgr.14520) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:45 vm01 bash[20728]: cluster 2026-03-09T15:55:44.650999+0000 mgr.y (mgr.14520) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:45 vm01 bash[20728]: cluster 2026-03-09T15:55:44.772520+0000 mon.a (mon.0) 866 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T15:55:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:45 vm01 bash[20728]: cluster 2026-03-09T15:55:44.772520+0000 mon.a (mon.0) 866 : cluster [WRN] Health check failed: Reduced data 
availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T15:55:46.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:45 vm01 bash[28152]: cluster 2026-03-09T15:55:44.650999+0000 mgr.y (mgr.14520) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:46.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:45 vm01 bash[28152]: cluster 2026-03-09T15:55:44.650999+0000 mgr.y (mgr.14520) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:46.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:45 vm01 bash[28152]: cluster 2026-03-09T15:55:44.772520+0000 mon.a (mon.0) 866 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T15:55:46.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:45 vm01 bash[28152]: cluster 2026-03-09T15:55:44.772520+0000 mon.a (mon.0) 866 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T15:55:46.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:55:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:55:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:47 vm09 bash[22983]: audit 2026-03-09T15:55:46.194684+0000 mgr.y (mgr.14520) 74 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:47 vm09 bash[22983]: audit 2026-03-09T15:55:46.194684+0000 mgr.y (mgr.14520) 74 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:47 vm09 bash[22983]: cluster 2026-03-09T15:55:46.651399+0000 mgr.y (mgr.14520) 75 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:47 vm09 bash[22983]: cluster 2026-03-09T15:55:46.651399+0000 mgr.y (mgr.14520) 75 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:47 vm01 bash[20728]: audit 2026-03-09T15:55:46.194684+0000 mgr.y (mgr.14520) 74 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:47 vm01 bash[20728]: audit 2026-03-09T15:55:46.194684+0000 mgr.y (mgr.14520) 74 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:47 vm01 bash[20728]: cluster 2026-03-09T15:55:46.651399+0000 mgr.y (mgr.14520) 75 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:47 vm01 bash[20728]: cluster 2026-03-09T15:55:46.651399+0000 mgr.y (mgr.14520) 75 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB 
data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:48.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:47 vm01 bash[28152]: audit 2026-03-09T15:55:46.194684+0000 mgr.y (mgr.14520) 74 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:48.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:47 vm01 bash[28152]: audit 2026-03-09T15:55:46.194684+0000 mgr.y (mgr.14520) 74 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:48.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:47 vm01 bash[28152]: cluster 2026-03-09T15:55:46.651399+0000 mgr.y (mgr.14520) 75 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:48.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:47 vm01 bash[28152]: cluster 2026-03-09T15:55:46.651399+0000 mgr.y (mgr.14520) 75 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:55:50.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:49 vm09 bash[22983]: cluster 2026-03-09T15:55:48.651828+0000 mgr.y (mgr.14520) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T15:55:50.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:49 vm09 bash[22983]: cluster 2026-03-09T15:55:48.651828+0000 mgr.y (mgr.14520) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T15:55:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:49 vm01 bash[20728]: cluster 2026-03-09T15:55:48.651828+0000 mgr.y (mgr.14520) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T15:55:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:49 vm01 bash[20728]: cluster 2026-03-09T15:55:48.651828+0000 mgr.y (mgr.14520) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T15:55:50.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:49 vm01 bash[28152]: cluster 2026-03-09T15:55:48.651828+0000 mgr.y (mgr.14520) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T15:55:50.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:49 vm01 bash[28152]: cluster 2026-03-09T15:55:48.651828+0000 mgr.y (mgr.14520) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T15:55:51.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:51 vm01 bash[20728]: cluster 2026-03-09T15:55:50.652324+0000 mgr.y (mgr.14520) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 135 B/s, 0 objects/s recovering 2026-03-09T15:55:51.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:51 vm01 bash[20728]: cluster 2026-03-09T15:55:50.652324+0000 mgr.y (mgr.14520) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB 
used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 135 B/s, 0 objects/s recovering 2026-03-09T15:55:51.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:51 vm01 bash[20728]: cluster 2026-03-09T15:55:50.843263+0000 mon.a (mon.0) 867 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T15:55:51.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:51 vm01 bash[20728]: cluster 2026-03-09T15:55:50.843263+0000 mon.a (mon.0) 867 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T15:55:51.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:51 vm01 bash[20728]: cluster 2026-03-09T15:55:50.843315+0000 mon.a (mon.0) 868 : cluster [INF] Cluster is now healthy 2026-03-09T15:55:51.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:51 vm01 bash[20728]: cluster 2026-03-09T15:55:50.843315+0000 mon.a (mon.0) 868 : cluster [INF] Cluster is now healthy 2026-03-09T15:55:51.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:51 vm01 bash[28152]: cluster 2026-03-09T15:55:50.652324+0000 mgr.y (mgr.14520) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 135 B/s, 0 objects/s recovering 2026-03-09T15:55:51.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:51 vm01 bash[28152]: cluster 2026-03-09T15:55:50.652324+0000 mgr.y (mgr.14520) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 135 B/s, 0 objects/s recovering 2026-03-09T15:55:51.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:51 vm01 bash[28152]: cluster 2026-03-09T15:55:50.843263+0000 mon.a (mon.0) 867 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T15:55:51.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:51 vm01 bash[28152]: cluster 2026-03-09T15:55:50.843263+0000 mon.a (mon.0) 867 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T15:55:51.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:51 vm01 bash[28152]: cluster 2026-03-09T15:55:50.843315+0000 mon.a (mon.0) 868 : cluster [INF] Cluster is now healthy 2026-03-09T15:55:51.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:51 vm01 bash[28152]: cluster 2026-03-09T15:55:50.843315+0000 mon.a (mon.0) 868 : cluster [INF] Cluster is now healthy 2026-03-09T15:55:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:51 vm09 bash[22983]: cluster 2026-03-09T15:55:50.652324+0000 mgr.y (mgr.14520) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 135 B/s, 0 objects/s recovering 2026-03-09T15:55:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:51 vm09 bash[22983]: cluster 2026-03-09T15:55:50.652324+0000 mgr.y (mgr.14520) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 135 B/s, 0 objects/s recovering 2026-03-09T15:55:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:51 vm09 bash[22983]: cluster 2026-03-09T15:55:50.843263+0000 mon.a (mon.0) 867 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T15:55:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:51 vm09 bash[22983]: cluster 2026-03-09T15:55:50.843263+0000 
mon.a (mon.0) 867 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T15:55:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:51 vm09 bash[22983]: cluster 2026-03-09T15:55:50.843315+0000 mon.a (mon.0) 868 : cluster [INF] Cluster is now healthy 2026-03-09T15:55:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:51 vm09 bash[22983]: cluster 2026-03-09T15:55:50.843315+0000 mon.a (mon.0) 868 : cluster [INF] Cluster is now healthy 2026-03-09T15:55:53.178 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:55:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:55:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:55:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:53 vm09 bash[22983]: cluster 2026-03-09T15:55:52.652739+0000 mgr.y (mgr.14520) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 109 B/s, 0 objects/s recovering 2026-03-09T15:55:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:53 vm09 bash[22983]: cluster 2026-03-09T15:55:52.652739+0000 mgr.y (mgr.14520) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 109 B/s, 0 objects/s recovering 2026-03-09T15:55:54.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:53 vm01 bash[28152]: cluster 2026-03-09T15:55:52.652739+0000 mgr.y (mgr.14520) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 109 B/s, 0 objects/s recovering 2026-03-09T15:55:54.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:53 vm01 bash[28152]: cluster 2026-03-09T15:55:52.652739+0000 mgr.y (mgr.14520) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 109 B/s, 0 objects/s recovering 2026-03-09T15:55:54.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:53 vm01 bash[20728]: cluster 2026-03-09T15:55:52.652739+0000 mgr.y (mgr.14520) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 109 B/s, 0 objects/s recovering 2026-03-09T15:55:54.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:53 vm01 bash[20728]: cluster 2026-03-09T15:55:52.652739+0000 mgr.y (mgr.14520) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 109 B/s, 0 objects/s recovering 2026-03-09T15:55:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:55 vm09 bash[22983]: cluster 2026-03-09T15:55:54.653372+0000 mgr.y (mgr.14520) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s; 99 B/s, 0 objects/s recovering 2026-03-09T15:55:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:55 vm09 bash[22983]: cluster 2026-03-09T15:55:54.653372+0000 mgr.y (mgr.14520) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s; 99 B/s, 0 objects/s recovering 2026-03-09T15:55:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:55 vm01 bash[20728]: cluster 2026-03-09T15:55:54.653372+0000 mgr.y (mgr.14520) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s; 99 B/s, 0 
objects/s recovering 2026-03-09T15:55:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:55 vm01 bash[20728]: cluster 2026-03-09T15:55:54.653372+0000 mgr.y (mgr.14520) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s; 99 B/s, 0 objects/s recovering 2026-03-09T15:55:56.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:55 vm01 bash[28152]: cluster 2026-03-09T15:55:54.653372+0000 mgr.y (mgr.14520) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s; 99 B/s, 0 objects/s recovering 2026-03-09T15:55:56.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:55 vm01 bash[28152]: cluster 2026-03-09T15:55:54.653372+0000 mgr.y (mgr.14520) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s; 99 B/s, 0 objects/s recovering 2026-03-09T15:55:56.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:55:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:55:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:57 vm09 bash[22983]: audit 2026-03-09T15:55:56.202533+0000 mgr.y (mgr.14520) 80 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:57 vm09 bash[22983]: audit 2026-03-09T15:55:56.202533+0000 mgr.y (mgr.14520) 80 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:57 vm09 bash[22983]: cluster 2026-03-09T15:55:56.653670+0000 mgr.y (mgr.14520) 81 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:55:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:57 vm09 bash[22983]: cluster 2026-03-09T15:55:56.653670+0000 mgr.y (mgr.14520) 81 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:55:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:57 vm09 bash[22983]: audit 2026-03-09T15:55:57.767229+0000 mon.a (mon.0) 869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:57 vm09 bash[22983]: audit 2026-03-09T15:55:57.767229+0000 mon.a (mon.0) 869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:57 vm01 bash[20728]: audit 2026-03-09T15:55:56.202533+0000 mgr.y (mgr.14520) 80 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:57 vm01 bash[20728]: audit 2026-03-09T15:55:56.202533+0000 mgr.y (mgr.14520) 80 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
15:55:57 vm01 bash[20728]: cluster 2026-03-09T15:55:56.653670+0000 mgr.y (mgr.14520) 81 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:57 vm01 bash[20728]: cluster 2026-03-09T15:55:56.653670+0000 mgr.y (mgr.14520) 81 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:57 vm01 bash[20728]: audit 2026-03-09T15:55:57.767229+0000 mon.a (mon.0) 869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:57 vm01 bash[20728]: audit 2026-03-09T15:55:57.767229+0000 mon.a (mon.0) 869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:57 vm01 bash[28152]: audit 2026-03-09T15:55:56.202533+0000 mgr.y (mgr.14520) 80 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:57 vm01 bash[28152]: audit 2026-03-09T15:55:56.202533+0000 mgr.y (mgr.14520) 80 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:57 vm01 bash[28152]: cluster 2026-03-09T15:55:56.653670+0000 mgr.y (mgr.14520) 81 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:57 vm01 bash[28152]: cluster 2026-03-09T15:55:56.653670+0000 mgr.y (mgr.14520) 81 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:57 vm01 bash[28152]: audit 2026-03-09T15:55:57.767229+0000 mon.a (mon.0) 869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:55:58.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:57 vm01 bash[28152]: audit 2026-03-09T15:55:57.767229+0000 mon.a (mon.0) 869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:59 vm09 bash[22983]: cluster 2026-03-09T15:55:58.653947+0000 mgr.y (mgr.14520) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:56:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:55:59 vm09 bash[22983]: cluster 2026-03-09T15:55:58.653947+0000 mgr.y (mgr.14520) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 
objects/s recovering 2026-03-09T15:56:00.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:59 vm01 bash[20728]: cluster 2026-03-09T15:55:58.653947+0000 mgr.y (mgr.14520) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:56:00.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:55:59 vm01 bash[20728]: cluster 2026-03-09T15:55:58.653947+0000 mgr.y (mgr.14520) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:56:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:59 vm01 bash[28152]: cluster 2026-03-09T15:55:58.653947+0000 mgr.y (mgr.14520) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:56:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:55:59 vm01 bash[28152]: cluster 2026-03-09T15:55:58.653947+0000 mgr.y (mgr.14520) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:56:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:02 vm09 bash[22983]: cluster 2026-03-09T15:56:00.654492+0000 mgr.y (mgr.14520) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:56:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:02 vm09 bash[22983]: cluster 2026-03-09T15:56:00.654492+0000 mgr.y (mgr.14520) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:56:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:02 vm01 bash[20728]: cluster 2026-03-09T15:56:00.654492+0000 mgr.y (mgr.14520) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:56:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:02 vm01 bash[20728]: cluster 2026-03-09T15:56:00.654492+0000 mgr.y (mgr.14520) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:56:02.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:02 vm01 bash[28152]: cluster 2026-03-09T15:56:00.654492+0000 mgr.y (mgr.14520) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:56:02.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:02 vm01 bash[28152]: cluster 2026-03-09T15:56:00.654492+0000 mgr.y (mgr.14520) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 90 B/s, 0 objects/s recovering 2026-03-09T15:56:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:03 vm01 bash[20728]: cluster 2026-03-09T15:56:02.654882+0000 mgr.y (mgr.14520) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
15:56:03 vm01 bash[20728]: cluster 2026-03-09T15:56:02.654882+0000 mgr.y (mgr.14520) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:03.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:03 vm01 bash[28152]: cluster 2026-03-09T15:56:02.654882+0000 mgr.y (mgr.14520) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:03.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:03 vm01 bash[28152]: cluster 2026-03-09T15:56:02.654882+0000 mgr.y (mgr.14520) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:03.178 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:56:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:56:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:56:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:03 vm09 bash[22983]: cluster 2026-03-09T15:56:02.654882+0000 mgr.y (mgr.14520) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:03 vm09 bash[22983]: cluster 2026-03-09T15:56:02.654882+0000 mgr.y (mgr.14520) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:05 vm09 bash[22983]: cluster 2026-03-09T15:56:04.655588+0000 mgr.y (mgr.14520) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:05 vm09 bash[22983]: cluster 2026-03-09T15:56:04.655588+0000 mgr.y (mgr.14520) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:06.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:05 vm01 bash[20728]: cluster 2026-03-09T15:56:04.655588+0000 mgr.y (mgr.14520) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:06.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:05 vm01 bash[20728]: cluster 2026-03-09T15:56:04.655588+0000 mgr.y (mgr.14520) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:06.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:05 vm01 bash[28152]: cluster 2026-03-09T15:56:04.655588+0000 mgr.y (mgr.14520) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:06.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:05 vm01 bash[28152]: cluster 2026-03-09T15:56:04.655588+0000 mgr.y (mgr.14520) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:06.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:56:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:56:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:07 vm09 bash[22983]: audit 2026-03-09T15:56:06.209916+0000 
mgr.y (mgr.14520) 86 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:07 vm09 bash[22983]: audit 2026-03-09T15:56:06.209916+0000 mgr.y (mgr.14520) 86 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:07 vm09 bash[22983]: cluster 2026-03-09T15:56:06.655986+0000 mgr.y (mgr.14520) 87 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:07 vm09 bash[22983]: cluster 2026-03-09T15:56:06.655986+0000 mgr.y (mgr.14520) 87 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:08.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:07 vm01 bash[20728]: audit 2026-03-09T15:56:06.209916+0000 mgr.y (mgr.14520) 86 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:08.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:07 vm01 bash[20728]: audit 2026-03-09T15:56:06.209916+0000 mgr.y (mgr.14520) 86 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:08.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:07 vm01 bash[20728]: cluster 2026-03-09T15:56:06.655986+0000 mgr.y (mgr.14520) 87 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:08.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:07 vm01 bash[20728]: cluster 2026-03-09T15:56:06.655986+0000 mgr.y (mgr.14520) 87 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:08.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:07 vm01 bash[28152]: audit 2026-03-09T15:56:06.209916+0000 mgr.y (mgr.14520) 86 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:08.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:07 vm01 bash[28152]: audit 2026-03-09T15:56:06.209916+0000 mgr.y (mgr.14520) 86 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:08.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:07 vm01 bash[28152]: cluster 2026-03-09T15:56:06.655986+0000 mgr.y (mgr.14520) 87 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:08.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:07 vm01 bash[28152]: cluster 2026-03-09T15:56:06.655986+0000 mgr.y (mgr.14520) 87 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:09 vm09 bash[22983]: cluster 2026-03-09T15:56:08.656358+0000 mgr.y (mgr.14520) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s 
rd, 0 op/s 2026-03-09T15:56:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:09 vm09 bash[22983]: cluster 2026-03-09T15:56:08.656358+0000 mgr.y (mgr.14520) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:10.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:09 vm01 bash[20728]: cluster 2026-03-09T15:56:08.656358+0000 mgr.y (mgr.14520) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:10.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:09 vm01 bash[20728]: cluster 2026-03-09T15:56:08.656358+0000 mgr.y (mgr.14520) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:10.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:09 vm01 bash[28152]: cluster 2026-03-09T15:56:08.656358+0000 mgr.y (mgr.14520) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:10.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:09 vm01 bash[28152]: cluster 2026-03-09T15:56:08.656358+0000 mgr.y (mgr.14520) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:11 vm09 bash[22983]: cluster 2026-03-09T15:56:10.656858+0000 mgr.y (mgr.14520) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:11 vm09 bash[22983]: cluster 2026-03-09T15:56:10.656858+0000 mgr.y (mgr.14520) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:11 vm01 bash[20728]: cluster 2026-03-09T15:56:10.656858+0000 mgr.y (mgr.14520) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:11 vm01 bash[20728]: cluster 2026-03-09T15:56:10.656858+0000 mgr.y (mgr.14520) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:11 vm01 bash[28152]: cluster 2026-03-09T15:56:10.656858+0000 mgr.y (mgr.14520) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:11 vm01 bash[28152]: cluster 2026-03-09T15:56:10.656858+0000 mgr.y (mgr.14520) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:13.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:56:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:56:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:56:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:13 vm09 bash[22983]: cluster 2026-03-09T15:56:12.657193+0000 mgr.y (mgr.14520) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 
active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:13 vm09 bash[22983]: cluster 2026-03-09T15:56:12.657193+0000 mgr.y (mgr.14520) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:13 vm09 bash[22983]: audit 2026-03-09T15:56:12.782869+0000 mon.a (mon.0) 870 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:13 vm09 bash[22983]: audit 2026-03-09T15:56:12.782869+0000 mon.a (mon.0) 870 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:13 vm01 bash[20728]: cluster 2026-03-09T15:56:12.657193+0000 mgr.y (mgr.14520) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:14.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:13 vm01 bash[20728]: cluster 2026-03-09T15:56:12.657193+0000 mgr.y (mgr.14520) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:14.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:13 vm01 bash[20728]: audit 2026-03-09T15:56:12.782869+0000 mon.a (mon.0) 870 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:14.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:13 vm01 bash[20728]: audit 2026-03-09T15:56:12.782869+0000 mon.a (mon.0) 870 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:13 vm01 bash[28152]: cluster 2026-03-09T15:56:12.657193+0000 mgr.y (mgr.14520) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:13 vm01 bash[28152]: cluster 2026-03-09T15:56:12.657193+0000 mgr.y (mgr.14520) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:13 vm01 bash[28152]: audit 2026-03-09T15:56:12.782869+0000 mon.a (mon.0) 870 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:13 vm01 bash[28152]: audit 2026-03-09T15:56:12.782869+0000 mon.a (mon.0) 870 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:15 vm09 bash[22983]: cluster 2026-03-09T15:56:14.657779+0000 mgr.y (mgr.14520) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T15:56:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:15 vm09 bash[22983]: cluster 2026-03-09T15:56:14.657779+0000 mgr.y (mgr.14520) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:15 vm01 bash[20728]: cluster 2026-03-09T15:56:14.657779+0000 mgr.y (mgr.14520) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:16.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:15 vm01 bash[20728]: cluster 2026-03-09T15:56:14.657779+0000 mgr.y (mgr.14520) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:15 vm01 bash[28152]: cluster 2026-03-09T15:56:14.657779+0000 mgr.y (mgr.14520) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:15 vm01 bash[28152]: cluster 2026-03-09T15:56:14.657779+0000 mgr.y (mgr.14520) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:16.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:56:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:56:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:17 vm09 bash[22983]: audit 2026-03-09T15:56:16.214275+0000 mgr.y (mgr.14520) 92 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:17 vm09 bash[22983]: audit 2026-03-09T15:56:16.214275+0000 mgr.y (mgr.14520) 92 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:17 vm09 bash[22983]: cluster 2026-03-09T15:56:16.658086+0000 mgr.y (mgr.14520) 93 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:17 vm09 bash[22983]: cluster 2026-03-09T15:56:16.658086+0000 mgr.y (mgr.14520) 93 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:17 vm01 bash[20728]: audit 2026-03-09T15:56:16.214275+0000 mgr.y (mgr.14520) 92 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:17 vm01 bash[20728]: audit 2026-03-09T15:56:16.214275+0000 mgr.y (mgr.14520) 92 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:17 vm01 bash[20728]: cluster 2026-03-09T15:56:16.658086+0000 mgr.y (mgr.14520) 93 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-09T15:56:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:17 vm01 bash[20728]: cluster 2026-03-09T15:56:16.658086+0000 mgr.y (mgr.14520) 93 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:18.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:17 vm01 bash[28152]: audit 2026-03-09T15:56:16.214275+0000 mgr.y (mgr.14520) 92 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:18.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:17 vm01 bash[28152]: audit 2026-03-09T15:56:16.214275+0000 mgr.y (mgr.14520) 92 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:18.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:17 vm01 bash[28152]: cluster 2026-03-09T15:56:16.658086+0000 mgr.y (mgr.14520) 93 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:18.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:17 vm01 bash[28152]: cluster 2026-03-09T15:56:16.658086+0000 mgr.y (mgr.14520) 93 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:19 vm09 bash[22983]: cluster 2026-03-09T15:56:18.658350+0000 mgr.y (mgr.14520) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:19 vm09 bash[22983]: cluster 2026-03-09T15:56:18.658350+0000 mgr.y (mgr.14520) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:19 vm09 bash[22983]: audit 2026-03-09T15:56:19.734184+0000 mon.a (mon.0) 871 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:56:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:19 vm09 bash[22983]: audit 2026-03-09T15:56:19.734184+0000 mon.a (mon.0) 871 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:56:20.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:19 vm01 bash[20728]: cluster 2026-03-09T15:56:18.658350+0000 mgr.y (mgr.14520) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:20.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:19 vm01 bash[20728]: cluster 2026-03-09T15:56:18.658350+0000 mgr.y (mgr.14520) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:20.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:19 vm01 bash[20728]: audit 2026-03-09T15:56:19.734184+0000 mon.a (mon.0) 871 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:56:20.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:19 vm01 bash[20728]: audit 
2026-03-09T15:56:19.734184+0000 mon.a (mon.0) 871 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:56:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:19 vm01 bash[28152]: cluster 2026-03-09T15:56:18.658350+0000 mgr.y (mgr.14520) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:19 vm01 bash[28152]: cluster 2026-03-09T15:56:18.658350+0000 mgr.y (mgr.14520) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:19 vm01 bash[28152]: audit 2026-03-09T15:56:19.734184+0000 mon.a (mon.0) 871 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:56:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:19 vm01 bash[28152]: audit 2026-03-09T15:56:19.734184+0000 mon.a (mon.0) 871 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:56:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:22 vm09 bash[22983]: cluster 2026-03-09T15:56:20.658879+0000 mgr.y (mgr.14520) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:22 vm09 bash[22983]: cluster 2026-03-09T15:56:20.658879+0000 mgr.y (mgr.14520) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:22 vm01 bash[20728]: cluster 2026-03-09T15:56:20.658879+0000 mgr.y (mgr.14520) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:22 vm01 bash[20728]: cluster 2026-03-09T15:56:20.658879+0000 mgr.y (mgr.14520) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:22 vm01 bash[28152]: cluster 2026-03-09T15:56:20.658879+0000 mgr.y (mgr.14520) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:22 vm01 bash[28152]: cluster 2026-03-09T15:56:20.658879+0000 mgr.y (mgr.14520) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:23 vm01 bash[20728]: cluster 2026-03-09T15:56:22.659219+0000 mgr.y (mgr.14520) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:23 vm01 bash[20728]: cluster 2026-03-09T15:56:22.659219+0000 mgr.y (mgr.14520) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 
217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:23.178 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:56:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:56:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:56:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:23 vm01 bash[28152]: cluster 2026-03-09T15:56:22.659219+0000 mgr.y (mgr.14520) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:23 vm01 bash[28152]: cluster 2026-03-09T15:56:22.659219+0000 mgr.y (mgr.14520) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:23 vm09 bash[22983]: cluster 2026-03-09T15:56:22.659219+0000 mgr.y (mgr.14520) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:23 vm09 bash[22983]: cluster 2026-03-09T15:56:22.659219+0000 mgr.y (mgr.14520) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:25 vm09 bash[22983]: cluster 2026-03-09T15:56:24.659831+0000 mgr.y (mgr.14520) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:25 vm09 bash[22983]: cluster 2026-03-09T15:56:24.659831+0000 mgr.y (mgr.14520) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:25 vm09 bash[22983]: audit 2026-03-09T15:56:25.139820+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:25 vm09 bash[22983]: audit 2026-03-09T15:56:25.139820+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:25 vm09 bash[22983]: audit 2026-03-09T15:56:25.152068+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:25 vm09 bash[22983]: audit 2026-03-09T15:56:25.152068+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:25 vm01 bash[20728]: cluster 2026-03-09T15:56:24.659831+0000 mgr.y (mgr.14520) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:25 vm01 bash[20728]: cluster 2026-03-09T15:56:24.659831+0000 mgr.y (mgr.14520) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:25 vm01 bash[20728]: 
audit 2026-03-09T15:56:25.139820+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:25 vm01 bash[20728]: audit 2026-03-09T15:56:25.139820+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:25 vm01 bash[20728]: audit 2026-03-09T15:56:25.152068+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:25 vm01 bash[20728]: audit 2026-03-09T15:56:25.152068+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:25 vm01 bash[28152]: cluster 2026-03-09T15:56:24.659831+0000 mgr.y (mgr.14520) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:25 vm01 bash[28152]: cluster 2026-03-09T15:56:24.659831+0000 mgr.y (mgr.14520) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:25 vm01 bash[28152]: audit 2026-03-09T15:56:25.139820+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:25 vm01 bash[28152]: audit 2026-03-09T15:56:25.139820+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:25 vm01 bash[28152]: audit 2026-03-09T15:56:25.152068+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:25 vm01 bash[28152]: audit 2026-03-09T15:56:25.152068+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:26.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:56:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:56:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:26 vm09 bash[22983]: audit 2026-03-09T15:56:25.733555+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:26 vm09 bash[22983]: audit 2026-03-09T15:56:25.733555+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:26 vm09 bash[22983]: audit 2026-03-09T15:56:25.741239+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:26 vm09 bash[22983]: audit 2026-03-09T15:56:25.741239+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:26 vm09 bash[22983]: audit 2026-03-09T15:56:26.325199+0000 mon.a (mon.0) 876 : audit [DBG] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:56:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:26 vm09 bash[22983]: audit 2026-03-09T15:56:26.325199+0000 mon.a (mon.0) 876 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:56:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:26 vm09 bash[22983]: audit 2026-03-09T15:56:26.325727+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:56:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:26 vm09 bash[22983]: audit 2026-03-09T15:56:26.325727+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:56:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:26 vm09 bash[22983]: audit 2026-03-09T15:56:26.334512+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:26 vm09 bash[22983]: audit 2026-03-09T15:56:26.334512+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:26 vm01 bash[20728]: audit 2026-03-09T15:56:25.733555+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:26 vm01 bash[20728]: audit 2026-03-09T15:56:25.733555+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:26 vm01 bash[20728]: audit 2026-03-09T15:56:25.741239+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:26 vm01 bash[20728]: audit 2026-03-09T15:56:25.741239+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:26 vm01 bash[20728]: audit 2026-03-09T15:56:26.325199+0000 mon.a (mon.0) 876 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:56:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:26 vm01 bash[20728]: audit 2026-03-09T15:56:26.325199+0000 mon.a (mon.0) 876 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:56:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:26 vm01 bash[20728]: audit 2026-03-09T15:56:26.325727+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:56:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:26 vm01 bash[20728]: audit 2026-03-09T15:56:26.325727+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:56:27.177 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:26 vm01 bash[20728]: audit 2026-03-09T15:56:26.334512+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:26 vm01 bash[20728]: audit 2026-03-09T15:56:26.334512+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:26 vm01 bash[28152]: audit 2026-03-09T15:56:25.733555+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:26 vm01 bash[28152]: audit 2026-03-09T15:56:25.733555+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:26 vm01 bash[28152]: audit 2026-03-09T15:56:25.741239+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:26 vm01 bash[28152]: audit 2026-03-09T15:56:25.741239+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:26 vm01 bash[28152]: audit 2026-03-09T15:56:26.325199+0000 mon.a (mon.0) 876 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:56:27.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:26 vm01 bash[28152]: audit 2026-03-09T15:56:26.325199+0000 mon.a (mon.0) 876 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:56:27.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:26 vm01 bash[28152]: audit 2026-03-09T15:56:26.325727+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:56:27.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:26 vm01 bash[28152]: audit 2026-03-09T15:56:26.325727+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:56:27.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:26 vm01 bash[28152]: audit 2026-03-09T15:56:26.334512+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:27.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:26 vm01 bash[28152]: audit 2026-03-09T15:56:26.334512+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:27 vm09 bash[22983]: audit 2026-03-09T15:56:26.216338+0000 mgr.y (mgr.14520) 98 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:27 vm09 bash[22983]: audit 2026-03-09T15:56:26.216338+0000 mgr.y (mgr.14520) 98 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:28.133 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:27 vm09 bash[22983]: cluster 2026-03-09T15:56:26.660177+0000 mgr.y (mgr.14520) 99 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:27 vm09 bash[22983]: cluster 2026-03-09T15:56:26.660177+0000 mgr.y (mgr.14520) 99 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:27 vm01 bash[20728]: audit 2026-03-09T15:56:26.216338+0000 mgr.y (mgr.14520) 98 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:27 vm01 bash[20728]: audit 2026-03-09T15:56:26.216338+0000 mgr.y (mgr.14520) 98 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:27 vm01 bash[20728]: cluster 2026-03-09T15:56:26.660177+0000 mgr.y (mgr.14520) 99 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:27 vm01 bash[20728]: cluster 2026-03-09T15:56:26.660177+0000 mgr.y (mgr.14520) 99 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:27 vm01 bash[28152]: audit 2026-03-09T15:56:26.216338+0000 mgr.y (mgr.14520) 98 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:27 vm01 bash[28152]: audit 2026-03-09T15:56:26.216338+0000 mgr.y (mgr.14520) 98 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:27 vm01 bash[28152]: cluster 2026-03-09T15:56:26.660177+0000 mgr.y (mgr.14520) 99 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:28.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:27 vm01 bash[28152]: cluster 2026-03-09T15:56:26.660177+0000 mgr.y (mgr.14520) 99 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:29.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:28 vm09 bash[22983]: audit 2026-03-09T15:56:27.789806+0000 mon.a (mon.0) 879 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:29.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:28 vm09 bash[22983]: audit 2026-03-09T15:56:27.789806+0000 mon.a (mon.0) 879 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:29.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:28 vm01 bash[20728]: audit 2026-03-09T15:56:27.789806+0000 mon.a (mon.0) 
879 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:29.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:28 vm01 bash[20728]: audit 2026-03-09T15:56:27.789806+0000 mon.a (mon.0) 879 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:29.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:28 vm01 bash[28152]: audit 2026-03-09T15:56:27.789806+0000 mon.a (mon.0) 879 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:29.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:28 vm01 bash[28152]: audit 2026-03-09T15:56:27.789806+0000 mon.a (mon.0) 879 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:29 vm09 bash[22983]: cluster 2026-03-09T15:56:28.660438+0000 mgr.y (mgr.14520) 100 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:29 vm09 bash[22983]: cluster 2026-03-09T15:56:28.660438+0000 mgr.y (mgr.14520) 100 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:30.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:29 vm01 bash[20728]: cluster 2026-03-09T15:56:28.660438+0000 mgr.y (mgr.14520) 100 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:30.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:29 vm01 bash[20728]: cluster 2026-03-09T15:56:28.660438+0000 mgr.y (mgr.14520) 100 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:29 vm01 bash[28152]: cluster 2026-03-09T15:56:28.660438+0000 mgr.y (mgr.14520) 100 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:29 vm01 bash[28152]: cluster 2026-03-09T15:56:28.660438+0000 mgr.y (mgr.14520) 100 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:31 vm09 bash[22983]: cluster 2026-03-09T15:56:30.660913+0000 mgr.y (mgr.14520) 101 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:31 vm09 bash[22983]: cluster 2026-03-09T15:56:30.660913+0000 mgr.y (mgr.14520) 101 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:31 vm01 bash[20728]: cluster 2026-03-09T15:56:30.660913+0000 mgr.y (mgr.14520) 101 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 
GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:31 vm01 bash[20728]: cluster 2026-03-09T15:56:30.660913+0000 mgr.y (mgr.14520) 101 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:31 vm01 bash[28152]: cluster 2026-03-09T15:56:30.660913+0000 mgr.y (mgr.14520) 101 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:31 vm01 bash[28152]: cluster 2026-03-09T15:56:30.660913+0000 mgr.y (mgr.14520) 101 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:33.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:56:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:56:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr: git switch -c <new-branch-name> 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr:Or undo this operation with: 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr: git switch - 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T15:56:33.695 INFO:tasks.workunit.client.0.vm01.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T15:56:33.702 DEBUG:teuthology.orchestra.run.vm01:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-09T15:56:33.749 INFO:tasks.workunit.client.0.vm01.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-09T15:56:33.750 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T15:56:33.750 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-09T15:56:33.796 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-09T15:56:33.833 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-09T15:56:33.866 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T15:56:33.867 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T15:56:33.867 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-09T15:56:33.894 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T15:56:33.898 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T15:56:33.898 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-09T15:56:33.944 INFO:tasks.workunit:Running workunits matching rados/test.sh on client.0... 2026-03-09T15:56:33.945 INFO:tasks.workunit:Running workunit rados/test.sh...
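[annotation] The orchestra.run commands traced just above build the compiled workunit helpers (direct_io, fs), write a NUL-delimited list of every executable workunit to workunits.list.client.0, and stream that list back with dd; the task then matches the requested spec (rados/test.sh) against it. A short annotated sketch of that discovery step, using the same paths as the trace (reconstructed for readability, not an additional command from the run):

    # Build the compiled helpers shipped with the workunits, if a Makefile exists.
    cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits
    if test -e Makefile ; then make ; fi
    # List every executable file, NUL-delimited, relative to the workunits dir.
    find -executable -type f -printf '%P\0' > /home/ubuntu/cephtest/workunits.list.client.0
    # The workunit task reads the list back and selects the scripts matching the
    # configured spec (here rados/test.sh) before running them on the client.
    dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout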
2026-03-09T15:56:33.945 DEBUG:teuthology.orchestra.run.vm01:workunit test rados/test.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh 2026-03-09T15:56:33.993 INFO:tasks.workunit.client.0.vm01.stderr:+ parallel=1 2026-03-09T15:56:33.993 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' '' = --serial ']' 2026-03-09T15:56:33.993 INFO:tasks.workunit.client.0.vm01.stderr:+ crimson=0 2026-03-09T15:56:33.993 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' '' = --crimson ']' 2026-03-09T15:56:33.993 INFO:tasks.workunit.client.0.vm01.stderr:+ color= 2026-03-09T15:56:33.993 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -t 1 ']' 2026-03-09T15:56:33.993 INFO:tasks.workunit.client.0.vm01.stderr:+ trap cleanup EXIT ERR HUP INT QUIT 2026-03-09T15:56:33.993 INFO:tasks.workunit.client.0.vm01.stderr:+ GTEST_OUTPUT_DIR=/home/ubuntu/cephtest/archive/unit_test_xml_report 2026-03-09T15:56:33.993 INFO:tasks.workunit.client.0.vm01.stderr:+ mkdir -p /home/ubuntu/cephtest/archive/unit_test_xml_report 2026-03-09T15:56:33.994 INFO:tasks.workunit.client.0.vm01.stderr:+ declare -A pids 2026-03-09T15:56:33.994 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:33.994 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:33.994 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_aio 2026-03-09T15:56:33.995 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_aio' 2026-03-09T15:56:33.995 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_aio 2026-03-09T15:56:33.995 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:33.998 INFO:tasks.workunit.client.0.vm01.stdout:test api_aio on pid 59597 2026-03-09T15:56:33.998 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_aio 2026-03-09T15:56:33.998 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59597 2026-03-09T15:56:33.998 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_aio on pid 59597' 2026-03-09T15:56:33.998 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59597 2026-03-09T15:56:33.998 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:33.998 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:33.998 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_aio 
--gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2>&1 | tee ceph_test_rados_api_aio.log | sed "s/^/ api_aio: /"' 2026-03-09T15:56:33.998 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_aio_pp 2026-03-09T15:56:33.998 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_aio_pp' 2026-03-09T15:56:33.998 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:33.999 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:33.999 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:33.999 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:33.999 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:33.999 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_aio: /' 2026-03-09T15:56:33.999 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_aio.log 2026-03-09T15:56:33.999 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_aio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2026-03-09T15:56:34.000 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_aio_pp 2026-03-09T15:56:34.000 INFO:tasks.workunit.client.0.vm01.stdout:test api_aio_pp on pid 59605 2026-03-09T15:56:34.001 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_aio_pp 2026-03-09T15:56:34.001 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59605 2026-03-09T15:56:34.001 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_aio_pp on pid 59605' 2026-03-09T15:56:34.001 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59605 2026-03-09T15:56:34.001 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.001 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.001 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_io 2026-03-09T15:56:34.001 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_io' 2026-03-09T15:56:34.001 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2>&1 | tee ceph_test_rados_api_aio_pp.log | sed "s/^/ api_aio_pp: /"' 2026-03-09T15:56:34.002 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.002 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.002 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.002 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.002 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_io 2026-03-09T15:56:34.002 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_aio_pp.log 2026-03-09T15:56:34.002 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2026-03-09T15:56:34.002 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_aio_pp: /' 2026-03-09T15:56:34.003 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.008 INFO:tasks.workunit.client.0.vm01.stdout:test api_io on pid 59613 2026-03-09T15:56:34.008 INFO:tasks.workunit.client.0.vm01.stderr:+ 
ff=api_io 2026-03-09T15:56:34.008 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59613 2026-03-09T15:56:34.008 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_io on pid 59613' 2026-03-09T15:56:34.008 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59613 2026-03-09T15:56:34.008 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.008 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.008 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_io_pp 2026-03-09T15:56:34.010 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_io_pp' 2026-03-09T15:56:34.010 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2>&1 | tee ceph_test_rados_api_io.log | sed "s/^/ api_io: /"' 2026-03-09T15:56:34.011 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_io_pp 2026-03-09T15:56:34.011 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.011 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.011 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.011 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.012 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.015 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_io.log 2026-03-09T15:56:34.015 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_io: /' 2026-03-09T15:56:34.020 INFO:tasks.workunit.client.0.vm01.stdout:test api_io_pp on pid 59621 2026-03-09T15:56:34.020 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_io_pp 2026-03-09T15:56:34.020 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59621 2026-03-09T15:56:34.020 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_io_pp on pid 59621' 2026-03-09T15:56:34.020 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59621 2026-03-09T15:56:34.020 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.020 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.023 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2026-03-09T15:56:34.025 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_asio 2026-03-09T15:56:34.025 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_asio' 2026-03-09T15:56:34.027 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2>&1 | tee ceph_test_rados_api_io_pp.log | sed "s/^/ api_io_pp: /"' 2026-03-09T15:56:34.028 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.028 
INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.028 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.028 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.031 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_io_pp.log 2026-03-09T15:56:34.032 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_io_pp: /' 2026-03-09T15:56:34.032 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2026-03-09T15:56:34.034 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.035 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_asio 2026-03-09T15:56:34.041 INFO:tasks.workunit.client.0.vm01.stdout:test api_asio on pid 59658 2026-03-09T15:56:34.042 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_asio 2026-03-09T15:56:34.042 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59658 2026-03-09T15:56:34.042 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_asio on pid 59658' 2026-03-09T15:56:34.042 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59658 2026-03-09T15:56:34.042 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.042 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.042 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_list 2026-03-09T15:56:34.042 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_list' 2026-03-09T15:56:34.042 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2>&1 | tee ceph_test_rados_api_asio.log | sed "s/^/ api_asio: /"' 2026-03-09T15:56:34.043 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.043 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.043 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.043 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.044 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_asio: /' 2026-03-09T15:56:34.047 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.047 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_asio.log 2026-03-09T15:56:34.050 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_list 2026-03-09T15:56:34.053 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2026-03-09T15:56:34.056 INFO:tasks.workunit.client.0.vm01.stdout:test api_list on pid 59675 2026-03-09T15:56:34.056 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_list 2026-03-09T15:56:34.056 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59675 2026-03-09T15:56:34.056 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_list on pid 59675' 2026-03-09T15:56:34.056 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59675 2026-03-09T15:56:34.056 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc 
api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.056 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.060 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_lock 2026-03-09T15:56:34.060 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_lock' 2026-03-09T15:56:34.060 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml 2>&1 | tee ceph_test_rados_api_list.log | sed "s/^/ api_list: /"' 2026-03-09T15:56:34.062 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.064 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.064 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.064 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.067 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.068 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml 2026-03-09T15:56:34.072 INFO:tasks.workunit.client.0.vm01.stdout:test api_lock on pid 59711 2026-03-09T15:56:34.072 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_lock 2026-03-09T15:56:34.072 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_list.log 2026-03-09T15:56:34.072 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_lock 2026-03-09T15:56:34.072 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59711 2026-03-09T15:56:34.072 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_lock on pid 59711' 2026-03-09T15:56:34.072 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59711 2026-03-09T15:56:34.072 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.072 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.074 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_list: /' 2026-03-09T15:56:34.077 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml 2>&1 | tee ceph_test_rados_api_lock.log | sed "s/^/ api_lock: /"' 2026-03-09T15:56:34.077 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.077 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.077 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.077 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.078 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_lock --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml 2026-03-09T15:56:34.081 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_lock_pp 2026-03-09T15:56:34.082 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_lock_pp' 2026-03-09T15:56:34.089 
INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.089 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_lock.log 2026-03-09T15:56:34.089 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_lock: /' 2026-03-09T15:56:34.090 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_lock_pp 2026-03-09T15:56:34.091 INFO:tasks.workunit.client.0.vm01.stdout:test api_lock_pp on pid 59721 2026-03-09T15:56:34.091 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_lock_pp 2026-03-09T15:56:34.091 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59721 2026-03-09T15:56:34.091 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_lock_pp on pid 59721' 2026-03-09T15:56:34.091 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59721 2026-03-09T15:56:34.091 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.091 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.092 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml 2>&1 | tee ceph_test_rados_api_lock_pp.log | sed "s/^/ api_lock_pp: /"' 2026-03-09T15:56:34.092 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_misc 2026-03-09T15:56:34.093 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.093 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.093 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.093 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.093 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_misc' 2026-03-09T15:56:34.093 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_lock_pp: /' 2026-03-09T15:56:34.094 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_lock_pp.log 2026-03-09T15:56:34.096 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml 2026-03-09T15:56:34.104 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.104 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_misc 2026-03-09T15:56:34.112 INFO:tasks.workunit.client.0.vm01.stdout:test api_misc on pid 59764 2026-03-09T15:56:34.112 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_misc 2026-03-09T15:56:34.112 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59764 2026-03-09T15:56:34.112 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_misc on pid 59764' 2026-03-09T15:56:34.112 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59764 2026-03-09T15:56:34.112 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.112 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.119 
INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml 2>&1 | tee ceph_test_rados_api_misc.log | sed "s/^/ api_misc: /"' 2026-03-09T15:56:34.119 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_misc_pp 2026-03-09T15:56:34.119 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.120 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_misc_pp' 2026-03-09T15:56:34.121 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.121 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.121 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.123 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml 2026-03-09T15:56:34.125 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_misc.log 2026-03-09T15:56:34.126 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_misc: /' 2026-03-09T15:56:34.128 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_misc_pp 2026-03-09T15:56:34.129 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.131 INFO:tasks.workunit.client.0.vm01.stdout:test api_misc_pp on pid 59788 2026-03-09T15:56:34.131 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_misc_pp 2026-03-09T15:56:34.131 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59788 2026-03-09T15:56:34.131 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_misc_pp on pid 59788' 2026-03-09T15:56:34.131 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59788 2026-03-09T15:56:34.131 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.131 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:33 vm09 bash[22983]: cluster 2026-03-09T15:56:32.661226+0000 mgr.y (mgr.14520) 102 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:33 vm09 bash[22983]: cluster 2026-03-09T15:56:32.661226+0000 mgr.y (mgr.14520) 102 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:34.135 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml 2>&1 | tee ceph_test_rados_api_misc_pp.log | sed "s/^/ api_misc_pp: /"' 2026-03-09T15:56:34.138 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.138 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.138 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.138 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.139 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_tier_pp 2026-03-09T15:56:34.139 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_tier_pp' 
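[annotation] Every suite in the trace is started with the same pattern: the name is padded with printf %25s to form the sed prefix, the gtest binary runs under bash -o pipefail with stdout and stderr teed to a per-suite log and prefixed line by line, and the background pid is recorded in the pids associative array. A condensed sketch of that launch loop, keeping the binary names, log names and GTEST_OUTPUT_DIR seen above (the for-loop in the trace covers the full list of suites; only a few are shown here):

    declare -A pids
    GTEST_OUTPUT_DIR=/home/ubuntu/cephtest/archive/unit_test_xml_report
    mkdir -p "$GTEST_OUTPUT_DIR"
    for f in api_aio api_aio_pp api_io api_io_pp ; do   # remaining suites omitted in this sketch
        r=$(printf %25s "$f")                           # right-padded prefix used by sed
        # Run the gtest binary, keep a full per-suite log, and prefix the merged
        # output so the interleaved parallel streams stay attributable.
        bash -o pipefail -exc "ceph_test_rados_$f \
            --gtest_output=xml:$GTEST_OUTPUT_DIR/$f.xml 2>&1 \
            | tee ceph_test_rados_$f.log | sed \"s/^/$r: /\"" &
        pid=$!
        echo "test $f on pid $pid"
        pids[$f]=$pid                                   # remembered for the later join step
    done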
2026-03-09T15:56:34.143 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml 2026-03-09T15:56:34.152 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_misc_pp.log 2026-03-09T15:56:34.154 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_misc_pp: /' 2026-03-09T15:56:34.160 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_tier_pp 2026-03-09T15:56:34.160 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.162 INFO:tasks.workunit.client.0.vm01.stdout:test api_tier_pp on pid 59819 2026-03-09T15:56:34.162 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_tier_pp 2026-03-09T15:56:34.162 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59819 2026-03-09T15:56:34.162 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_tier_pp on pid 59819' 2026-03-09T15:56:34.162 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59819 2026-03-09T15:56:34.162 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.162 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.162 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml 2>&1 | tee ceph_test_rados_api_tier_pp.log | sed "s/^/ api_tier_pp: /"' 2026-03-09T15:56:34.163 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.163 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.163 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.163 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.163 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_pool 2026-03-09T15:56:34.163 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_pool' 2026-03-09T15:56:34.163 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_tier_pp: /' 2026-03-09T15:56:34.164 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_tier_pp.log 2026-03-09T15:56:34.164 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml 2026-03-09T15:56:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:33 vm01 bash[20728]: cluster 2026-03-09T15:56:32.661226+0000 mgr.y (mgr.14520) 102 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:33 vm01 bash[20728]: cluster 2026-03-09T15:56:32.661226+0000 mgr.y (mgr.14520) 102 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:33 vm01 bash[28152]: cluster 2026-03-09T15:56:32.661226+0000 mgr.y (mgr.14520) 102 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
15:56:33 vm01 bash[28152]: cluster 2026-03-09T15:56:32.661226+0000 mgr.y (mgr.14520) 102 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T15:56:34.178 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.178 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_pool 2026-03-09T15:56:34.179 INFO:tasks.workunit.client.0.vm01.stdout:test api_pool on pid 59833 2026-03-09T15:56:34.179 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_pool 2026-03-09T15:56:34.179 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59833 2026-03-09T15:56:34.179 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_pool on pid 59833' 2026-03-09T15:56:34.179 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59833 2026-03-09T15:56:34.180 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.180 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.184 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_snapshots 2026-03-09T15:56:34.185 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_snapshots' 2026-03-09T15:56:34.185 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml 2>&1 | tee ceph_test_rados_api_pool.log | sed "s/^/ api_pool: /"' 2026-03-09T15:56:34.189 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.189 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.189 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.189 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.189 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml 2026-03-09T15:56:34.191 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_pool.log 2026-03-09T15:56:34.195 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_pool: /' 2026-03-09T15:56:34.195 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_snapshots 2026-03-09T15:56:34.196 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.197 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_snapshots 2026-03-09T15:56:34.197 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59872 2026-03-09T15:56:34.197 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_snapshots on pid 59872' 2026-03-09T15:56:34.197 INFO:tasks.workunit.client.0.vm01.stdout:test api_snapshots on pid 59872 2026-03-09T15:56:34.198 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59872 2026-03-09T15:56:34.198 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.198 INFO:tasks.workunit.client.0.vm01.stderr:+ 
'[' 1 -eq 1 ']' 2026-03-09T15:56:34.199 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml 2>&1 | tee ceph_test_rados_api_snapshots.log | sed "s/^/ api_snapshots: /"' 2026-03-09T15:56:34.200 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.200 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.200 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.200 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.200 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_snapshots: /' 2026-03-09T15:56:34.202 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_snapshots.log 2026-03-09T15:56:34.203 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml 2026-03-09T15:56:34.212 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_snapshots_pp 2026-03-09T15:56:34.212 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_snapshots_pp' 2026-03-09T15:56:34.224 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.228 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_snapshots_pp 2026-03-09T15:56:34.228 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_snapshots_pp 2026-03-09T15:56:34.229 INFO:tasks.workunit.client.0.vm01.stdout:test api_snapshots_pp on pid 59906 2026-03-09T15:56:34.229 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59906 2026-03-09T15:56:34.229 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_snapshots_pp on pid 59906' 2026-03-09T15:56:34.229 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59906 2026-03-09T15:56:34.229 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.229 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.229 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml 2>&1 | tee ceph_test_rados_api_snapshots_pp.log | sed "s/^/ api_snapshots_pp: /"' 2026-03-09T15:56:34.230 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.230 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.230 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.230 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.230 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_snapshots_pp: /' 2026-03-09T15:56:34.231 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_snapshots_pp.log 2026-03-09T15:56:34.232 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml 2026-03-09T15:56:34.237 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_stat 2026-03-09T15:56:34.237 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_stat' 2026-03-09T15:56:34.248 
INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.250 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_stat 2026-03-09T15:56:34.251 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_stat 2026-03-09T15:56:34.251 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59946 2026-03-09T15:56:34.251 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_stat on pid 59946' 2026-03-09T15:56:34.251 INFO:tasks.workunit.client.0.vm01.stdout:test api_stat on pid 59946 2026-03-09T15:56:34.251 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59946 2026-03-09T15:56:34.251 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.251 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.252 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml 2>&1 | tee ceph_test_rados_api_stat.log | sed "s/^/ api_stat: /"' 2026-03-09T15:56:34.255 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.255 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.255 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.255 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.255 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_stat: /' 2026-03-09T15:56:34.255 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_stat.log 2026-03-09T15:56:34.258 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_stat --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml 2026-03-09T15:56:34.260 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_stat_pp 2026-03-09T15:56:34.261 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_stat_pp' 2026-03-09T15:56:34.264 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.267 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_stat_pp 2026-03-09T15:56:34.268 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_stat_pp 2026-03-09T15:56:34.268 INFO:tasks.workunit.client.0.vm01.stdout:test api_stat_pp on pid 59963 2026-03-09T15:56:34.268 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59963 2026-03-09T15:56:34.268 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_stat_pp on pid 59963' 2026-03-09T15:56:34.268 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59963 2026-03-09T15:56:34.268 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.268 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.268 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml 2>&1 | tee 
ceph_test_rados_api_stat_pp.log | sed "s/^/ api_stat_pp: /"' 2026-03-09T15:56:34.269 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.269 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.269 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.269 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.272 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_watch_notify 2026-03-09T15:56:34.272 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_stat_pp: /' 2026-03-09T15:56:34.272 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_watch_notify' 2026-03-09T15:56:34.273 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_stat_pp.log 2026-03-09T15:56:34.276 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml 2026-03-09T15:56:34.284 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_watch_notify 2026-03-09T15:56:34.285 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.286 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_watch_notify 2026-03-09T15:56:34.286 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59985 2026-03-09T15:56:34.286 INFO:tasks.workunit.client.0.vm01.stdout:test api_watch_notify on pid 59985 2026-03-09T15:56:34.286 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_watch_notify on pid 59985' 2026-03-09T15:56:34.286 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59985 2026-03-09T15:56:34.286 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.286 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.286 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_watch_notify_pp 2026-03-09T15:56:34.286 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_watch_notify_pp' 2026-03-09T15:56:34.286 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml 2>&1 | tee ceph_test_rados_api_watch_notify.log | sed "s/^/ api_watch_notify: /"' 2026-03-09T15:56:34.287 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.287 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.287 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.287 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.287 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_watch_notify.log 2026-03-09T15:56:34.288 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml 2026-03-09T15:56:34.289 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.294 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_watch_notify: /' 2026-03-09T15:56:34.295 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_watch_notify_pp 2026-03-09T15:56:34.295 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_watch_notify_pp 
2026-03-09T15:56:34.295 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59996 2026-03-09T15:56:34.296 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_watch_notify_pp on pid 59996' 2026-03-09T15:56:34.296 INFO:tasks.workunit.client.0.vm01.stdout:test api_watch_notify_pp on pid 59996 2026-03-09T15:56:34.296 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=59996 2026-03-09T15:56:34.296 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.296 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.301 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_cmd 2026-03-09T15:56:34.301 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_cmd' 2026-03-09T15:56:34.303 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml 2>&1 | tee ceph_test_rados_api_watch_notify_pp.log | sed "s/^/ api_watch_notify_pp: /"' 2026-03-09T15:56:34.304 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.304 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.304 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.304 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.308 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_watch_notify_pp: /' 2026-03-09T15:56:34.309 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_watch_notify_pp.log 2026-03-09T15:56:34.310 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.310 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_cmd 2026-03-09T15:56:34.310 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml 2026-03-09T15:56:34.311 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_cmd 2026-03-09T15:56:34.312 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60022 2026-03-09T15:56:34.312 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_cmd on pid 60022' 2026-03-09T15:56:34.312 INFO:tasks.workunit.client.0.vm01.stdout:test api_cmd on pid 60022 2026-03-09T15:56:34.312 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60022 2026-03-09T15:56:34.312 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.312 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.313 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml 2>&1 | tee ceph_test_rados_api_cmd.log | sed "s/^/ api_cmd: /"' 2026-03-09T15:56:34.314 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_cmd_pp 
2026-03-09T15:56:34.314 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_cmd_pp' 2026-03-09T15:56:34.315 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.315 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.315 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.315 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.316 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_cmd: /' 2026-03-09T15:56:34.317 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_cmd.log 2026-03-09T15:56:34.317 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml 2026-03-09T15:56:34.318 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_cmd_pp 2026-03-09T15:56:34.318 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.324 INFO:tasks.workunit.client.0.vm01.stdout:test api_cmd_pp on pid 60043 2026-03-09T15:56:34.324 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_cmd_pp 2026-03-09T15:56:34.324 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60043 2026-03-09T15:56:34.324 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_cmd_pp on pid 60043' 2026-03-09T15:56:34.324 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60043 2026-03-09T15:56:34.324 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.324 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.328 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml 2>&1 | tee ceph_test_rados_api_cmd_pp.log | sed "s/^/ api_cmd_pp: /"' 2026-03-09T15:56:34.334 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.334 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.334 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.334 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.334 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml 2026-03-09T15:56:34.336 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_service 2026-03-09T15:56:34.336 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_service' 2026-03-09T15:56:34.346 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_cmd_pp: /' 2026-03-09T15:56:34.348 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_cmd_pp.log 2026-03-09T15:56:34.352 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.352 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_service 2026-03-09T15:56:34.357 INFO:tasks.workunit.client.0.vm01.stdout:test api_service on pid 60079 2026-03-09T15:56:34.357 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_service 2026-03-09T15:56:34.357 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60079 2026-03-09T15:56:34.358 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_service on pid 60079' 2026-03-09T15:56:34.358 
INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60079 2026-03-09T15:56:34.358 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.358 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.358 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml 2>&1 | tee ceph_test_rados_api_service.log | sed "s/^/ api_service: /"' 2026-03-09T15:56:34.358 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.358 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.358 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.358 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.360 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_service_pp 2026-03-09T15:56:34.360 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_service_pp' 2026-03-09T15:56:34.360 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml 2026-03-09T15:56:34.367 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_service.log 2026-03-09T15:56:34.369 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_service: /' 2026-03-09T15:56:34.375 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.377 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_service_pp 2026-03-09T15:56:34.381 INFO:tasks.workunit.client.0.vm01.stdout:test api_service_pp on pid 60118 2026-03-09T15:56:34.382 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_service_pp 2026-03-09T15:56:34.382 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60118 2026-03-09T15:56:34.382 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_service_pp on pid 60118' 2026-03-09T15:56:34.382 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60118 2026-03-09T15:56:34.382 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml 2>&1 | tee ceph_test_rados_api_service_pp.log | sed "s/^/ api_service_pp: /"' 2026-03-09T15:56:34.382 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.382 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.382 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.382 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.387 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_service_pp: /' 2026-03-09T15:56:34.387 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml 2026-03-09T15:56:34.387 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_service_pp.log 2026-03-09T15:56:34.389 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat 
api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.389 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.396 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_c_write_operations 2026-03-09T15:56:34.396 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_c_write_operations' 2026-03-09T15:56:34.408 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_c_write_operations 2026-03-09T15:56:34.408 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.413 INFO:tasks.workunit.client.0.vm01.stdout:test api_c_write_operations on pid 60150 2026-03-09T15:56:34.413 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_c_write_operations 2026-03-09T15:56:34.413 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60150 2026-03-09T15:56:34.413 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_c_write_operations on pid 60150' 2026-03-09T15:56:34.413 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60150 2026-03-09T15:56:34.413 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.413 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.416 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_write_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml 2>&1 | tee ceph_test_rados_api_c_write_operations.log | sed "s/^/ api_c_write_operations: /"' 2026-03-09T15:56:34.417 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s api_c_read_operations 2026-03-09T15:56:34.417 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' api_c_read_operations' 2026-03-09T15:56:34.418 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.419 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.419 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.419 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.422 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_c_write_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml 2026-03-09T15:56:34.424 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_c_write_operations.log 2026-03-09T15:56:34.426 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_c_write_operations: /' 2026-03-09T15:56:34.429 INFO:tasks.workunit.client.0.vm01.stderr:++ echo api_c_read_operations 2026-03-09T15:56:34.429 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.434 INFO:tasks.workunit.client.0.vm01.stdout:test api_c_read_operations on pid 60188 2026-03-09T15:56:34.434 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=api_c_read_operations 2026-03-09T15:56:34.434 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60188 2026-03-09T15:56:34.434 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test api_c_read_operations on pid 60188' 2026-03-09T15:56:34.434 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60188 
2026-03-09T15:56:34.434 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.434 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.436 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml 2>&1 | tee ceph_test_rados_api_c_read_operations.log | sed "s/^/ api_c_read_operations: /"' 2026-03-09T15:56:34.444 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s list_parallel 2026-03-09T15:56:34.445 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' list_parallel' 2026-03-09T15:56:34.445 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.445 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.445 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.445 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.448 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ api_c_read_operations: /' 2026-03-09T15:56:34.450 INFO:tasks.workunit.client.0.vm01.stderr:++ echo list_parallel 2026-03-09T15:56:34.454 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.457 INFO:tasks.workunit.client.0.vm01.stdout:test list_parallel on pid 60240 2026-03-09T15:56:34.457 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=list_parallel 2026-03-09T15:56:34.457 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60240 2026-03-09T15:56:34.457 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test list_parallel on pid 60240' 2026-03-09T15:56:34.457 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60240 2026-03-09T15:56:34.457 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.457 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.463 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml 2026-03-09T15:56:34.464 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_api_c_read_operations.log 2026-03-09T15:56:34.469 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml 2>&1 | tee ceph_test_rados_list_parallel.log | sed "s/^/ list_parallel: /"' 2026-03-09T15:56:34.469 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s open_pools_parallel 2026-03-09T15:56:34.469 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' open_pools_parallel' 2026-03-09T15:56:34.471 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.471 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.471 
INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.471 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.472 INFO:tasks.workunit.client.0.vm01.stderr:++ echo open_pools_parallel 2026-03-09T15:56:34.472 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml 2026-03-09T15:56:34.476 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ list_parallel: /' 2026-03-09T15:56:34.476 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_list_parallel.log 2026-03-09T15:56:34.481 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.483 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=open_pools_parallel 2026-03-09T15:56:34.483 INFO:tasks.workunit.client.0.vm01.stdout:test open_pools_parallel on pid 60264 2026-03-09T15:56:34.483 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60264 2026-03-09T15:56:34.483 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test open_pools_parallel on pid 60264' 2026-03-09T15:56:34.483 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60264 2026-03-09T15:56:34.483 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T15:56:34.483 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.486 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s delete_pools_parallel 2026-03-09T15:56:34.486 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' delete_pools_parallel' 2026-03-09T15:56:34.488 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml 2>&1 | tee ceph_test_rados_open_pools_parallel.log | sed "s/^/ open_pools_parallel: /"' 2026-03-09T15:56:34.489 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.489 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.489 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.489 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.491 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ open_pools_parallel: /' 2026-03-09T15:56:34.493 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_open_pools_parallel.log 2026-03-09T15:56:34.493 INFO:tasks.workunit.client.0.vm01.stderr:++ echo delete_pools_parallel 2026-03-09T15:56:34.493 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml 2026-03-09T15:56:34.500 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.505 INFO:tasks.workunit.client.0.vm01.stdout:test delete_pools_parallel on pid 60297 2026-03-09T15:56:34.505 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=delete_pools_parallel 2026-03-09T15:56:34.505 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60297 2026-03-09T15:56:34.505 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test delete_pools_parallel on pid 60297' 2026-03-09T15:56:34.505 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60297 
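[editor's note] The shell trace above is the launcher loop of the rados test workunit: each GoogleTest binary is started in a background subshell, its combined output is tee'd to a per-test log and prefixed with the test name via sed, and the child PID is recorded in a pids array keyed by test name so the script can later wait on every test and collect failures. A minimal, illustrative sketch of that pattern (not a verbatim copy of qa/workunits/rados/test.sh; the test list and log paths below are placeholders):

```bash
#!/usr/bin/env bash
# Illustrative sketch of the launch-and-wait pattern seen in the trace above.
# Assumptions: each name maps to a binary "ceph_test_rados_<name>"; the real
# workunit additionally passes --gtest_output=xml:... and a parallel/serial switch.

declare -A pids                                  # child PID per test name

for f in api_aio api_io list_parallel; do        # placeholder subset of tests
    # Run the test in a background subshell; tee its output to a per-test log
    # and prefix every line with the test name so parallel output stays readable.
    bash -o pipefail -exc "ceph_test_rados_$f 2>&1 | tee ceph_test_rados_$f.log | sed \"s/^/ $f: /\"" &
    pid=$!
    echo "test $f on pid $pid"
    pids[$f]=$pid
done

ret=0
for t in "${!pids[@]}"; do                       # wait for every child, remember failures
    if ! wait "${pids[$t]}"; then
        echo "error in $t (${pids[$t]})"
        ret=1
    fi
done
exit $ret
```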
2026-03-09T15:56:34.505 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.505 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.514 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s cls 2026-03-09T15:56:34.514 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' cls' 2026-03-09T15:56:34.514 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml 2>&1 | tee ceph_test_rados_delete_pools_parallel.log | sed "s/^/ delete_pools_parallel: /"' 2026-03-09T15:56:34.515 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.515 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.515 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.515 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.518 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml 2026-03-09T15:56:34.522 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.522 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ delete_pools_parallel: /' 2026-03-09T15:56:34.523 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_rados_delete_pools_parallel.log 2026-03-09T15:56:34.525 INFO:tasks.workunit.client.0.vm01.stderr:++ echo cls 2026-03-09T15:56:34.526 INFO:tasks.workunit.client.0.vm01.stdout:test cls on pid 60326 2026-03-09T15:56:34.526 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=cls 2026-03-09T15:56:34.526 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60326 2026-03-09T15:56:34.526 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test cls on pid 60326' 2026-03-09T15:56:34.526 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60326 2026-03-09T15:56:34.526 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.526 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.526 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cls 2>&1 | tee ceph_test_neorados_cls.log | sed "s/^/ cls: /"' 2026-03-09T15:56:34.526 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.526 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.527 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.527 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.527 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s cmd 2026-03-09T15:56:34.528 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' cmd' 2026-03-09T15:56:34.537 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.543 INFO:tasks.workunit.client.0.vm01.stderr:++ echo cmd 2026-03-09T15:56:34.545 INFO:tasks.workunit.client.0.vm01.stdout:test cmd on pid 60357 2026-03-09T15:56:34.545 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=cmd 2026-03-09T15:56:34.545 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60357 2026-03-09T15:56:34.545 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test cmd on pid 60357' 2026-03-09T15:56:34.545 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60357 
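[editor's note] Note the difference in invocation visible in the trace: the ceph_test_rados_api_* binaries are run with GoogleTest's XML report option (--gtest_output=xml:...), so each produces a JUnit-style report under the archive's unit_test_xml_report directory, whereas the ceph_test_neorados_* binaries are launched without it. The flag is standard GoogleTest; an illustrative invocation (the path mirrors the one in the trace above):

```bash
# Standard GoogleTest flag: write a JUnit-style XML report in addition to the
# normal console output. Directory and file name mirror the trace above.
mkdir -p /home/ubuntu/cephtest/archive/unit_test_xml_report
ceph_test_rados_api_io \
  --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml
```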
2026-03-09T15:56:34.545 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.545 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.546 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_neorados_cls 2026-03-09T15:56:34.547 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_cls.log 2026-03-09T15:56:34.548 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ cls: /' 2026-03-09T15:56:34.554 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s handler_error 2026-03-09T15:56:34.554 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' handler_error' 2026-03-09T15:56:34.554 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cmd 2>&1 | tee ceph_test_neorados_cmd.log | sed "s/^/ cmd: /"' 2026-03-09T15:56:34.555 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.555 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.555 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.555 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.566 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_neorados_cmd 2026-03-09T15:56:34.566 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_cmd.log 2026-03-09T15:56:34.566 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ cmd: /' 2026-03-09T15:56:34.574 INFO:tasks.workunit.client.0.vm01.stderr:++ echo handler_error 2026-03-09T15:56:34.574 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.575 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=handler_error 2026-03-09T15:56:34.576 INFO:tasks.workunit.client.0.vm01.stdout:test handler_error on pid 60394 2026-03-09T15:56:34.576 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60394 2026-03-09T15:56:34.576 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test handler_error on pid 60394' 2026-03-09T15:56:34.576 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60394 2026-03-09T15:56:34.576 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.576 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.576 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s io 2026-03-09T15:56:34.576 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' io' 2026-03-09T15:56:34.576 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_handler_error 2>&1 | tee ceph_test_neorados_handler_error.log | sed "s/^/ handler_error: /"' 2026-03-09T15:56:34.577 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.579 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.580 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.580 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.581 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_handler_error.log 2026-03-09T15:56:34.581 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ handler_error: /' 2026-03-09T15:56:34.582 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_neorados_handler_error 2026-03-09T15:56:34.585 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.586 INFO:tasks.workunit.client.0.vm01.stderr:++ echo io 2026-03-09T15:56:34.591 
INFO:tasks.workunit.client.0.vm01.stdout:test io on pid 60415 2026-03-09T15:56:34.591 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=io 2026-03-09T15:56:34.591 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60415 2026-03-09T15:56:34.591 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test io on pid 60415' 2026-03-09T15:56:34.591 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60415 2026-03-09T15:56:34.591 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.591 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.596 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_io 2>&1 | tee ceph_test_neorados_io.log | sed "s/^/ io: /"' 2026-03-09T15:56:34.596 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s ec_io 2026-03-09T15:56:34.596 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' ec_io' 2026-03-09T15:56:34.601 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.601 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.601 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.601 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.604 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ io: /' 2026-03-09T15:56:34.608 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_io.log 2026-03-09T15:56:34.608 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_neorados_io 2026-03-09T15:56:34.610 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.613 INFO:tasks.workunit.client.0.vm01.stderr:++ echo ec_io 2026-03-09T15:56:34.615 INFO:tasks.workunit.client.0.vm01.stdout:test ec_io on pid 60451 2026-03-09T15:56:34.615 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=ec_io 2026-03-09T15:56:34.615 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60451 2026-03-09T15:56:34.615 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test ec_io on pid 60451' 2026-03-09T15:56:34.615 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60451 2026-03-09T15:56:34.615 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.615 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.620 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s list 2026-03-09T15:56:34.621 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' list' 2026-03-09T15:56:34.621 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_io 2>&1 | tee ceph_test_neorados_ec_io.log | sed "s/^/ ec_io: /"' 2026-03-09T15:56:34.622 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.622 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.622 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.622 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.626 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_neorados_ec_io 2026-03-09T15:56:34.627 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ ec_io: /' 2026-03-09T15:56:34.629 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_ec_io.log 2026-03-09T15:56:34.630 INFO:tasks.workunit.client.0.vm01.stderr:++ echo list 2026-03-09T15:56:34.630 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.632 
INFO:tasks.workunit.client.0.vm01.stdout:test list on pid 60475 2026-03-09T15:56:34.632 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=list 2026-03-09T15:56:34.632 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60475 2026-03-09T15:56:34.632 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test list on pid 60475' 2026-03-09T15:56:34.632 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60475 2026-03-09T15:56:34.632 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.632 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.634 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_list 2>&1 | tee ceph_test_neorados_list.log | sed "s/^/ list: /"' 2026-03-09T15:56:34.635 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.635 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.635 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.635 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.635 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_neorados_list 2026-03-09T15:56:34.636 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s ec_list 2026-03-09T15:56:34.637 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' ec_list' 2026-03-09T15:56:34.645 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.645 INFO:tasks.workunit.client.0.vm01.stderr:++ echo ec_list 2026-03-09T15:56:34.646 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_list.log 2026-03-09T15:56:34.646 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ list: /' 2026-03-09T15:56:34.650 INFO:tasks.workunit.client.0.vm01.stdout:test ec_list on pid 60501 2026-03-09T15:56:34.650 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=ec_list 2026-03-09T15:56:34.650 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60501 2026-03-09T15:56:34.650 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test ec_list on pid 60501' 2026-03-09T15:56:34.650 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60501 2026-03-09T15:56:34.650 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.650 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.650 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s misc 2026-03-09T15:56:34.651 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' misc' 2026-03-09T15:56:34.651 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_list 2>&1 | tee ceph_test_neorados_ec_list.log | sed "s/^/ ec_list: /"' 2026-03-09T15:56:34.651 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.652 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.652 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.652 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.652 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ ec_list: /' 2026-03-09T15:56:34.652 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_ec_list.log 2026-03-09T15:56:34.656 INFO:tasks.workunit.client.0.vm01.stderr:++ echo misc 2026-03-09T15:56:34.657 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.660 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_neorados_ec_list 
2026-03-09T15:56:34.664 INFO:tasks.workunit.client.0.vm01.stdout:test misc on pid 60527 2026-03-09T15:56:34.664 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=misc 2026-03-09T15:56:34.664 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60527 2026-03-09T15:56:34.665 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test misc on pid 60527' 2026-03-09T15:56:34.665 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60527 2026-03-09T15:56:34.665 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.665 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.665 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_misc 2>&1 | tee ceph_test_neorados_misc.log | sed "s/^/ misc: /"' 2026-03-09T15:56:34.666 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s pool 2026-03-09T15:56:34.668 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' pool' 2026-03-09T15:56:34.668 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.668 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.668 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.668 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.669 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_neorados_misc 2026-03-09T15:56:34.670 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_misc.log 2026-03-09T15:56:34.670 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ misc: /' 2026-03-09T15:56:34.675 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.675 INFO:tasks.workunit.client.0.vm01.stderr:++ echo pool 2026-03-09T15:56:34.681 INFO:tasks.workunit.client.0.vm01.stdout:test pool on pid 60550 2026-03-09T15:56:34.681 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=pool 2026-03-09T15:56:34.681 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60550 2026-03-09T15:56:34.681 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test pool on pid 60550' 2026-03-09T15:56:34.681 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60550 2026-03-09T15:56:34.681 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.681 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.685 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s read_operations 2026-03-09T15:56:34.685 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' read_operations' 2026-03-09T15:56:34.685 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_pool 2>&1 | tee ceph_test_neorados_pool.log | sed "s/^/ pool: /"' 2026-03-09T15:56:34.687 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.688 INFO:tasks.workunit.client.0.vm01.stderr:++ echo read_operations 2026-03-09T15:56:34.688 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.688 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.688 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.690 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ pool: /' 2026-03-09T15:56:34.691 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.692 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_pool.log 2026-03-09T15:56:34.692 INFO:tasks.workunit.client.0.vm01.stdout:test 
read_operations on pid 60568 2026-03-09T15:56:34.692 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=read_operations 2026-03-09T15:56:34.692 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60568 2026-03-09T15:56:34.692 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test read_operations on pid 60568' 2026-03-09T15:56:34.692 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60568 2026-03-09T15:56:34.692 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.692 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.692 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_neorados_pool 2026-03-09T15:56:34.693 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_read_operations 2>&1 | tee ceph_test_neorados_read_operations.log | sed "s/^/ read_operations: /"' 2026-03-09T15:56:34.694 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s snapshots 2026-03-09T15:56:34.694 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' snapshots' 2026-03-09T15:56:34.695 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.695 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.695 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.695 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.696 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ read_operations: /' 2026-03-09T15:56:34.697 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_read_operations.log 2026-03-09T15:56:34.697 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.698 INFO:tasks.workunit.client.0.vm01.stderr:++ echo snapshots 2026-03-09T15:56:34.699 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_neorados_read_operations 2026-03-09T15:56:34.701 INFO:tasks.workunit.client.0.vm01.stdout:test snapshots on pid 60585 2026-03-09T15:56:34.701 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=snapshots 2026-03-09T15:56:34.701 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60585 2026-03-09T15:56:34.701 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test snapshots on pid 60585' 2026-03-09T15:56:34.701 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60585 2026-03-09T15:56:34.701 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.701 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.703 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s watch_notify 2026-03-09T15:56:34.703 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' watch_notify' 2026-03-09T15:56:34.704 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_snapshots 2>&1 | tee ceph_test_neorados_snapshots.log | sed "s/^/ snapshots: /"' 2026-03-09T15:56:34.704 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.705 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.706 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.706 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.712 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_snapshots.log 2026-03-09T15:56:34.713 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ snapshots: /' 2026-03-09T15:56:34.714 INFO:tasks.workunit.client.0.vm01.stderr:+ 
ceph_test_neorados_snapshots 2026-03-09T15:56:34.717 INFO:tasks.workunit.client.0.vm01.stderr:++ echo watch_notify 2026-03-09T15:56:34.722 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.729 INFO:tasks.workunit.client.0.vm01.stdout:test watch_notify on pid 60615 2026-03-09T15:56:34.730 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=watch_notify 2026-03-09T15:56:34.730 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60615 2026-03-09T15:56:34.730 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test watch_notify on pid 60615' 2026-03-09T15:56:34.730 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60615 2026-03-09T15:56:34.730 INFO:tasks.workunit.client.0.vm01.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T15:56:34.730 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.731 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_watch_notify 2>&1 | tee ceph_test_neorados_watch_notify.log | sed "s/^/ watch_notify: /"' 2026-03-09T15:56:34.732 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.732 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.732 INFO:tasks.workunit.client.0.vm01.stderr:++ printf %25s write_operations 2026-03-09T15:56:34.733 INFO:tasks.workunit.client.0.vm01.stderr:+ r=' write_operations' 2026-03-09T15:56:34.733 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.734 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.734 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph_test_neorados_watch_notify 2026-03-09T15:56:34.738 INFO:tasks.workunit.client.0.vm01.stderr:++ echo write_operations 2026-03-09T15:56:34.740 INFO:tasks.workunit.client.0.vm01.stderr:++ awk '{print $1}' 2026-03-09T15:56:34.744 INFO:tasks.workunit.client.0.vm01.stdout:test write_operations on pid 60637 2026-03-09T15:56:34.744 INFO:tasks.workunit.client.0.vm01.stderr:+ ff=write_operations 2026-03-09T15:56:34.744 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60637 2026-03-09T15:56:34.744 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'test write_operations on pid 60637' 2026-03-09T15:56:34.744 INFO:tasks.workunit.client.0.vm01.stderr:+ pids[$f]=60637 2026-03-09T15:56:34.744 INFO:tasks.workunit.client.0.vm01.stderr:+ ret=0 2026-03-09T15:56:34.744 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T15:56:34.744 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T15:56:34.744 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59819 2026-03-09T15:56:34.744 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59819 2026-03-09T15:56:34.745 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_watch_notify.log 2026-03-09T15:56:34.745 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ watch_notify: /' 2026-03-09T15:56:34.748 INFO:tasks.workunit.client.0.vm01.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_write_operations 2>&1 | tee ceph_test_neorados_write_operations.log | sed "s/^/ write_operations: /"' 2026-03-09T15:56:34.752 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' -z '' ']' 2026-03-09T15:56:34.752 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.752 INFO:tasks.workunit.client.0.vm01.stderr:+ case $- in 2026-03-09T15:56:34.752 INFO:tasks.workunit.client.0.vm01.stderr:+ return 2026-03-09T15:56:34.754 INFO:tasks.workunit.client.0.vm01.stderr:+ 
ceph_test_neorados_write_operations 2026-03-09T15:56:34.755 INFO:tasks.workunit.client.0.vm01.stderr:+ tee ceph_test_neorados_write_operations.log 2026-03-09T15:56:34.758 INFO:tasks.workunit.client.0.vm01.stderr:+ sed 's/^/ write_operations: /' 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.355669+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.355669+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.356183+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.356183+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.358139+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.358139+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.358608+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.358608+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.359273+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.359273+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 
192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.359740+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.359740+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.398541+0000 mon.c (mon.2) 32 : audit [DBG] from='client.? 192.168.123.101:0/2936967683' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.398541+0000 mon.c (mon.2) 32 : audit [DBG] from='client.? 192.168.123.101:0/2936967683' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.676378+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.676378+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.676845+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.676845+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.709466+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 
192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.709466+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.713225+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:34 vm01 bash[20728]: audit 2026-03-09T15:56:34.713225+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.355669+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.355669+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.356183+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.356183+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.358139+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.358139+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.358608+0000 mon.a (mon.0) 881 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.358608+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.359273+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.359273+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.359740+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.359740+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.398541+0000 mon.c (mon.2) 32 : audit [DBG] from='client.? 192.168.123.101:0/2936967683' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.398541+0000 mon.c (mon.2) 32 : audit [DBG] from='client.? 192.168.123.101:0/2936967683' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.676378+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.676378+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.676845+0000 mon.a (mon.0) 883 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.676845+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.709466+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.709466+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.713225+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:34 vm01 bash[28152]: audit 2026-03-09T15:56:34.713225+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.355669+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.355669+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.356183+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.356183+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.358139+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.358139+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.358608+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.358608+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.359273+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.359273+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.359740+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.359740+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.398541+0000 mon.c (mon.2) 32 : audit [DBG] from='client.? 192.168.123.101:0/2936967683' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.398541+0000 mon.c (mon.2) 32 : audit [DBG] from='client.? 
192.168.123.101:0/2936967683' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.676378+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.676378+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.676845+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.676845+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.709466+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.709466+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.713225+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:34 vm09 bash[22983]: audit 2026-03-09T15:56:34.713225+0000 mon.a (mon.0) 884 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:36.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:56:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:56:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: cluster 2026-03-09T15:56:34.661691+0000 mgr.y (mgr.14520) 103 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: cluster 2026-03-09T15:56:34.661691+0000 mgr.y (mgr.14520) 103 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.864932+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.864932+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.865054+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.865054+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.865151+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.865151+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.901417+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 
192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.901417+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.904660+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.904660+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.904787+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.904787+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.909941+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.101:0/3617549194' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.909941+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.101:0/3617549194' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.910080+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 
192.168.123.101:0/2302588071' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.910080+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.101:0/2302588071' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.913812+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.101:0/3715895120' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.913812+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.101:0/3715895120' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.915489+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.101:0/3748374573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.915489+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.101:0/3748374573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.916067+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.101:0/1704571248' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.916067+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.101:0/1704571248' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.916173+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 192.168.123.101:0/1746178058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.916173+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 
192.168.123.101:0/1746178058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.917373+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.101:0/1487994880' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.917373+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.101:0/1487994880' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.917969+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.101:0/1843098342' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.917969+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.101:0/1843098342' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.918363+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.101:0/475272767' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.918363+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.101:0/475272767' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.919125+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.101:0/3172791290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.919125+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.101:0/3172791290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.921048+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 
192.168.123.101:0/2877949916' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.921048+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.101:0/2877949916' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.921491+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.101:0/855824016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.921491+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.101:0/855824016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: cluster 2026-03-09T15:56:34.923813+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: cluster 2026-03-09T15:56:34.923813+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.926142+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 192.168.123.101:0/2133398005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.926142+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 192.168.123.101:0/2133398005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.926223+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.101:0/3705759561' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.926223+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.101:0/3705759561' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.969061+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 
192.168.123.101:0/388725981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.969061+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 192.168.123.101:0/388725981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.969190+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.969190+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.969283+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.969283+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.977319+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.977319+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.977509+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 192.168.123.101:0/2782918641' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.977509+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 
192.168.123.101:0/2782918641' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.977579+0000 mon.a (mon.0) 894 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.977579+0000 mon.a (mon.0) 894 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.977632+0000 mon.a (mon.0) 895 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.977632+0000 mon.a (mon.0) 895 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.988685+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.988685+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.988885+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.988885+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989247+0000 mon.a (mon.0) 898 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989247+0000 mon.a (mon.0) 898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989386+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989386+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989561+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989561+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989678+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989678+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989782+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989782+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989916+0000 mon.a (mon.0) 903 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.989916+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990091+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990091+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990234+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990234+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990366+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990366+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990474+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990474+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990581+0000 mon.a (mon.0) 908 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990581+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990730+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990730+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990855+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:36 vm09 bash[22983]: audit 2026-03-09T15:56:34.990855+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: cluster 2026-03-09T15:56:34.661691+0000 mgr.y (mgr.14520) 103 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: cluster 2026-03-09T15:56:34.661691+0000 mgr.y (mgr.14520) 103 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.864932+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.864932+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.865054+0000 mon.a (mon.0) 886 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.865054+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.865151+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.865151+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.901417+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.901417+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.904660+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.904660+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.904787+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 
192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.904787+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.909941+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.101:0/3617549194' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.909941+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.101:0/3617549194' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.910080+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.101:0/2302588071' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.910080+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.101:0/2302588071' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.913812+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.101:0/3715895120' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.913812+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.101:0/3715895120' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.915489+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.101:0/3748374573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.915489+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 
192.168.123.101:0/3748374573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.916067+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.101:0/1704571248' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.916067+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.101:0/1704571248' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.916173+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 192.168.123.101:0/1746178058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.916173+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 192.168.123.101:0/1746178058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.917373+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.101:0/1487994880' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.917373+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.101:0/1487994880' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.917969+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.101:0/1843098342' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.917969+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.101:0/1843098342' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.918363+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 
192.168.123.101:0/475272767' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.918363+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.101:0/475272767' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.919125+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.101:0/3172791290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.919125+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.101:0/3172791290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.921048+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.101:0/2877949916' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.921048+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.101:0/2877949916' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.921491+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.101:0/855824016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.921491+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.101:0/855824016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: cluster 2026-03-09T15:56:34.923813+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: cluster 2026-03-09T15:56:34.923813+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.926142+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 
192.168.123.101:0/2133398005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.926142+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 192.168.123.101:0/2133398005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.926223+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.101:0/3705759561' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.926223+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.101:0/3705759561' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.969061+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 192.168.123.101:0/388725981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.969061+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 192.168.123.101:0/388725981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.969190+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.969190+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.969283+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.969283+0000 mon.a (mon.0) 891 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.977319+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.977319+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.977509+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 192.168.123.101:0/2782918641' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.977509+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 192.168.123.101:0/2782918641' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.977579+0000 mon.a (mon.0) 894 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.977579+0000 mon.a (mon.0) 894 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.977632+0000 mon.a (mon.0) 895 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.977632+0000 mon.a (mon.0) 895 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.988685+0000 mon.a (mon.0) 896 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.988685+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.988885+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.988885+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989247+0000 mon.a (mon.0) 898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989247+0000 mon.a (mon.0) 898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989386+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989386+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989561+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989561+0000 mon.a (mon.0) 900 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989678+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989678+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989782+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989782+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989916+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.989916+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990091+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990091+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990234+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990234+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990366+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990366+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990474+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990474+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990581+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990581+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990730+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990730+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990855+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:36 vm01 bash[28152]: audit 2026-03-09T15:56:34.990855+0000 mon.a (mon.0) 910 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: cluster 2026-03-09T15:56:34.661691+0000 mgr.y (mgr.14520) 103 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: cluster 2026-03-09T15:56:34.661691+0000 mgr.y (mgr.14520) 103 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.864932+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.864932+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.865054+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.865054+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm01-60464-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.865151+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.865151+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm01-60504-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.901417+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 
192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.901417+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.904660+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.904660+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.904787+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.904787+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.909941+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.101:0/3617549194' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.909941+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.101:0/3617549194' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.910080+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 
192.168.123.101:0/2302588071' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.910080+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.101:0/2302588071' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.913812+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.101:0/3715895120' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.913812+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.101:0/3715895120' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.915489+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.101:0/3748374573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.915489+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.101:0/3748374573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.916067+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.101:0/1704571248' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.916067+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.101:0/1704571248' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.916173+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 192.168.123.101:0/1746178058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.916173+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 
192.168.123.101:0/1746178058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.917373+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.101:0/1487994880' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.917373+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.101:0/1487994880' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.917969+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.101:0/1843098342' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.917969+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.101:0/1843098342' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.918363+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.101:0/475272767' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.918363+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.101:0/475272767' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.919125+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.101:0/3172791290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.919125+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.101:0/3172791290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.921048+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 
192.168.123.101:0/2877949916' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.921048+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.101:0/2877949916' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.921491+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.101:0/855824016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.921491+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.101:0/855824016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: cluster 2026-03-09T15:56:34.923813+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: cluster 2026-03-09T15:56:34.923813+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.926142+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 192.168.123.101:0/2133398005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.926142+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 192.168.123.101:0/2133398005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.926223+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.101:0/3705759561' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.926223+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.101:0/3705759561' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.969061+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 
192.168.123.101:0/388725981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.969061+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 192.168.123.101:0/388725981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.969190+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.969190+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.969283+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.969283+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.977319+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.977319+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.977509+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 192.168.123.101:0/2782918641' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.977509+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 
192.168.123.101:0/2782918641' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.977579+0000 mon.a (mon.0) 894 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.977579+0000 mon.a (mon.0) 894 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.977632+0000 mon.a (mon.0) 895 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.977632+0000 mon.a (mon.0) 895 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.988685+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.988685+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.988885+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.988885+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989247+0000 mon.a (mon.0) 898 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989247+0000 mon.a (mon.0) 898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989386+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989386+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989561+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989561+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989678+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989678+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989782+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989782+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989916+0000 mon.a (mon.0) 903 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.989916+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990091+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990091+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990234+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990234+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990366+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990366+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990474+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990474+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990581+0000 mon.a (mon.0) 908 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990581+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990730+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990730+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990855+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:36 vm01 bash[20728]: audit 2026-03-09T15:56:34.990855+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [==========] Running 12 tests from 1 test suite. 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [----------] Global test environment set-up. 
2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [----------] 12 tests from AsioRados 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncReadCallback 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncReadCallback (0 ms) 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncReadFuture 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncReadFuture (1 ms) 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncReadYield 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncReadYield (0 ms) 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteCallback 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncWriteCallback (200 ms) 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteFuture 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncWriteFuture (54 ms) 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteYield 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncWriteYield (15 ms) 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationCallback 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationCallback (1 ms) 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationFuture 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationFuture (0 ms) 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationYield 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationYield (1 ms) 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationCallback 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationCallback (5 ms) 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationFuture 2026-03-09T15:56:36.775 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationFuture (13 ms) 2026-03-09T15:56:36.776 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationYield 2026-03-09T15:56:36.776 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationYield (13 ms) 2026-03-09T15:56:36.776 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [----------] 12 tests from AsioRados (303 ms total) 2026-03-09T15:56:36.776 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: 2026-03-09T15:56:36.776 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [----------] Global test environment tear-down 2026-03-09T15:56:36.776 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [==========] 12 tests from 1 test suite ran. (2667 ms total) 2026-03-09T15:56:36.776 INFO:tasks.workunit.client.0.vm01.stdout: api_asio: [ PASSED ] 12 tests. 
2026-03-09T15:56:36.954 INFO:tasks.workunit.client.0.vm01.stdout: cls: Running main() from gmock_main.cc 2026-03-09T15:56:36.954 INFO:tasks.workunit.client.0.vm01.stdout: cls: [==========] Running 1 test from 1 test suite. 2026-03-09T15:56:36.954 INFO:tasks.workunit.client.0.vm01.stdout: cls: [----------] Global test environment set-up. 2026-03-09T15:56:36.954 INFO:tasks.workunit.client.0.vm01.stdout: cls: [----------] 1 test from NeoRadosCls 2026-03-09T15:56:36.954 INFO:tasks.workunit.client.0.vm01.stdout: cls: [ RUN ] NeoRadosCls.DNE 2026-03-09T15:56:36.954 INFO:tasks.workunit.client.0.vm01.stdout: cls: [ OK ] NeoRadosCls.DNE (2372 ms) 2026-03-09T15:56:36.954 INFO:tasks.workunit.client.0.vm01.stdout: cls: [----------] 1 test from NeoRadosCls (2372 ms total) 2026-03-09T15:56:36.954 INFO:tasks.workunit.client.0.vm01.stdout: cls: 2026-03-09T15:56:36.954 INFO:tasks.workunit.client.0.vm01.stdout: cls: [----------] Global test environment tear-down 2026-03-09T15:56:36.954 INFO:tasks.workunit.client.0.vm01.stdout: cls: [==========] 1 test from 1 test suite ran. (2373 ms total) 2026-03-09T15:56:36.954 INFO:tasks.workunit.client.0.vm01.stdout: cls: [ PASSED ] 1 test. 2026-03-09T15:56:36.962 INFO:tasks.workunit.client.0.vm01.stdout: handler_error: Running main() from gmock_main.cc 2026-03-09T15:56:36.962 INFO:tasks.workunit.client.0.vm01.stdout: handler_error: [==========] Running 1 test from 1 test suite. 2026-03-09T15:56:36.962 INFO:tasks.workunit.client.0.vm01.stdout: handler_error: [----------] Global test environment set-up. 2026-03-09T15:56:36.962 INFO:tasks.workunit.client.0.vm01.stdout: handler_error: [----------] 1 test from neocls_handler_error 2026-03-09T15:56:36.962 INFO:tasks.workunit.client.0.vm01.stdout: handler_error: [ RUN ] neocls_handler_error.test_handler_error 2026-03-09T15:56:36.962 INFO:tasks.workunit.client.0.vm01.stdout: handler_error: [ OK ] neocls_handler_error.test_handler_error (2352 ms) 2026-03-09T15:56:36.962 INFO:tasks.workunit.client.0.vm01.stdout: handler_error: [----------] 1 test from neocls_handler_error (2352 ms total) 2026-03-09T15:56:36.962 INFO:tasks.workunit.client.0.vm01.stdout: handler_error: 2026-03-09T15:56:36.962 INFO:tasks.workunit.client.0.vm01.stdout: handler_error: [----------] Global test environment tear-down 2026-03-09T15:56:36.962 INFO:tasks.workunit.client.0.vm01.stdout: handler_error: [==========] 1 test from 1 test suite ran. (2352 ms total) 2026-03-09T15:56:36.962 INFO:tasks.workunit.client.0.vm01.stdout: handler_error: [ PASSED ] 1 test. 2026-03-09T15:56:36.986 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: Running main() from gmock_main.cc 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [==========] Running 3 tests from 1 test suite. 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [----------] Global test environment set-up. 
2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.MonDescribePP 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [ OK ] LibRadosCmd.MonDescribePP (47 ms) 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.OSDCmdPP 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [ OK ] LibRadosCmd.OSDCmdPP (46 ms) 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.PGCmdPP 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [ OK ] LibRadosCmd.PGCmdPP (2506 ms) 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd (2599 ms total) 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [----------] Global test environment tear-down 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [==========] 3 tests from 1 test suite ran. (2599 ms total) 2026-03-09T15:56:36.987 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd_pp: [ PASSED ] 3 tests. 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: Running main() from gmock_main.cc 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [==========] Running 4 tests from 1 test suite. 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [----------] Global test environment set-up. 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [----------] 4 tests from LibRadosCmd 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [ RUN ] LibRadosCmd.MonDescribe 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [ OK ] LibRadosCmd.MonDescribe (46 ms) 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [ RUN ] LibRadosCmd.OSDCmd 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [ OK ] LibRadosCmd.OSDCmd (32 ms) 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [ RUN ] LibRadosCmd.PGCmd 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [ OK ] LibRadosCmd.PGCmd (2537 ms) 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [ RUN ] LibRadosCmd.WatchLog 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.872141+0000 mon.a [INF] from='client.? 192.168.123.101:0/388725981' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.872832+0000 mon.a [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.872862+0000 mon.a [INF] from='client.? 
192.168.123.101:0/2782918641' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873374+0000 mon.a [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873403+0000 mon.a [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873569+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873654+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873680+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873721+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873748+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873807+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.094 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873841+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.872141+0000 mon.a (mon.0) 911 : audit [INF] from='client.? 192.168.123.101:0/388725981' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.872141+0000 mon.a (mon.0) 911 : audit [INF] from='client.? 
192.168.123.101:0/388725981' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.872832+0000 mon.a (mon.0) 912 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.872832+0000 mon.a (mon.0) 912 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.872862+0000 mon.a (mon.0) 913 : audit [INF] from='client.? 192.168.123.101:0/2782918641' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.872862+0000 mon.a (mon.0) 913 : audit [INF] from='client.? 192.168.123.101:0/2782918641' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873374+0000 mon.a (mon.0) 914 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873374+0000 mon.a (mon.0) 914 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873403+0000 mon.a (mon.0) 915 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873403+0000 mon.a (mon.0) 915 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873569+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873569+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873654+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873654+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873680+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873680+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873721+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873721+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873748+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873748+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873807+0000 mon.a (mon.0) 921 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873807+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873841+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873841+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873896+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873896+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873944+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873944+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873999+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.873999+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.874062+0000 mon.a (mon.0) 926 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.874062+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.874114+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.874114+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.874182+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.874182+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.874232+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:35.874232+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: cluster 2026-03-09T15:56:35.947126+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: cluster 2026-03-09T15:56:35.947126+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:36.917029+0000 mon.a (mon.0) 931 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:36.917029+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:36.917074+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:36.917074+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:36.917104+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:36.917104+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: cluster 2026-03-09T15:56:36.935380+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: cluster 2026-03-09T15:56:36.935380+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:36.966332+0000 mon.c (mon.2) 46 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:36.966332+0000 mon.c (mon.2) 46 : audit [INF] from='client.? 
192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:36.972593+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:37 vm09 bash[22983]: audit 2026-03-09T15:56:36.972593+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.872141+0000 mon.a (mon.0) 911 : audit [INF] from='client.? 192.168.123.101:0/388725981' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.872141+0000 mon.a (mon.0) 911 : audit [INF] from='client.? 192.168.123.101:0/388725981' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.872832+0000 mon.a (mon.0) 912 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.872832+0000 mon.a (mon.0) 912 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.872862+0000 mon.a (mon.0) 913 : audit [INF] from='client.? 192.168.123.101:0/2782918641' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.872862+0000 mon.a (mon.0) 913 : audit [INF] from='client.? 192.168.123.101:0/2782918641' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873374+0000 mon.a (mon.0) 914 : audit [INF] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873374+0000 mon.a (mon.0) 914 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873403+0000 mon.a (mon.0) 915 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873403+0000 mon.a (mon.0) 915 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873569+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873569+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873654+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873654+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873680+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873680+0000 mon.a (mon.0) 918 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873721+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873721+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873748+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873748+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873807+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873807+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873841+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873841+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873896+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873896+0000 mon.a (mon.0) 923 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873944+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873944+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873999+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.873999+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.874062+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.874062+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.874114+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.874114+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.874182+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.874182+0000 mon.a (mon.0) 928 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.874232+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:35.874232+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: cluster 2026-03-09T15:56:35.947126+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T15:56:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: cluster 2026-03-09T15:56:35.947126+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:36.917029+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:36.917029+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:36.917074+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:36.917074+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:36.917104+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:36.917104+0000 mon.a (mon.0) 933 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: cluster 2026-03-09T15:56:36.935380+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: cluster 2026-03-09T15:56:36.935380+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:36.966332+0000 mon.c (mon.2) 46 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:36.966332+0000 mon.c (mon.2) 46 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:36.972593+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:37 vm01 bash[28152]: audit 2026-03-09T15:56:36.972593+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.872141+0000 mon.a (mon.0) 911 : audit [INF] from='client.? 192.168.123.101:0/388725981' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.872141+0000 mon.a (mon.0) 911 : audit [INF] from='client.? 192.168.123.101:0/388725981' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm01-59602-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.872832+0000 mon.a (mon.0) 912 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.872832+0000 mon.a (mon.0) 912 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.872862+0000 mon.a (mon.0) 913 : audit [INF] from='client.? 192.168.123.101:0/2782918641' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.872862+0000 mon.a (mon.0) 913 : audit [INF] from='client.? 192.168.123.101:0/2782918641' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm01-59715-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873374+0000 mon.a (mon.0) 914 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873374+0000 mon.a (mon.0) 914 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm01-59696-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873403+0000 mon.a (mon.0) 915 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873403+0000 mon.a (mon.0) 915 : audit [INF] from='client.? 192.168.123.101:0/2807406741' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm01-59948-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873569+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873569+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm01-59965-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873654+0000 mon.a (mon.0) 917 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873654+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm01-59988-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873680+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873680+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm01-59821-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873721+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873721+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm01-59618-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873748+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873748+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm01-59610-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873807+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873807+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm01-59640-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873841+0000 mon.a (mon.0) 922 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873841+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm01-59735-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873896+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873896+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873944+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873944+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873999+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.873999+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.874062+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.874062+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.874114+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.874114+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.874182+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.874182+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.874232+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:35.874232+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: cluster 2026-03-09T15:56:35.947126+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: cluster 2026-03-09T15:56:35.947126+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:36.917029+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:36.917029+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:36.917074+0000 mon.a (mon.0) 932 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:36.917074+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:36.917104+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:36.917104+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: cluster 2026-03-09T15:56:36.935380+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: cluster 2026-03-09T15:56:36.935380+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:36.966332+0000 mon.c (mon.2) 46 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:36.966332+0000 mon.c (mon.2) 46 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:36.972593+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:37 vm01 bash[20728]: audit 2026-03-09T15:56:36.972593+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873896+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "L api_list: [==========] Running 11 tests from 3 test suites. 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [----------] Global test environment set-up. 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [----------] 7 tests from LibRadosList 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ RUN ] LibRadosList.ListObjects 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ OK ] LibRadosList.ListObjects (876 ms) 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ RUN ] LibRadosList.ListObjectsZeroInName 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ OK ] LibRadosList.ListObjectsZeroInName (68 ms) 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ RUN ] LibRadosList.ListObjectsNS 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: myset foo1,foo2,foo3 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo1 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo2 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo3 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: myset foo1,foo4,foo5 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo4 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo5 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo1 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: myset foo6,foo7 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo7 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo6 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: ns1:foo4 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: ns1:foo5 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: ns2:foo7 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: ns2:foo6 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: ns1:foo1 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: :foo1 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: :foo2 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: :foo3 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ OK ] LibRadosList.ListObjectsNS (674 ms) 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ RUN ] LibRadosList.ListObjectsStart 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 1 0 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 10 0 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 13 0 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 7 0 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 14 0 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 0 0 2026-03-09T15:56:37.797 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 15 0 2026-03-09T15:56:37.797 
INFO:tasks.workunit.client.0.vm01.stdout: api_list: 11 0 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 5 0 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 8 0 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 6 0 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 3 0 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 4 0 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 12 0 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 9 0 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 2 0 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ OK ] LibRadosList.ListObjectsStart (101 ms) 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ RUN ] LibRadosList.ListObjectsCursor 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: x cursor=MIN 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=1 cursor=14:02547ec2:::1:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=10 cursor=14:52ea6a34:::10:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=13 cursor=14:566253c9:::13:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=7 cursor=14:5c6b0b28:::7:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=14 cursor=14:62a1935d:::14:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=0 cursor=14:6cac518f:::0:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=15 cursor=14:863748b0:::15:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=11 cursor=14:89d3ae78:::11:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=5 cursor=14:b29083e3:::5:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=8 cursor=14:bd63b0f1:::8:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=6 cursor=14:c4fdafeb:::6:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=3 cursor=14:cfc208b3:::3:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=4 cursor=14:d83876eb:::4:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=12 cursor=14:de5d7c5f:::12:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=9 cursor=14:e960b815:::9:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > oid=2 cursor=14:f905c69b:::2:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: FIRST> seek to MIN oid=1 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=1 cursor=14:02547ec2:::1:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:02547ec2:::1:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:02547ec2:::1:head -> 1 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=10 cursor=14:52ea6a34:::10:head 2026-03-09T15:56:37.798 
INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:52ea6a34:::10:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:52ea6a34:::10:head -> 10 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=13 cursor=14:566253c9:::13:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:566253c9:::13:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:566253c9:::13:head -> 13 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=7 cursor=14:5c6b0b28:::7:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:5c6b0b28:::7:head 2026-03-09T15:56:37.798 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:5c6b0b28:::7:head -> 7 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=14 cursor=14:62a1935d:::14:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:62a1935d:::14:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:62a1935d:::14:head -> 14 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=0 cursor=14:6cac518f:::0:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:6cac518f:::0:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:6cac518f:::0:head -> 0 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=15 cursor=14:863748b0:::15:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:863748b0:::15:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:863748b0:::15:head -> 15 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=11 cursor=14:89d3ae78:::11:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:89d3ae78:::11:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:89d3ae78:::11:head -> 11 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=5 cursor=14:b29083e3:::5:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:b29083e3:::5:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:b29083e3:::5:head -> 5 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=8 cursor=14:bd63b0f1:::8:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:bd63b0f1:::8:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:bd63b0f1:::8:head -> 8 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=6 cursor=14:c4fdafeb:::6:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:c4fdafeb:::6:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:c4fdafeb:::6:head -> 6 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=3 cursor=14:cfc208b3:::3:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:cfc208b3:::3:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:cfc208b3:::3:head -> 3 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=4 cursor=14:d83876eb:::4:head 
2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:d83876eb:::4:head 2026-03-09T15:56:37.872 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:d83876eb:::4:head -> 4 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=12 cursor=14:de5d7c5f:::12:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:de5d7c5f:::12:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:de5d7c5f:::12:head -> 12 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=9 cursor=14:e960b815:::9:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:e960b815:::9:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:e960b815:::9:head -> 9 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : oid=2 cursor=14:f905c69b:::2:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:f905c69b:::2:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:f905c69b:::2:head -> 2 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:f905c69b:::2:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:f905c69b:::2:head expected=14:f905c69b:::2:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:f905c69b:::2:head -> 2 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=2 expected=2 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:c4fdafeb:::6:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:c4fdafeb:::6:head expected=14:c4fdafeb:::6:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:c4fdafeb:::6:head -> 6 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=6 expected=6 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:bd63b0f1:::8:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:bd63b0f1:::8:head expected=14:bd63b0f1:::8:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:bd63b0f1:::8:head -> 8 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=8 expected=8 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:b29083e3:::5:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:b29083e3:::5:head expected=14:b29083e3:::5:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:b29083e3:::5:head -> 5 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=5 expected=5 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:52ea6a34:::10:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:52ea6a34:::10:head expected=14:52ea6a34:::10:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:52ea6a34:::10:head -> 10 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=10 expected=10 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:e960b815:::9:head 2026-03-09T15:56:37.873 
INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:e960b815:::9:head expected=14:e960b815:::9:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:e960b815:::9:head -> 9 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=9 expected=9 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:de5d7c5f:::12:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:de5d7c5f:::12:head expected=14:de5d7c5f:::12:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:de5d7c5f:::12:head -> 12 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=12 expected=12 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:d83876eb:::4:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:d83876eb:::4:head expected=14:d83876eb:::4:head 2026-03-09T15:56:37.873 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:d83876eb:::4:head -> 4 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry= api_c_read_operations: Running main() from gmock_main.cc 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [==========] Running 17 tests from 1 test suite. 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [----------] Global test environment set-up. 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.NewDelete 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.NewDelete (0 ms) 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.SetOpFlags 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.SetOpFlags (408 ms) 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertExists 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertExists (81 ms) 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertVersion 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertVersion (332 ms) 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpXattr 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpXattr (132 ms) 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Read 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Read (43 ms) 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Checksum 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Checksum (170 ms) 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.RWOrderedRead 2026-03-09T15:56:38.224 
INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.RWOrderedRead (259 ms) 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ShortRead 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ShortRead (3 ms) 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Exec 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Exec (20 ms) 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ExecUserBuf 2026-03-09T15:56:38.224 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ExecUserBuf (22 ms) 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat (22 ms) 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat2 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat2 (9 ms) 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Omap 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Omap (22 ms) 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.OmapNuls 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.OmapNuls (50 ms) 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.GetXattrs 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.GetXattrs (16 ms) 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpExt 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpExt (5 ms) 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest (1597 ms total) 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [----------] Global test environment tear-down 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [==========] 17 tests from 1 test suite ran. (3737 ms total) 2026-03-09T15:56:38.225 INFO:tasks.workunit.client.0.vm01.stdout: api_c_read_operations: [ PASSED ] 17 tests. 2026-03-09T15:56:38.278 INFO:tasks.workunit.client.0.vm01.stdout:ibRadosSnapshotsPP_vm01-59908-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:38.278 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873944+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.873999+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm01-59878-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.874062+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60056-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.874114+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm01-60203-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.874182+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:35.874232+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:36.917029+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm01-60504-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:36.917074+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm01-60464-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:36.917104+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm01-60007-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:36.966332+0000 mon.c [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:36.972593+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.011767+0000 mon.a [WRN] Health check failed: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.048088+0000 mon.b [INF] from='client.? 
192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.056268+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.279 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.057700+0000 mon.b [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:36.226591+0000 mgr.y (mgr.14520) 104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:36.226591+0000 mgr.y (mgr.14520) 104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: cluster 2026-03-09T15:56:36.662763+0000 mgr.y (mgr.14520) 105 : cluster [DBG] pgmap v64: 1124 pgs: 992 unknown, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: cluster 2026-03-09T15:56:36.662763+0000 mgr.y (mgr.14520) 105 : cluster [DBG] pgmap v64: 1124 pgs: 992 unknown, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: cluster 2026-03-09T15:56:37.011767+0000 mon.a (mon.0) 936 : cluster [WRN] Health check failed: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: cluster 2026-03-09T15:56:37.011767+0000 mon.a (mon.0) 936 : cluster [WRN] Health check failed: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.048088+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.048088+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.056268+0000 mon.a (mon.0) 937 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.056268+0000 mon.a (mon.0) 937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.057700+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.057700+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.094089+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.094089+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: cluster 2026-03-09T15:56:37.097109+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: cluster 2026-03-09T15:56:37.097109+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.110436+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.110436+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.112259+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.112259+0000 mon.a (mon.0) 938 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.117112+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.117112+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.174135+0000 mon.a (mon.0) 940 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.174135+0000 mon.a (mon.0) 940 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.187915+0000 mon.a (mon.0) 941 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.187915+0000 mon.a (mon.0) 941 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.196670+0000 mon.a (mon.0) 942 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.196670+0000 mon.a (mon.0) 942 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.226137+0000 mon.a (mon.0) 943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:38 vm09 bash[22983]: audit 2026-03-09T15:56:37.226137+0000 mon.a (mon.0) 943 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:36.226591+0000 mgr.y (mgr.14520) 104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:36.226591+0000 mgr.y (mgr.14520) 104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: cluster 2026-03-09T15:56:36.662763+0000 mgr.y (mgr.14520) 105 : cluster [DBG] pgmap v64: 1124 pgs: 992 unknown, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: cluster 2026-03-09T15:56:36.662763+0000 mgr.y (mgr.14520) 105 : cluster [DBG] pgmap v64: 1124 pgs: 992 unknown, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: cluster 2026-03-09T15:56:37.011767+0000 mon.a (mon.0) 936 : cluster [WRN] Health check failed: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: cluster 2026-03-09T15:56:37.011767+0000 mon.a (mon.0) 936 : cluster [WRN] Health check failed: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.048088+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.048088+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.056268+0000 mon.a (mon.0) 937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.056268+0000 mon.a (mon.0) 937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.057700+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 
192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.057700+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.094089+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.094089+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: cluster 2026-03-09T15:56:37.097109+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: cluster 2026-03-09T15:56:37.097109+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.110436+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.110436+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.112259+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.112259+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.117112+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.117112+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.174135+0000 mon.a (mon.0) 940 : audit [INF] from='client.? 
192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.174135+0000 mon.a (mon.0) 940 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.187915+0000 mon.a (mon.0) 941 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.187915+0000 mon.a (mon.0) 941 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.196670+0000 mon.a (mon.0) 942 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.196670+0000 mon.a (mon.0) 942 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.226137+0000 mon.a (mon.0) 943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:38 vm01 bash[28152]: audit 2026-03-09T15:56:37.226137+0000 mon.a (mon.0) 943 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:36.226591+0000 mgr.y (mgr.14520) 104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:36.226591+0000 mgr.y (mgr.14520) 104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: cluster 2026-03-09T15:56:36.662763+0000 mgr.y (mgr.14520) 105 : cluster [DBG] pgmap v64: 1124 pgs: 992 unknown, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: cluster 2026-03-09T15:56:36.662763+0000 mgr.y (mgr.14520) 105 : cluster [DBG] pgmap v64: 1124 pgs: 992 unknown, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: cluster 2026-03-09T15:56:37.011767+0000 mon.a (mon.0) 936 : cluster [WRN] Health check failed: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: cluster 2026-03-09T15:56:37.011767+0000 mon.a (mon.0) 936 : cluster [WRN] Health check failed: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.048088+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.048088+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.056268+0000 mon.a (mon.0) 937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.056268+0000 mon.a (mon.0) 937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.057700+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 
192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.057700+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.094089+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.094089+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: cluster 2026-03-09T15:56:37.097109+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: cluster 2026-03-09T15:56:37.097109+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.110436+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.110436+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.112259+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.112259+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.117112+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.117112+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.174135+0000 mon.a (mon.0) 940 : audit [INF] from='client.? 
192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.174135+0000 mon.a (mon.0) 940 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.187915+0000 mon.a (mon.0) 941 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.187915+0000 mon.a (mon.0) 941 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:38.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.196670+0000 mon.a (mon.0) 942 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.196670+0000 mon.a (mon.0) 942 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.226137+0000 mon.a (mon.0) 943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:38.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:38 vm01 bash[20728]: audit 2026-03-09T15:56:37.226137+0000 mon.a (mon.0) 943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.150595+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.150595+0000 mon.a (mon.0) 944 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.150671+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.150671+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.150706+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.150706+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: cluster 2026-03-09T15:56:38.173086+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: cluster 2026-03-09T15:56:38.173086+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.174331+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.174331+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.180386+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.101:0/1527089842' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.180386+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 
192.168.123.101:0/1527089842' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.182333+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.182333+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.182732+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.101:0/2786720065' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.182732+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.101:0/2786720065' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.184989+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.184989+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.186013+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.101:0/942573747' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.186013+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.101:0/942573747' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.202257+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 
192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.202257+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.203061+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.203061+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.217362+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.217362+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.217439+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.217439+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.217497+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.217497+0000 mon.a (mon.0) 952 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.217556+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.217556+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.217610+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.217610+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.252101+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.252101+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.294221+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.294221+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: cluster 2026-03-09T15:56:38.294770+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: cluster 2026-03-09T15:56:38.294770+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.367123+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.367123+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 
192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.376032+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.376032+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.377991+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.377991+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.408599+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.408599+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.410732+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.410732+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.410824+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.410824+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.412390+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 
192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.412390+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.413554+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.413554+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.413948+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.413948+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.414397+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.414397+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.415823+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.415823+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.416224+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.416224+0000 mon.a (mon.0) 961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.416456+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.416456+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: cluster 2026-03-09T15:56:38.663558+0000 mgr.y (mgr.14520) 106 : cluster [DBG] pgmap v67: 820 pgs: 24 creating+peering, 597 unknown, 199 active+clean; 456 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 12 KiB/s wr, 21 op/s 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: cluster 2026-03-09T15:56:38.663558+0000 mgr.y (mgr.14520) 106 : cluster [DBG] pgmap v67: 820 pgs: 24 creating+peering, 597 unknown, 199 active+clean; 456 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 12 KiB/s wr, 21 op/s 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.819441+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:38.819441+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.157843+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.157843+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.157895+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.157895+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.150595+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.150595+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.150671+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.150671+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.150706+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.150706+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: cluster 2026-03-09T15:56:38.173086+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: cluster 2026-03-09T15:56:38.173086+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.174331+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 
192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.174331+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.180386+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.101:0/1527089842' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.180386+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.101:0/1527089842' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.182333+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.182333+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.182732+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.101:0/2786720065' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.182732+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.101:0/2786720065' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.184989+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.184989+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 
192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.186013+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.101:0/942573747' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.186013+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.101:0/942573747' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.202257+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.202257+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.203061+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.203061+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.217362+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.217362+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.217439+0000 mon.a (mon.0) 951 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.217439+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.217497+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.217497+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.217556+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.217556+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.217610+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.217610+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.252101+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.252101+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.294221+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.294221+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 
192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: cluster 2026-03-09T15:56:38.294770+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: cluster 2026-03-09T15:56:38.294770+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.367123+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.367123+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.376032+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.376032+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.377991+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.377991+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.408599+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.408599+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.410732+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.410732+0000 mon.a (mon.0) 957 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.410824+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.410824+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.412390+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.412390+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.413554+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.413554+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.413948+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.413948+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.414397+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.414397+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.415823+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 
192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.415823+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.416224+0000 mon.a (mon.0) 961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.931 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.416224+0000 mon.a (mon.0) 961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.416456+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.416456+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: cluster 2026-03-09T15:56:38.663558+0000 mgr.y (mgr.14520) 106 : cluster [DBG] pgmap v67: 820 pgs: 24 creating+peering, 597 unknown, 199 active+clean; 456 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 12 KiB/s wr, 21 op/s 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: cluster 2026-03-09T15:56:38.663558+0000 mgr.y (mgr.14520) 106 : cluster [DBG] pgmap v67: 820 pgs: 24 creating+peering, 597 unknown, 199 active+clean; 456 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 12 KiB/s wr, 21 op/s 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.819441+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:38.819441+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.157843+0000 mon.a (mon.0) 964 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.157843+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.157895+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.157895+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.157925+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.157925+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.157957+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.157957+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.157991+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.157991+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.158026+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.158026+0000 mon.a (mon.0) 969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.158063+0000 mon.a (mon.0) 970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.158063+0000 mon.a (mon.0) 970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.177214+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.177214+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.177521+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.177521+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: cluster 2026-03-09T15:56:39.191141+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: cluster 2026-03-09T15:56:39.191141+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.196272+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.196272+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.196583+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.196583+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.215871+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.215871+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.222187+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.222187+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.226664+0000 mon.c (mon.2) 55 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.226664+0000 mon.c (mon.2) 55 : audit [INF] from='client.? 
192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.227266+0000 mon.a (mon.0) 974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.227266+0000 mon.a (mon.0) 974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.229642+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.229642+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.237733+0000 mon.c (mon.2) 56 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.237733+0000 mon.c (mon.2) 56 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.256192+0000 mon.a (mon.0) 976 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.256192+0000 mon.a (mon.0) 976 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.256383+0000 mon.a (mon.0) 977 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:39 vm01 bash[28152]: audit 2026-03-09T15:56:39.256383+0000 mon.a (mon.0) 977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.157925+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.157925+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.157957+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.157957+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.157991+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.157991+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.158026+0000 mon.a (mon.0) 969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.158026+0000 mon.a (mon.0) 969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.158063+0000 mon.a (mon.0) 970 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.158063+0000 mon.a (mon.0) 970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.177214+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.177214+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.177521+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.177521+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: cluster 2026-03-09T15:56:39.191141+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: cluster 2026-03-09T15:56:39.191141+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.196272+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.196272+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.196583+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.196583+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.215871+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.215871+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.222187+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.222187+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.226664+0000 mon.c (mon.2) 55 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.226664+0000 mon.c (mon.2) 55 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.227266+0000 mon.a (mon.0) 974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.227266+0000 mon.a (mon.0) 974 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.229642+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.229642+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.237733+0000 mon.c (mon.2) 56 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.237733+0000 mon.c (mon.2) 56 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.256192+0000 mon.a (mon.0) 976 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.256192+0000 mon.a (mon.0) 976 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.256383+0000 mon.a (mon.0) 977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:39.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:39 vm01 bash[20728]: audit 2026-03-09T15:56:39.256383+0000 mon.a (mon.0) 977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.150595+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.150595+0000 mon.a (mon.0) 944 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.150671+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.150671+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.150706+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.150706+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: cluster 2026-03-09T15:56:38.173086+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: cluster 2026-03-09T15:56:38.173086+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.174331+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.174331+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.180386+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.101:0/1527089842' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.180386+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 
192.168.123.101:0/1527089842' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.182333+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.182333+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.182732+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.101:0/2786720065' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.182732+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.101:0/2786720065' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.184989+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.184989+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.186013+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.101:0/942573747' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.186013+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.101:0/942573747' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.202257+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 
192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.202257+0000 mon.a (mon.0) 948 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.203061+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.203061+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.217362+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.217362+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.217439+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.217439+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.217497+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.217497+0000 mon.a (mon.0) 952 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.217556+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.217556+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.217610+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.217610+0000 mon.a (mon.0) 954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.252101+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.252101+0000 mon.a (mon.0) 955 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.294221+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.294221+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: cluster 2026-03-09T15:56:38.294770+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: cluster 2026-03-09T15:56:38.294770+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-09T15:56:40.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.367123+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.367123+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 
192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.376032+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.376032+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.377991+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.377991+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.408599+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.408599+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.410732+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.410732+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.410824+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.410824+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.412390+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 
192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.412390+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.413554+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.413554+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.413948+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.413948+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.414397+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.414397+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.415823+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.415823+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.416224+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.416224+0000 mon.a (mon.0) 961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.416456+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.416456+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: cluster 2026-03-09T15:56:38.663558+0000 mgr.y (mgr.14520) 106 : cluster [DBG] pgmap v67: 820 pgs: 24 creating+peering, 597 unknown, 199 active+clean; 456 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 12 KiB/s wr, 21 op/s 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: cluster 2026-03-09T15:56:38.663558+0000 mgr.y (mgr.14520) 106 : cluster [DBG] pgmap v67: 820 pgs: 24 creating+peering, 597 unknown, 199 active+clean; 456 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 12 KiB/s wr, 21 op/s 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.819441+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:38.819441+0000 mon.a (mon.0) 963 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.157843+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.157843+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.157895+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.157895+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.157925+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.157925+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.157957+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.157957+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.157991+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.157991+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm01-59602-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.158026+0000 mon.a (mon.0) 969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.158026+0000 mon.a (mon.0) 969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm01-59618-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.158063+0000 mon.a (mon.0) 970 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.158063+0000 mon.a (mon.0) 970 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.177214+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.177214+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.177521+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.177521+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 192.168.123.101:0/4104685227' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: cluster 2026-03-09T15:56:39.191141+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: cluster 2026-03-09T15:56:39.191141+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.196272+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.196272+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.196583+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.196583+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.215871+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.215871+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.222187+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.222187+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.226664+0000 mon.c (mon.2) 55 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.226664+0000 mon.c (mon.2) 55 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.227266+0000 mon.a (mon.0) 974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.227266+0000 mon.a (mon.0) 974 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:40.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.229642+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.229642+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.237733+0000 mon.c (mon.2) 56 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.237733+0000 mon.c (mon.2) 56 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.256192+0000 mon.a (mon.0) 976 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.256192+0000 mon.a (mon.0) 976 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.256383+0000 mon.a (mon.0) 977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:39 vm09 bash[22983]: audit 2026-03-09T15:56:39.256383+0000 mon.a (mon.0) 977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.094089+0000 mon.c [INF] from='client.? 192.168.123.101 list_parallel: process_1_[60266]: starting. 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_1_[60266]: creating pool ceph_test_rados_list_parallel.vm01-60257 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_1_[60266]: created object 0... 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_1_[60266]: created object 25... 
2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_1_[60266]: created object 49... 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_1_[60266]: finishing. 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_1_[60266]: shutting down. 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_2_[60267]: starting. 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_2_[60267]: listing objects. 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_2_[60267]: listed object 0... 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_2_[60267]: listed object 25... 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_2_[60267]: saw 50 objects 2026-03-09T15:56:40.589 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_2_[60267]: shutting down. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_3_[60935]: starting. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_3_[60935]: creating pool ceph_test_rados_list_parallel.vm01-60257 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_3_[60935]: created object 0... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_3_[60935]: created object 25... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_3_[60935]: created object 49... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_3_[60935]: finishing. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_3_[60935]: shutting down. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_4_[60936]: starting. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_4_[60936]: listing objects. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_4_[60936]: listed object 0... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_4_[60936]: listed object 25... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_4_[60936]: saw 46 objects 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_4_[60936]: shutting down. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_5_[60937]: starting. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_5_[60937]: removed 25 objects... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_5_[60937]: removed half of the objects 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_5_[60937]: removed 50 objects... 
2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_5_[60937]: removed 50 objects 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_5_[60937]: shutting down. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_6_[60983]: starting. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_6_[60983]: creating pool ceph_test_rados_list_parallel.vm01-60257 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_6_[60983]: created object 0... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_6_[60983]: created object 25... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_6_[60983]: created object 49... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_6_[60983]: finishing. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_6_[60983]: shutting down. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_7_[60984]: starting. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_7_[60984]: listing objects. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_7_[60984]: listed object 0... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_7_[60984]: listed object 25... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_7_[60984]: listed object 50... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_7_[60984]: saw 51 objects 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_7_[60984]: shutting down. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_8_[60985]: starting. 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_8_[60985]: added 25 objects... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_8_[60985]: added half of the objects 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_8_[60985]: added 50 objects... 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_8_[60985]: added 50 objects 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_8_[60985]: shutting down. 
2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.590 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.283583+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.283583+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.313978+0000 mon.a (mon.0) 978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.313978+0000 mon.a (mon.0) 978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.318423+0000 mon.a (mon.0) 979 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.318423+0000 mon.a (mon.0) 979 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.323362+0000 mon.a (mon.0) 980 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.323362+0000 mon.a (mon.0) 980 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.327085+0000 mon.a (mon.0) 981 : audit [INF] from='client.? 
192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.327085+0000 mon.a (mon.0) 981 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.365977+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.365977+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.583589+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.583589+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.283583+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.283583+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.313978+0000 mon.a (mon.0) 978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.313978+0000 mon.a (mon.0) 978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.318423+0000 mon.a (mon.0) 979 : audit [INF] from='client.? 
192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.318423+0000 mon.a (mon.0) 979 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.323362+0000 mon.a (mon.0) 980 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.323362+0000 mon.a (mon.0) 980 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.327085+0000 mon.a (mon.0) 981 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.327085+0000 mon.a (mon.0) 981 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.365977+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.365977+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.583589+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.583589+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.629033+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 
192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.629033+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.658307+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.658307+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.820297+0000 mon.a (mon.0) 984 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.820297+0000 mon.a (mon.0) 984 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.904010+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.904010+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.976619+0000 mon.a (mon.0) 986 : audit [DBG] from='client.? 192.168.123.101:0/3713197627' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:39.976619+0000 mon.a (mon.0) 986 : audit [DBG] from='client.? 192.168.123.101:0/3713197627' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.231949+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 
192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.231949+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.231985+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.231985+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:40.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232016+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232016+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232041+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232041+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232064+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232064+0000 mon.a (mon.0) 991 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232090+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232090+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232114+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232114+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232150+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.232150+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.241590+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.241590+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.241764+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 
192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.241764+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.278185+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.278185+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.285631+0000 mon.c (mon.2) 57 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.285631+0000 mon.c (mon.2) 57 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: cluster 2026-03-09T15:56:40.286444+0000 mon.a (mon.0) 995 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: cluster 2026-03-09T15:56:40.286444+0000 mon.a (mon.0) 995 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.303846+0000 mon.c (mon.2) 58 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.303846+0000 mon.c (mon.2) 58 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.314147+0000 mon.a (mon.0) 996 : audit [INF] from='client.? 
192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.314147+0000 mon.a (mon.0) 996 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.336595+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.336595+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.362057+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.362057+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.368104+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.368104+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.368252+0000 mon.a (mon.0) 999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.368252+0000 mon.a (mon.0) 999 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.368326+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.368326+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.428106+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.428106+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.440362+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:40 vm01 bash[20728]: audit 2026-03-09T15:56:40.440362+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.629033+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.629033+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.658307+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.658307+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.820297+0000 mon.a (mon.0) 984 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.820297+0000 mon.a (mon.0) 984 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.904010+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.904010+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.976619+0000 mon.a (mon.0) 986 : audit [DBG] from='client.? 192.168.123.101:0/3713197627' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:39.976619+0000 mon.a (mon.0) 986 : audit [DBG] from='client.? 192.168.123.101:0/3713197627' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.231949+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.231949+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.231985+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.231985+0000 mon.a (mon.0) 988 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:40.932 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232016+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232016+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232041+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232041+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232064+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232064+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232090+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232090+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232114+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232114+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 
192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232150+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.232150+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.241590+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.241590+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.241764+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.241764+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.278185+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.278185+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.285631+0000 mon.c (mon.2) 57 : audit [INF] from='client.? 
192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.285631+0000 mon.c (mon.2) 57 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: cluster 2026-03-09T15:56:40.286444+0000 mon.a (mon.0) 995 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: cluster 2026-03-09T15:56:40.286444+0000 mon.a (mon.0) 995 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.303846+0000 mon.c (mon.2) 58 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.303846+0000 mon.c (mon.2) 58 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.314147+0000 mon.a (mon.0) 996 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.314147+0000 mon.a (mon.0) 996 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.336595+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.336595+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.362057+0000 mon.a (mon.0) 997 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.362057+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.368104+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.368104+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.368252+0000 mon.a (mon.0) 999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.368252+0000 mon.a (mon.0) 999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:40.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.368326+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:40.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.368326+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:40.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.428106+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.428106+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:40.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.440362+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:40.934 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:40 vm01 bash[28152]: audit 2026-03-09T15:56:40.440362+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.112 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_9_[61205]: starting. 2026-03-09T15:56:41.112 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_9_[61205]: creating pool ceph_test_rados_list_parallel.vm01-60257 2026-03-09T15:56:41.112 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_9_[61205]: created object 0... 2026-03-09T15:56:41.112 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_9_[61205]: created object 25... 2026-03-09T15:56:41.112 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_9_[61205]: created object 49... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_9_[61205]: finishing. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_9_[61205]: shutting down. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_10_[61206]: starting. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_10_[61206]: listing objects. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_10_[61206]: listed object 0... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_10_[61206]: listed object 25... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_10_[61206]: listed object 50... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_10_[61206]: listed object 75... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_10_[61206]: saw 100 objects 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_10_[61206]: shutting down. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_11_[61207]: starting. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_11_[61207]: added 25 objects... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_11_[61207]: added half of the objects 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_11_[61207]: added 50 objects... 
2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_11_[61207]: added 50 objects 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_11_[61207]: shutting down. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_12_[61208]: starting. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_12_[61208]: added 25 objects... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_12_[61208]: added half of the objects 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_12_[61208]: added 50 objects... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_12_[61208]: added 50 objects 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_12_[61208]: shutting down. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_13_[61209]: starting. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_13_[61209]: removed 25 objects... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_13_[61209]: removed half of the objects 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_13_[61209]: removed 50 objects... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_13_[61209]: removed 50 objects 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_13_[61209]: shutting down. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_14_[61814]: starting. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_14_[61814]: creating pool ceph_test_rados_list_parallel.vm01-60257 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_14_[61814]: created object 0... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_14_[61814]: created object 25... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_14_[61814]: created object 49... 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_14_[61814]: finishing. 
2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_14_[61814]: shutting down. 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.113 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.114 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.114 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_15_[61815]: starting. 2026-03-09T15:56:41.114 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_15_[61815]: listing objects. 2026-03-09T15:56:41.114 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_15_[61815]: listed object 0... 2026-03-09T15:56:41.114 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_15_[61815]: listed object 25... 2026-03-09T15:56:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.283583+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.283583+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.313978+0000 mon.a (mon.0) 978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.313978+0000 mon.a (mon.0) 978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.318423+0000 mon.a (mon.0) 979 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.318423+0000 mon.a (mon.0) 979 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.323362+0000 mon.a (mon.0) 980 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.323362+0000 mon.a (mon.0) 980 : audit [INF] from='client.? 
192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.327085+0000 mon.a (mon.0) 981 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.327085+0000 mon.a (mon.0) 981 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.365977+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.365977+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.583589+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.583589+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.629033+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.629033+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.658307+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.658307+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.820297+0000 mon.a (mon.0) 984 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.820297+0000 mon.a (mon.0) 984 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.904010+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.904010+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.976619+0000 mon.a (mon.0) 986 : audit [DBG] from='client.? 192.168.123.101:0/3713197627' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:39.976619+0000 mon.a (mon.0) 986 : audit [DBG] from='client.? 192.168.123.101:0/3713197627' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.231949+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.231949+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm01-59965-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.231985+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.231985+0000 mon.a (mon.0) 988 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm01-59948-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232016+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232016+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm01-60464-1"}]': finished 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232041+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:41.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232041+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232064+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232064+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232090+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232090+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232114+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232114+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 
192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm01-59715-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232150+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.232150+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm01-59735-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.241590+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.241590+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.241764+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.241764+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.101:0/4283433277' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.278185+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.278185+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.285631+0000 mon.c (mon.2) 57 : audit [INF] from='client.? 
192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.285631+0000 mon.c (mon.2) 57 : audit [INF] from='client.? 192.168.123.101:0/3192211770' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: cluster 2026-03-09T15:56:40.286444+0000 mon.a (mon.0) 995 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: cluster 2026-03-09T15:56:40.286444+0000 mon.a (mon.0) 995 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.303846+0000 mon.c (mon.2) 58 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.303846+0000 mon.c (mon.2) 58 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.314147+0000 mon.a (mon.0) 996 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.314147+0000 mon.a (mon.0) 996 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.336595+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.336595+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.362057+0000 mon.a (mon.0) 997 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.362057+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.368104+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.368104+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.368252+0000 mon.a (mon.0) 999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.368252+0000 mon.a (mon.0) 999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.368326+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.368326+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.428106+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:41.135 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.428106+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:41.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.440362+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.136 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:40 vm09 bash[22983]: audit 2026-03-09T15:56:40.440362+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_15_[61815]: listed object 50... 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_15_[61815]: listed object 75... 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_15_[61815]: listed object 100... 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_15_[61815]: listed object 125... 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_15_[61815]: saw 150 objects 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_15_[61815]: shutting down. 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_16_[61816]: starting. 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_16_[61816]: added 25 objects... 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_16_[61816]: added half of the objects 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_16_[61816]: added 50 objects... 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_16_[61816]: added 50 objects 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: process_16_[61816]: shutting down. 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******************************* 2026-03-09T15:56:41.312 INFO:tasks.workunit.client.0.vm01.stdout: list_parallel: ******* SUCCESS ********** 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_1_[60292]: starting. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_1_[60292]: creating pool ceph_test_rados_open_pools_parallel.vm01-60269 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_1_[60292]: created object 0... 
2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_1_[60292]: created object 25... 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_1_[60292]: created object 49... 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_1_[60292]: finishing. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_1_[60292]: shutting down. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_2_[60293]: starting. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_2_[60293]: rados_pool_create. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_2_[60293]: rados_ioctx_create. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_2_[60293]: shutting down. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: ******************************* 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_3_[61058]: starting. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_3_[61058]: creating pool ceph_test_rados_open_pools_parallel.vm01-60269 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_3_[61058]: created object 0... 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_3_[61058]: created object 25... 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_3_[61058]: created object 49... 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_3_[61058]: finishing. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_3_[61058]: shutting down. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: ******************************* 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_4_[61059]: starting. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_4_[61059]: rados_pool_create. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_4_[61059]: rados_ioctx_create. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: process_4_[61059]: shutting down. 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: ******************************* 2026-03-09T15:56:41.322 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: ******************************* 2026-03-09T15:56:41.323 INFO:tasks.workunit.client.0.vm01.stdout: open_pools_parallel: ******* SUCCESS ********** 2026-03-09T15:56:41.330 INFO:tasks.workunit.client.0.vm01.stdout: cmd: Running main() from gmock_main.cc 2026-03-09T15:56:41.330 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [==========] Running 3 tests from 1 test suite. 2026-03-09T15:56:41.330 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [----------] Global test environment set-up. 
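Note: the open_pools_parallel / list_parallel output in this stream comes from the multi-process ceph_test_rados_* helpers racing on a shared pool. As a rough single-process illustration of the same pool/object lifecycle (create a pool, write objects, list them back, delete the pool), assuming the standard python-rados bindings; the pool name is a placeholder:

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path is an assumption
    cluster.connect()

    pool = 'example_parallel_pool'  # placeholder, not a name from this run
    cluster.create_pool(pool)
    ioctx = cluster.open_ioctx(pool)

    # write a batch of objects, then list them back and count them
    for i in range(50):
        ioctx.write_full('obj_%d' % i, b'payload')
    print('saw %d objects' % sum(1 for _ in ioctx.list_objects()))

    ioctx.close()
    cluster.delete_pool(pool)
    cluster.shutdown()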
2026-03-09T15:56:41.330 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [----------] 3 tests from NeoRadosCmd 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [ RUN ] NeoRadosCmd.MonDescribe 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [ OK ] NeoRadosCmd.MonDescribe (1383 ms) 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [ RUN ] NeoRadosCmd.OSDCmd 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [ OK ] NeoRadosCmd.OSDCmd (2223 ms) 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [ RUN ] NeoRadosCmd.PGCmd 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [ OK ] NeoRadosCmd.PGCmd (3124 ms) 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [----------] 3 tests from NeoRadosCmd (6730 ms total) 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: cmd: 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [----------] Global test environment tear-down 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [==========] 3 tests from 1 test suite ran. (6730 ms total) 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: cmd: [ PASSED ] 3 tests. 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_1_[60354]: starting. 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_1_[60354]: creating pool ceph_test_rados_delete_pools_parallel.vm01-60307 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_1_[60354]: created object 0... 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_1_[60354]: created object 25... 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_1_[60354]: created object 49... 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_1_[60354]: finishing. 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_1_[60354]: shutting down. 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_2_[60355]: starting. 2026-03-09T15:56:41.331 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_2_[60355]: deleting pool ceph_test_rados_delete_pools_parallel.vm01-60307 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_2_[60355]: shutting down. 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: ******************************* 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_3_[61060]: starting. 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_3_[61060]: creating pool ceph_test_rados_delete_pools_parallel.vm01-60307 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_3_[61060]: created object 0... 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_3_[61060]: created object 25... 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_3_[61060]: created object 49... 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_3_[61060]: finishing. 
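Note: each audit entry in the journalctl streams (and each "api_cmd: got:" line below) embeds the mon command as a JSON array, either bare and ending in ": dispatch" or single-quoted and ending in ": finished". A throwaway parser like the following, illustrative only and not part of the suite, can pull those payloads out when grepping this log; the sample string is one entry copied from the stream above:

    import json
    import re

    # one audit entry copied from the journalctl stream above (audit payload only)
    sample = '''audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch'''

    # cmd=... is either bare (dispatch) or single-quoted (finished); capture the JSON payload
    pattern = re.compile(r"cmd='?(\[.*?\])'?: (dispatch|finished)")

    for m in pattern.finditer(sample):
        payload, state = json.loads(m.group(1)), m.group(2)
        for cmd in payload:
            args = {k: v for k, v in cmd.items() if k != "prefix"}
            print(state, cmd.get("prefix"), args)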
2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_3_[61060]: shutting down. 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: ******************************* 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_5_[61062]: starting. 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_5_[61062]: listing objects. 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_5_[61062]: listed object 0... 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_5_[61062]: listed object 25... 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_5_[61062]: saw 50 objects 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_5_[61062]: shutting down. 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: ******************************* 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_4_[61061]: starting. 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_4_[61061]: deleting pool ceph_test_rados_delete_pools_parallel.vm01-60307 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: process_4_[61061]: shutting down. 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: ******************************* 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: ******************************* 2026-03-09T15:56:41.332 INFO:tasks.workunit.client.0.vm01.stdout: delete_pools_parallel: ******* SUCCESS ********** 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout::0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.097109+0000 client.admin [INF] onexx 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.110436+0000 mon.b [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.112259+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.117112+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.174135+0000 mon.a [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.187915+0000 mon.a [INF] from='client.? 
192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.196670+0000 mon.a [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm01-59965-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:37.226137+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm01-59948-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.237562+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.237690+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.237728+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.237780+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.237812+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.237859+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:41.890 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.237896+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:41.891 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.257192+0000 mon.c [INF] from='client.? 
192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: cluster 2026-03-09T15:56:40.664566+0000 mgr.y (mgr.14520) 107 : cluster [DBG] pgmap v70: 788 pgs: 32 creating+peering, 272 unknown, 484 active+clean; 464 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 57 KiB/s wr, 151 op/s 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: cluster 2026-03-09T15:56:40.664566+0000 mgr.y (mgr.14520) 107 : cluster [DBG] pgmap v70: 788 pgs: 32 creating+peering, 272 unknown, 484 active+clean; 464 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 57 KiB/s wr, 151 op/s 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:40.822429+0000 mon.a (mon.0) 1003 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:40.822429+0000 mon.a (mon.0) 1003 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237562+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237562+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237690+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237690+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237728+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237728+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237780+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237780+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237812+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237812+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237859+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237859+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237896+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.237896+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.257192+0000 mon.c (mon.2) 59 : audit [INF] from='client.? 
192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.257192+0000 mon.c (mon.2) 59 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.261132+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.261132+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.267974+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.267974+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.290656+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.101:0/1090379311' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.290656+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.101:0/1090379311' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.291304+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.291304+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 
192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.293838+0000 mon.c (mon.2) 60 : audit [INF] from='client.? 192.168.123.101:0/4230165384' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.293838+0000 mon.c (mon.2) 60 : audit [INF] from='client.? 192.168.123.101:0/4230165384' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: cluster 2026-03-09T15:56:41.298610+0000 mon.a (mon.0) 1011 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: cluster 2026-03-09T15:56:41.298610+0000 mon.a (mon.0) 1011 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.299578+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.299578+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.299652+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.299652+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.299912+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.299912+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.299962+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.299962+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.310546+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.101:0/1202073198' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.310546+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.101:0/1202073198' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.338447+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.338447+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.338529+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.338529+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.338770+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:41 vm09 bash[22983]: audit 2026-03-09T15:56:41.338770+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: cluster 2026-03-09T15:56:40.664566+0000 mgr.y (mgr.14520) 107 : cluster [DBG] pgmap v70: 788 pgs: 32 creating+peering, 272 unknown, 484 active+clean; 464 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 57 KiB/s wr, 151 op/s 2026-03-09T15:56:42.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: cluster 2026-03-09T15:56:40.664566+0000 mgr.y (mgr.14520) 107 : cluster [DBG] pgmap v70: 788 pgs: 32 creating+peering, 272 unknown, 484 active+clean; 464 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 57 KiB/s wr, 151 op/s 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:40.822429+0000 mon.a (mon.0) 1003 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:40.822429+0000 mon.a (mon.0) 1003 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237562+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237562+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237690+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237690+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237728+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237728+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237780+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237780+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237812+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237812+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237859+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237859+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237896+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.237896+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.257192+0000 mon.c (mon.2) 59 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.257192+0000 mon.c (mon.2) 59 : audit [INF] from='client.? 
192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.261132+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.261132+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.267974+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.267974+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.290656+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.101:0/1090379311' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.290656+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.101:0/1090379311' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.291304+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.291304+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.293838+0000 mon.c (mon.2) 60 : audit [INF] from='client.? 
192.168.123.101:0/4230165384' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.293838+0000 mon.c (mon.2) 60 : audit [INF] from='client.? 192.168.123.101:0/4230165384' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: cluster 2026-03-09T15:56:41.298610+0000 mon.a (mon.0) 1011 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: cluster 2026-03-09T15:56:41.298610+0000 mon.a (mon.0) 1011 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.299578+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.299578+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.299652+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.299652+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.299912+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.299912+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.299962+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.299962+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.310546+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.101:0/1202073198' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.310546+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.101:0/1202073198' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.338447+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.338447+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.338529+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.338529+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.338770+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:41 vm01 bash[28152]: audit 2026-03-09T15:56:41.338770+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: cluster 2026-03-09T15:56:40.664566+0000 mgr.y (mgr.14520) 107 : cluster [DBG] pgmap v70: 788 pgs: 32 creating+peering, 272 unknown, 484 active+clean; 464 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 57 KiB/s wr, 151 op/s 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: cluster 2026-03-09T15:56:40.664566+0000 mgr.y (mgr.14520) 107 : cluster [DBG] pgmap v70: 788 pgs: 32 creating+peering, 272 unknown, 484 active+clean; 464 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 57 KiB/s wr, 151 op/s 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:40.822429+0000 mon.a (mon.0) 1003 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:40.822429+0000 mon.a (mon.0) 1003 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237562+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237562+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm01-59618-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237690+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237690+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237728+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237728+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm01-60007-1"}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237780+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237780+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm01-60504-1"}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237812+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237812+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237859+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237859+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm01-59610-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237896+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.237896+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm01-60464-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.257192+0000 mon.c (mon.2) 59 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.257192+0000 mon.c (mon.2) 59 : audit [INF] from='client.? 
192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.261132+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.261132+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.267974+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.267974+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.290656+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.101:0/1090379311' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.290656+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.101:0/1090379311' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.291304+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.291304+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.293838+0000 mon.c (mon.2) 60 : audit [INF] from='client.? 
192.168.123.101:0/4230165384' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.293838+0000 mon.c (mon.2) 60 : audit [INF] from='client.? 192.168.123.101:0/4230165384' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: cluster 2026-03-09T15:56:41.298610+0000 mon.a (mon.0) 1011 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: cluster 2026-03-09T15:56:41.298610+0000 mon.a (mon.0) 1011 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.299578+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.299578+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.299652+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.299652+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.299912+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.299912+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.299962+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.299962+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.310546+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.101:0/1202073198' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.310546+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.101:0/1202073198' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.338447+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.338447+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.338529+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.338529+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.338770+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:42.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:41 vm01 bash[20728]: audit 2026-03-09T15:56:41.338770+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:43.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:56:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:56:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:56:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:43 vm09 bash[22983]: audit 2026-03-09T15:56:41.823350+0000 mon.a (mon.0) 1019 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:43 vm09 bash[22983]: audit 2026-03-09T15:56:41.823350+0000 mon.a (mon.0) 1019 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:43 vm09 bash[22983]: audit 2026-03-09T15:56:41.894162+0000 mon.c (mon.2) 61 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:43 vm09 bash[22983]: audit 2026-03-09T15:56:41.894162+0000 mon.c (mon.2) 61 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:43 vm09 bash[22983]: cluster 2026-03-09T15:56:41.894374+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-09T15:56:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:43 vm09 bash[22983]: cluster 2026-03-09T15:56:41.894374+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-09T15:56:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:43 vm09 bash[22983]: audit 2026-03-09T15:56:41.894675+0000 mon.a (mon.0) 1020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:43 vm09 bash[22983]: audit 2026-03-09T15:56:41.894675+0000 mon.a (mon.0) 1020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:43 vm09 bash[22983]: audit 2026-03-09T15:56:41.990713+0000 mon.c (mon.2) 62 : audit [DBG] from='client.? 192.168.123.101:0/30086117' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:43 vm09 bash[22983]: audit 2026-03-09T15:56:41.990713+0000 mon.c (mon.2) 62 : audit [DBG] from='client.? 192.168.123.101:0/30086117' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.2611 api_io_pp: Running main() from gmock_main.cc 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [==========] Running 39 tests from 2 test suites. 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [----------] Global test environment set-up. 
2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: seed 59640 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TooBigPP 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.TooBigPP (0 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SimpleWritePP 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.SimpleWritePP (749 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadOpPP 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadOpPP (7 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SparseReadOpPP 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.SparseReadOpPP (48 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP (8 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP2 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP2 (47 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.Checksum 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.Checksum (6 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadIntoBufferlist 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadIntoBufferlist (4 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.OverlappingWriteRoundTripPP 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.OverlappingWriteRoundTripPP (26 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP (8 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP2 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP2 (10 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.AppendRoundTripPP 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.AppendRoundTripPP (8 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TruncTestPP 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.TruncTestPP (12 ms) 2026-03-09T15:56:43.646 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RemoveTestPP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.RemoveTestPP (12 ms) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] 
LibRadosIoPP.XattrsRoundTripPP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrsRoundTripPP (13 ms) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RmXattrPP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.RmXattrPP (309 ms) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.XattrListPP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrListPP (148 ms) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CrcZeroWrite 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.CrcZeroWrite (35 ms) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtPP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtPP (23 ms) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtDNEPP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtDNEPP (4 ms) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtMismatchPP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtMismatchPP (14 ms) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP (1491 ms total) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.SimpleWritePP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SimpleWritePP (2300 ms) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.ReadOpPP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.ReadOpPP (60 ms) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.SparseReadOpPP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SparseReadOpPP (9 ms) 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RoundTripPP 2026-03-09T15:56:43.647 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP (9 ms) 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:43 vm01 bash[28152]: audit 2026-03-09T15:56:41.823350+0000 mon.a (mon.0) 1019 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:43 vm01 bash[28152]: audit 2026-03-09T15:56:41.823350+0000 mon.a (mon.0) 1019 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:43 vm01 bash[28152]: audit 2026-03-09T15:56:41.894162+0000 mon.c (mon.2) 61 : audit [INF] from='client.? 
192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:43 vm01 bash[28152]: audit 2026-03-09T15:56:41.894162+0000 mon.c (mon.2) 61 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:43 vm01 bash[28152]: cluster 2026-03-09T15:56:41.894374+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:43 vm01 bash[28152]: cluster 2026-03-09T15:56:41.894374+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:43 vm01 bash[28152]: audit 2026-03-09T15:56:41.894675+0000 mon.a (mon.0) 1020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:43 vm01 bash[28152]: audit 2026-03-09T15:56:41.894675+0000 mon.a (mon.0) 1020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:43 vm01 bash[28152]: audit 2026-03-09T15:56:41.990713+0000 mon.c (mon.2) 62 : audit [DBG] from='client.? 192.168.123.101:0/30086117' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:43 vm01 bash[28152]: audit 2026-03-09T15:56:41.990713+0000 mon.c (mon.2) 62 : audit [DBG] from='client.? 192.168.123.101:0/30086117' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:43 vm01 bash[20728]: audit 2026-03-09T15:56:41.823350+0000 mon.a (mon.0) 1019 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:43 vm01 bash[20728]: audit 2026-03-09T15:56:41.823350+0000 mon.a (mon.0) 1019 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:43 vm01 bash[20728]: audit 2026-03-09T15:56:41.894162+0000 mon.c (mon.2) 61 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:43 vm01 bash[20728]: audit 2026-03-09T15:56:41.894162+0000 mon.c (mon.2) 61 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:43 vm01 bash[20728]: cluster 2026-03-09T15:56:41.894374+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:43 vm01 bash[20728]: cluster 2026-03-09T15:56:41.894374+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:43 vm01 bash[20728]: audit 2026-03-09T15:56:41.894675+0000 mon.a (mon.0) 1020 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:43 vm01 bash[20728]: audit 2026-03-09T15:56:41.894675+0000 mon.a (mon.0) 1020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:43 vm01 bash[20728]: audit 2026-03-09T15:56:41.990713+0000 mon.c (mon.2) 62 : audit [DBG] from='client.? 192.168.123.101:0/30086117' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:43 vm01 bash[20728]: audit 2026-03-09T15:56:41.990713+0000 mon.c (mon.2) 62 : audit [DBG] from='client.? 192.168.123.101:0/30086117' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T15:56:44.491 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: cluster 2026-03-09T15:56:42.665229+0000 mgr.y (mgr.14520) 108 : cluster [DBG] pgmap v72: 900 pgs: 32 creating+peering, 416 unknown, 452 active+clean; 459 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.4 KiB/s rd, 20 KiB/s wr, 92 op/s 2026-03-09T15:56:44.491 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: cluster 2026-03-09T15:56:42.665229+0000 mgr.y (mgr.14520) 108 : cluster [DBG] pgmap v72: 900 pgs: 32 creating+peering, 416 unknown, 452 active+clean; 459 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.4 KiB/s rd, 20 KiB/s wr, 92 op/s 2026-03-09T15:56:44.491 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: cluster 2026-03-09T15:56:42.715772+0000 mon.a (mon.0) 1021 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:44.491 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: cluster 2026-03-09T15:56:42.715772+0000 mon.a (mon.0) 1021 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:42.824556+0000 mon.a (mon.0) 1022 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:42.824556+0000 mon.a (mon.0) 1022 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.282460+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.282460+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 
192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.282554+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.282554+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.282683+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.282683+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.282917+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.282917+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.283002+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.283002+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.283048+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.283048+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.283167+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.283167+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.283367+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.283367+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.295102+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.295102+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.302884+0000 mon.c (mon.2) 63 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.302884+0000 mon.c (mon.2) 63 : audit [INF] from='client.? 
192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.306038+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.306038+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: cluster 2026-03-09T15:56:43.306841+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: cluster 2026-03-09T15:56:43.306841+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.313896+0000 mon.c (mon.2) 64 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.313896+0000 mon.c (mon.2) 64 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.330541+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.330541+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.349684+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.349684+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 
192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.351166+0000 mon.a (mon.0) 1033 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.351166+0000 mon.a (mon.0) 1033 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: cluster 2026-03-09T15:56:43.352567+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: cluster 2026-03-09T15:56:43.352567+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.353306+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.353306+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.353578+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.353578+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.358888+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.358888+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.359397+0000 mon.a (mon.0) 1037 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.359397+0000 mon.a (mon.0) 1037 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.360304+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.360304+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.363679+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.363679+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.375173+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.375173+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.822628+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:44.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.822628+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:44.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.825939+0000 mon.a (mon.0) 1041 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.825939+0000 mon.a (mon.0) 1041 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.827279+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:44.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:44 vm09 bash[22983]: audit 2026-03-09T15:56:43.827279+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: cluster 2026-03-09T15:56:42.665229+0000 mgr.y (mgr.14520) 108 : cluster [DBG] pgmap v72: 900 pgs: 32 creating+peering, 416 unknown, 452 active+clean; 459 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.4 KiB/s rd, 20 KiB/s wr, 92 op/s 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: cluster 2026-03-09T15:56:42.665229+0000 mgr.y (mgr.14520) 108 : cluster [DBG] pgmap v72: 900 pgs: 32 creating+peering, 416 unknown, 452 active+clean; 459 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.4 KiB/s rd, 20 KiB/s wr, 92 op/s 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: cluster 2026-03-09T15:56:42.715772+0000 mon.a (mon.0) 1021 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: cluster 2026-03-09T15:56:42.715772+0000 mon.a (mon.0) 1021 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:42.824556+0000 mon.a (mon.0) 1022 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:42.824556+0000 mon.a (mon.0) 1022 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.282460+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.282460+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 
192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.282554+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.282554+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.282683+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.282683+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.282917+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.282917+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.283002+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.283002+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.283048+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.283048+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.283167+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.283167+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.283367+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.283367+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.295102+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.295102+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.302884+0000 mon.c (mon.2) 63 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.302884+0000 mon.c (mon.2) 63 : audit [INF] from='client.? 
192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.306038+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.306038+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: cluster 2026-03-09T15:56:43.306841+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: cluster 2026-03-09T15:56:43.306841+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.313896+0000 mon.c (mon.2) 64 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.313896+0000 mon.c (mon.2) 64 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.330541+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.330541+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.349684+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.349684+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 
192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.351166+0000 mon.a (mon.0) 1033 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.351166+0000 mon.a (mon.0) 1033 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: cluster 2026-03-09T15:56:43.352567+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: cluster 2026-03-09T15:56:43.352567+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.353306+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.353306+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.353578+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.353578+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.358888+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.358888+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.359397+0000 mon.a (mon.0) 1037 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.359397+0000 mon.a (mon.0) 1037 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.360304+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.360304+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.363679+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.363679+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.375173+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.375173+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.822628+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.822628+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.825939+0000 mon.a (mon.0) 1041 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.825939+0000 mon.a (mon.0) 1041 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.827279+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:44 vm01 bash[28152]: audit 2026-03-09T15:56:43.827279+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: cluster 2026-03-09T15:56:42.665229+0000 mgr.y (mgr.14520) 108 : cluster [DBG] pgmap v72: 900 pgs: 32 creating+peering, 416 unknown, 452 active+clean; 459 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.4 KiB/s rd, 20 KiB/s wr, 92 op/s 2026-03-09T15:56:44.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: cluster 2026-03-09T15:56:42.665229+0000 mgr.y (mgr.14520) 108 : cluster [DBG] pgmap v72: 900 pgs: 32 creating+peering, 416 unknown, 452 active+clean; 459 KiB data, 377 MiB used, 160 GiB / 160 GiB avail; 2.4 KiB/s rd, 20 KiB/s wr, 92 op/s 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: cluster 2026-03-09T15:56:42.715772+0000 mon.a (mon.0) 1021 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: cluster 2026-03-09T15:56:42.715772+0000 mon.a (mon.0) 1021 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:42.824556+0000 mon.a (mon.0) 1022 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:42.824556+0000 mon.a (mon.0) 1022 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.282460+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.282460+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 
192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm01-59715-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.282554+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.282554+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm01-59735-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.282683+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.282683+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.282917+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.282917+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.283002+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.283002+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.283048+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.283048+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.283167+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.283167+0000 mon.a (mon.0) 1029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.283367+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.283367+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.295102+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.295102+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.302884+0000 mon.c (mon.2) 63 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.302884+0000 mon.c (mon.2) 63 : audit [INF] from='client.? 
192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.306038+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.306038+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: cluster 2026-03-09T15:56:43.306841+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: cluster 2026-03-09T15:56:43.306841+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.313896+0000 mon.c (mon.2) 64 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.313896+0000 mon.c (mon.2) 64 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.330541+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.330541+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.349684+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.349684+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 
192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.351166+0000 mon.a (mon.0) 1033 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.351166+0000 mon.a (mon.0) 1033 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: cluster 2026-03-09T15:56:43.352567+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: cluster 2026-03-09T15:56:43.352567+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.353306+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.353306+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.353578+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.353578+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.358888+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.358888+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.359397+0000 mon.a (mon.0) 1037 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.359397+0000 mon.a (mon.0) 1037 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.360304+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.360304+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.363679+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.363679+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.375173+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.375173+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.822628+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.822628+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.825939+0000 mon.a (mon.0) 1041 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.825939+0000 mon.a (mon.0) 1041 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.827279+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:44.931 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:44 vm01 bash[20728]: audit 2026-03-09T15:56:43.827279+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ R api_stat_pp: Running main() from gmock_main.cc 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [==========] Running 9 tests from 2 test suites. 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [----------] Global test environment set-up. 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: seed 59965 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPP 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPP (717 ms) 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.Stat2Mtime2PP 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ OK ] LibRadosStatPP.Stat2Mtime2PP (75 ms) 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.ClusterStatPP 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ OK ] LibRadosStatPP.ClusterStatPP (0 ms) 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.PoolStatPP 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ OK ] LibRadosStatPP.PoolStatPP (54 ms) 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPPNS 2026-03-09T15:56:45.685 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPPNS (46 ms) 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP (892 ms total) 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPP 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPP (1181 ms) 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.ClusterStatPP 2026-03-09T15:56:45.686 
INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.ClusterStatPP (0 ms) 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.PoolStatPP 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.PoolStatPP (8 ms) 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPPNS 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPPNS (10 ms) 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP (1199 ms total) 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [----------] Global test environment tear-down 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [==========] 9 tests from 2 test suites ran. (11391 ms total) 2026-03-09T15:56:45.686 INFO:tasks.workunit.client.0.vm01.stdout: api_stat_pp: [ PASSED ] 9 tests. 2026-03-09T15:56:45.783 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: Running main() from gmock_main.cc 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [==========] Running 9 tests from 2 test suites. 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [----------] Global test environment set-up. 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [----------] 5 tests from LibRadosStat 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ RUN ] LibRadosStat.Stat 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ OK ] LibRadosStat.Stat (582 ms) 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ RUN ] LibRadosStat.Stat2 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ OK ] LibRadosStat.Stat2 (212 ms) 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ RUN ] LibRadosStat.StatNS 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ OK ] LibRadosStat.StatNS (63 ms) 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ RUN ] LibRadosStat.ClusterStat 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ OK ] LibRadosStat.ClusterStat (1 ms) 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ RUN ] LibRadosStat.PoolStat 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ OK ] LibRadosStat.PoolStat (13 ms) 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [----------] 5 tests from LibRadosStat (871 ms total) 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [----------] 4 tests from LibRadosStatEC 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ RUN ] LibRadosStatEC.Stat 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ OK ] LibRadosStatEC.Stat (1352 ms) 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ RUN ] LibRadosStatEC.StatNS 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ OK ] LibRadosStatEC.StatNS (23 ms) 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ RUN ] 
LibRadosStatEC.ClusterStat 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ OK ] LibRadosStatEC.ClusterStat (1 ms) 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ RUN ] LibRadosStatEC.PoolStat 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ OK ] LibRadosStatEC.PoolStat (3 ms) 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [----------] 4 tests from LibRadosStatEC (1379 ms total) 2026-03-09T15:56:45.784 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: 2026-03-09T15:56:45.785 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [----------] Global test environment tear-down 2026-03-09T15:56:45.785 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [==========] 9 tests from 2 test suites ran. (11505 ms total) 2026-03-09T15:56:45.785 INFO:tasks.workunit.client.0.vm01.stdout: api_stat: [ PASSED ] 9 tests. 2026-03-09T15:56:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: cluster 2026-03-09T15:56:44.283877+0000 mon.a (mon.0) 1043 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:46.161 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: cluster 2026-03-09T15:56:44.283877+0000 mon.a (mon.0) 1043 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:46.161 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.401880+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.401880+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.401927+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.401927+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.401957+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.401957+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.401987+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.401987+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.402009+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.402009+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.402035+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.402035+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.418241+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.418241+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: cluster 2026-03-09T15:56:44.439523+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: cluster 2026-03-09T15:56:44.439523+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.442138+0000 mon.c (mon.2) 66 : audit [INF] from='client.? 
192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.442138+0000 mon.c (mon.2) 66 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.468815+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.468815+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.473057+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.473057+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.481275+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.481275+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.490683+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.490683+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: cluster 2026-03-09T15:56:44.666196+0000 mgr.y (mgr.14520) 109 : cluster [DBG] pgmap v75: 772 pgs: 80 creating+peering, 40 unknown, 652 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 188 op/s 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: cluster 2026-03-09T15:56:44.666196+0000 mgr.y (mgr.14520) 109 : cluster [DBG] pgmap v75: 772 pgs: 80 creating+peering, 40 unknown, 652 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 188 op/s 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.827117+0000 mon.a (mon.0) 1055 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.827117+0000 mon.a (mon.0) 1055 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.886717+0000 mon.c (mon.2) 67 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.886717+0000 mon.c (mon.2) 67 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.887213+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.887213+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.937026+0000 mon.c (mon.2) 68 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.937026+0000 mon.c (mon.2) 68 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.937445+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:45 vm09 bash[22983]: audit 2026-03-09T15:56:44.937445+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: cluster 2026-03-09T15:56:44.283877+0000 mon.a (mon.0) 1043 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: cluster 2026-03-09T15:56:44.283877+0000 mon.a (mon.0) 1043 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.401880+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.401880+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.401927+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.401927+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.401957+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.401957+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.401987+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 
192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.401987+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.402009+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.402009+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.402035+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.402035+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.418241+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.418241+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: cluster 2026-03-09T15:56:44.439523+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: cluster 2026-03-09T15:56:44.439523+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.442138+0000 mon.c (mon.2) 66 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.442138+0000 mon.c (mon.2) 66 : audit [INF] from='client.? 
192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.468815+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.468815+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.473057+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.473057+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.481275+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.481275+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.490683+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.490683+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T15:56:46.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: cluster 2026-03-09T15:56:44.666196+0000 mgr.y (mgr.14520) 109 : cluster [DBG] pgmap v75: 772 pgs: 80 creating+peering, 40 unknown, 652 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 188 op/s 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: cluster 2026-03-09T15:56:44.666196+0000 mgr.y (mgr.14520) 109 : cluster [DBG] pgmap v75: 772 pgs: 80 creating+peering, 40 unknown, 652 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 188 op/s 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.827117+0000 mon.a (mon.0) 1055 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.827117+0000 mon.a (mon.0) 1055 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.886717+0000 mon.c (mon.2) 67 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.886717+0000 mon.c (mon.2) 67 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.887213+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.887213+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.937026+0000 mon.c (mon.2) 68 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.937026+0000 mon.c (mon.2) 68 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.937445+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:45 vm01 bash[28152]: audit 2026-03-09T15:56:44.937445+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: cluster 2026-03-09T15:56:44.283877+0000 mon.a (mon.0) 1043 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: cluster 2026-03-09T15:56:44.283877+0000 mon.a (mon.0) 1043 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.401880+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.401880+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.401927+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.401927+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-4-cache", "mode": "writeback"}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.401957+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.401957+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.401987+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.401987+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 
192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.402009+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.402009+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.402035+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.402035+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm01-59640-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.418241+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.418241+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.101:0/614697305' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: cluster 2026-03-09T15:56:44.439523+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: cluster 2026-03-09T15:56:44.439523+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.442138+0000 mon.c (mon.2) 66 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.442138+0000 mon.c (mon.2) 66 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.468815+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? 
192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.468815+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.473057+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.473057+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.481275+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.481275+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.490683+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.490683+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: cluster 2026-03-09T15:56:44.666196+0000 mgr.y (mgr.14520) 109 : cluster [DBG] pgmap v75: 772 pgs: 80 creating+peering, 40 unknown, 652 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 188 op/s 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: cluster 2026-03-09T15:56:44.666196+0000 mgr.y (mgr.14520) 109 : cluster [DBG] pgmap v75: 772 pgs: 80 creating+peering, 40 unknown, 652 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 188 op/s 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.827117+0000 mon.a (mon.0) 1055 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.827117+0000 mon.a (mon.0) 1055 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.886717+0000 mon.c (mon.2) 67 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.886717+0000 mon.c (mon.2) 67 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.887213+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.887213+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]: dispatch 2026-03-09T15:56:46.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.937026+0000 mon.c (mon.2) 68 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.937026+0000 mon.c (mon.2) 68 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.937445+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:45 vm01 bash[20728]: audit 2026-03-09T15:56:44.937445+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]: dispatch 2026-03-09T15:56:46.517 INFO:tasks.workunit.client.0.vm01.stdout:32+0000 mon.b [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:46.517 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.267974+0000 mon.b [INF] from='client.? 192.168.123.101:0/2400822232' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:46.517 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.290656+0000 mon.b [INF] from='client.? 
192.168.123.101:0/1090379311' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.291304+0000 mon.b [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.293838+0000 mon.c [INF] from='client.? 192.168.123.101:0/4230165384' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.299578+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-4", "overlaypool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.299652+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm01-60464-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.299912+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm01-59610-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.299962+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.310546+0000 mon.b [INF] from='client.? 192.168.123.101:0/1202073198' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.338447+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm01-59602-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.338529+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm01-60504-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.338770+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.894162+0000 mon.c [INF] from='client.? 
192.168.123.101:0/3111254435' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.894374+0000 client.admin [INF] threexx 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: got: 2026-03-09T15:56:41.894675+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [ OK ] LibRadosCmd.WatchLog (9547 ms) 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [----------] 4 tests from LibRadosCmd (12163 ms total) 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [----------] Global test environment tear-down 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [==========] 4 tests from 1 test suite ran. (12164 ms total) 2026-03-09T15:56:46.518 INFO:tasks.workunit.client.0.vm01.stdout: api_cmd: [ PASSED ] 4 tests. 2026-03-09T15:56:46.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:56:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: Running main() from gmock_main.cc 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [==========] Running 24 tests from 2 test suites. 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [----------] Global test environment set-up. 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [----------] 14 tests from LibRadosIo 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.SimpleWrite 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.SimpleWrite (848 ms) 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.TooBig 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.TooBig (0 ms) 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.ReadTimeout 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: no timeout :/ 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: no timeout :/ 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: no timeout :/ 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: no timeout :/ 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: no timeout :/ 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.ReadTimeout (73 ms) 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.RoundTrip 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.RoundTrip (31 ms) 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.Checksum 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.Checksum (6 ms) 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.OverlappingWriteRoundTrip 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.OverlappingWriteRoundTrip (8 ms) 2026-03-09T15:56:46.871 
INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.WriteFullRoundTrip 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.WriteFullRoundTrip (15 ms) 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.AppendRoundTrip 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.AppendRoundTrip (38 ms) 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.ZeroLenZero 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.ZeroLenZero (12 ms) 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.TruncTest 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.TruncTest (350 ms) 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.RemoveTest 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.RemoveTest (53 ms) 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.XattrsRoundTrip 2026-03-09T15:56:46.871 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.XattrsRoundTrip (31 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.RmXattr 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.RmXattr (54 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIo.XattrIter 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIo.XattrIter (16 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [----------] 14 tests from LibRadosIo (1535 ms total) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [----------] 10 tests from LibRadosIoEC 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIoEC.SimpleWrite 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIoEC.SimpleWrite (2286 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIoEC.RoundTrip 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIoEC.RoundTrip (27 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIoEC.OverlappingWriteRoundTrip 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIoEC.OverlappingWriteRoundTrip (24 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIoEC.WriteFullRoundTrip 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIoEC.WriteFullRoundTrip (13 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIoEC.AppendRoundTrip 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIoEC.AppendRoundTrip (18 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIoEC.TruncTest 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIoEC.TruncTest (15 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIoEC.RemoveTest 2026-03-09T15:56:46.872 
INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIoEC.RemoveTest (9 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIoEC.XattrsRoundTrip 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIoEC.XattrsRoundTrip (28 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIoEC.RmXattr 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIoEC.RmXattr (109 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ RUN ] LibRadosIoEC.XattrIter 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ OK ] LibRadosIoEC.XattrIter (10 ms) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [----------] 10 tests from LibRadosIoEC (2539 ms total) 2026-03-09T15:56:46.872 INFO:tasks.workunit.client.0.vm01.stdout: api_io: 2026-03-09T15:56:46.882 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [----------] Global test environment tear-down 2026-03-09T15:56:46.882 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [==========] 24 tests from 2 test suites ran. (12837 ms total) 2026-03-09T15:56:46.883 INFO:tasks.workunit.client.0.vm01.stdout: api_io: [ PASSED ] 24 tests. 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543454+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543454+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543497+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543497+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543519+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543519+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543546+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543546+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543568+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543568+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543595+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.543595+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]': finished 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: cluster 2026-03-09T15:56:45.628355+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: cluster 2026-03-09T15:56:45.628355+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.690120+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.690120+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.690202+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 
192.168.123.101:0/3696993336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.690202+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.101:0/3696993336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.692917+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.101:0/3784339430' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.692917+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.101:0/3784339430' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.695845+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.101:0/749011770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.695845+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.101:0/749011770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.741898+0000 mon.c (mon.2) 69 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.741898+0000 mon.c (mon.2) 69 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.743048+0000 mon.c (mon.2) 70 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.743048+0000 mon.c (mon.2) 70 : audit [INF] from='client.? 
192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.745603+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.745603+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.785282+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.785282+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.785364+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.785364+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.785427+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.785427+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.785526+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.785526+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.821775+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.821775+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.822000+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.822000+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.831009+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.831009+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.831278+0000 mon.a (mon.0) 1072 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: audit 2026-03-09T15:56:45.831278+0000 mon.a (mon.0) 1072 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: cluster 2026-03-09T15:56:46.545510+0000 mon.a (mon.0) 1073 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:56:47.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:46 vm09 bash[22983]: cluster 2026-03-09T15:56:46.545510+0000 mon.a (mon.0) 1073 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:56:47.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543454+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:47.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543454+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:47.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543497+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:47.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543497+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543519+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543519+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543546+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543546+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543568+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543568+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543595+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.543595+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: cluster 2026-03-09T15:56:45.628355+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: cluster 2026-03-09T15:56:45.628355+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.690120+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.690120+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.690202+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.101:0/3696993336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.690202+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.101:0/3696993336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.692917+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.101:0/3784339430' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.692917+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.101:0/3784339430' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.695845+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.101:0/749011770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.695845+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 
192.168.123.101:0/749011770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.741898+0000 mon.c (mon.2) 69 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.741898+0000 mon.c (mon.2) 69 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.743048+0000 mon.c (mon.2) 70 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.743048+0000 mon.c (mon.2) 70 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.745603+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.745603+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.785282+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.785282+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543454+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543454+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm01-60504-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543497+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543497+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? 192.168.123.101:0/3815248212' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm01-59965-7"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543519+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543519+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543546+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543546+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm01-59948-7"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543568+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543568+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-60007-6", "pg_num": 4}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543595+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.543595+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-4"}]': finished 2026-03-09T15:56:47.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: cluster 2026-03-09T15:56:45.628355+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: cluster 2026-03-09T15:56:45.628355+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.690120+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.690120+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.690202+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.101:0/3696993336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.690202+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.101:0/3696993336' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.692917+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.101:0/3784339430' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.692917+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.101:0/3784339430' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.695845+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.101:0/749011770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.695845+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 
192.168.123.101:0/749011770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.741898+0000 mon.c (mon.2) 69 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.741898+0000 mon.c (mon.2) 69 : audit [INF] from='client.? 192.168.123.101:0/2577468574' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.743048+0000 mon.c (mon.2) 70 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.743048+0000 mon.c (mon.2) 70 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.745603+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.745603+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.101:0/1135617012' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.785282+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.785282+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.785364+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.785364+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.785427+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.785427+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.785526+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.785526+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.821775+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.821775+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.822000+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.822000+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.831009+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.831009+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.831278+0000 mon.a (mon.0) 1072 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: audit 2026-03-09T15:56:45.831278+0000 mon.a (mon.0) 1072 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: cluster 2026-03-09T15:56:46.545510+0000 mon.a (mon.0) 1073 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:46 vm01 bash[28152]: cluster 2026-03-09T15:56:46.545510+0000 mon.a (mon.0) 1073 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.785364+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.785364+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.785427+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.785427+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.785526+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.785526+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.821775+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.821775+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.822000+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.822000+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.831009+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.831009+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.831278+0000 mon.a (mon.0) 1072 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: audit 2026-03-09T15:56:45.831278+0000 mon.a (mon.0) 1072 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: cluster 2026-03-09T15:56:46.545510+0000 mon.a (mon.0) 1073 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:56:47.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:46 vm01 bash[20728]: cluster 2026-03-09T15:56:46.545510+0000 mon.a (mon.0) 1073 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:56:47.867 INFO:tasks.workunit.client.0.vm01.stdout: list: Running main() from gmock_main.cc 2026-03-09T15:56:47.867 INFO:tasks.workunit.client.0.vm01.stdout: list: [==========] Running 3 tests from 1 test suite. 2026-03-09T15:56:47.867 INFO:tasks.workunit.client.0.vm01.stdout: list: [----------] Global test environment set-up. 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: [----------] 3 tests from NeoradosList 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: [ RUN ] NeoradosList.ListObjects 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: [ OK ] NeoradosList.ListObjects (2292 ms) 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: [ RUN ] NeoradosList.ListObjectsNS 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: [ OK ] NeoradosList.ListObjectsNS (3381 ms) 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: [ RUN ] NeoradosList.ListObjectsMany 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: [ OK ] NeoradosList.ListObjectsMany (7538 ms) 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: [----------] 3 tests from NeoradosList (13211 ms total) 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: [----------] Global test environment tear-down 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: [==========] 3 tests from 1 test suite ran. (13211 ms total) 2026-03-09T15:56:47.868 INFO:tasks.workunit.client.0.vm01.stdout: list: [ PASSED ] 3 tests. 
2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.234975+0000 mgr.y (mgr.14520) 110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.234975+0000 mgr.y (mgr.14520) 110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: cluster 2026-03-09T15:56:46.666758+0000 mgr.y (mgr.14520) 111 : cluster [DBG] pgmap v77: 808 pgs: 48 creating+peering, 180 unknown, 580 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 2.2 KiB/s wr, 184 op/s 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: cluster 2026-03-09T15:56:46.666758+0000 mgr.y (mgr.14520) 111 : cluster [DBG] pgmap v77: 808 pgs: 48 creating+peering, 180 unknown, 580 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 2.2 KiB/s wr, 184 op/s 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786706+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786706+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786747+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786747+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786776+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786776+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786797+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786797+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786819+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786819+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786843+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786843+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786869+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.786869+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: cluster 2026-03-09T15:56:46.802442+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: cluster 2026-03-09T15:56:46.802442+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.810756+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.810756+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 
192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.824694+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.824694+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.829913+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.829913+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.830298+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.830298+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.832527+0000 mon.a (mon.0) 1083 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.832527+0000 mon.a (mon.0) 1083 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.841018+0000 mon.c (mon.2) 72 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.841018+0000 mon.c (mon.2) 72 : audit [INF] from='client.? 
192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.862037+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.862037+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.862176+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.862176+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.865154+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.865154+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.870989+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.870989+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.871331+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.871331+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.875582+0000 mon.c (mon.2) 74 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.875582+0000 mon.c (mon.2) 74 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.875953+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.875953+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.888013+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.888013+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.888318+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:46.888318+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:47.806521+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:47 vm09 bash[22983]: audit 2026-03-09T15:56:47.806521+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.234975+0000 mgr.y (mgr.14520) 110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.234975+0000 mgr.y (mgr.14520) 110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: cluster 2026-03-09T15:56:46.666758+0000 mgr.y (mgr.14520) 111 : cluster [DBG] pgmap v77: 808 pgs: 48 creating+peering, 180 unknown, 580 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 2.2 KiB/s wr, 184 op/s 2026-03-09T15:56:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: cluster 2026-03-09T15:56:46.666758+0000 mgr.y (mgr.14520) 111 : cluster [DBG] pgmap v77: 808 pgs: 48 creating+peering, 180 unknown, 580 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 2.2 KiB/s wr, 184 op/s 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786706+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786706+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786747+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786747+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786776+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786776+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786797+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786797+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786819+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786819+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786843+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786843+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786869+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.786869+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: cluster 2026-03-09T15:56:46.802442+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: cluster 2026-03-09T15:56:46.802442+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.810756+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 
192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.810756+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.824694+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.824694+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.829913+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.829913+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.830298+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.830298+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.832527+0000 mon.a (mon.0) 1083 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.832527+0000 mon.a (mon.0) 1083 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.841018+0000 mon.c (mon.2) 72 : audit [INF] from='client.? 
192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.841018+0000 mon.c (mon.2) 72 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.862037+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.862037+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.862176+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.862176+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.865154+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.865154+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.870989+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.870989+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 
192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.871331+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.871331+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.875582+0000 mon.c (mon.2) 74 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.875582+0000 mon.c (mon.2) 74 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.875953+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.875953+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.888013+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.888013+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.888318+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:46.888318+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:47.806521+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:47 vm01 bash[20728]: audit 2026-03-09T15:56:47.806521+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.234975+0000 mgr.y (mgr.14520) 110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.234975+0000 mgr.y (mgr.14520) 110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: cluster 2026-03-09T15:56:46.666758+0000 mgr.y (mgr.14520) 111 : cluster [DBG] pgmap v77: 808 pgs: 48 creating+peering, 180 unknown, 580 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 2.2 KiB/s wr, 184 op/s 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: cluster 2026-03-09T15:56:46.666758+0000 mgr.y (mgr.14520) 111 : cluster [DBG] pgmap v77: 808 pgs: 48 creating+peering, 180 unknown, 580 active+clean; 464 KiB data, 390 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 2.2 KiB/s wr, 184 op/s 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786706+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786706+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786747+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786747+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm01-59602-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786776+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786776+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786797+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786797+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786819+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786819+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm01-59618-16"}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786843+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786843+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786869+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.786869+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-4", "tierpool": "test-rados-api-vm01-59821-4-cache"}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: cluster 2026-03-09T15:56:46.802442+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: cluster 2026-03-09T15:56:46.802442+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.810756+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.810756+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.101:0/409180117' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.824694+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.824694+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.829913+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.829913+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.830298+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.830298+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.832527+0000 mon.a (mon.0) 1083 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.832527+0000 mon.a (mon.0) 1083 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.841018+0000 mon.c (mon.2) 72 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.841018+0000 mon.c (mon.2) 72 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.862037+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.862037+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.862176+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.862176+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.865154+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.865154+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.870989+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 
192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.870989+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.871331+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.871331+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.875582+0000 mon.c (mon.2) 74 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.875582+0000 mon.c (mon.2) 74 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.875953+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.875953+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.888013+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.888013+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.888318+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:46.888318+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:47.806521+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:47 vm01 bash[28152]: audit 2026-03-09T15:56:47.806521+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: Running main() from gmock_main.cc 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: [==========] Running 2 tests from 1 test suite. 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: [----------] Global test environment set-up. 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: [----------] 2 tests from NeoRadosECIo 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: [ RUN ] NeoRadosECIo.SimpleWrite 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: [ OK ] NeoRadosECIo.SimpleWrite (5622 ms) 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: [ RUN ] NeoRadosECIo.ReadOp 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: [ OK ] NeoRadosECIo.ReadOp (8103 ms) 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: [----------] 2 tests from NeoRadosECIo (13725 ms total) 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: [----------] Global test environment tear-down 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: [==========] 2 tests from 1 test suite ran. (13726 ms total) 2026-03-09T15:56:48.372 INFO:tasks.workunit.client.0.vm01.stdout: ec_io: [ PASSED ] 2 tests. 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.806585+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.806585+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.806620+0000 mon.a (mon.0) 1092 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.806620+0000 mon.a (mon.0) 1092 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.806649+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.806649+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.806685+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.806685+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.822856+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.822856+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.822999+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.822999+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 
192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.833713+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.101:0/328386699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.833713+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.101:0/328386699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.837471+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.837471+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:47.840400+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:47.840400+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.852557+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.852557+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.855349+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.855349+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 
192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.878646+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.878646+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.885602+0000 mon.a (mon.0) 1096 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.885602+0000 mon.a (mon.0) 1096 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.885817+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.885817+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.885948+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.885948+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.926138+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.926138+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.929954+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.929954+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.930032+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.930032+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.930145+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.930145+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.930241+0000 mon.a (mon.0) 1103 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.930241+0000 mon.a (mon.0) 1103 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.930299+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.930299+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 
192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.931542+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:47.931542+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.003495+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.003495+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.018315+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.018315+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.078652+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.078652+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:48.219019+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:48.219019+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:48.219325+0000 mon.a (mon.0) 1108 : cluster [WRN] pool 'PoolQuotaPP_vm01-59610-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:48.219325+0000 mon.a (mon.0) 1108 : cluster [WRN] pool 'PoolQuotaPP_vm01-59610-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:48.220815+0000 mon.a (mon.0) 1109 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:48.220815+0000 mon.a (mon.0) 1109 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:48.220828+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:48.220828+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.360275+0000 mon.b (mon.1) 81 : audit [DBG] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.360275+0000 mon.b (mon.1) 81 : audit [DBG] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363179+0000 mon.a (mon.0) 1111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363179+0000 mon.a (mon.0) 1111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363245+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363245+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363279+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363279+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363312+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363312+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363479+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363479+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363511+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363511+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363786+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.363786+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.368800+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.368800+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.368907+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.368907+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.368972+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.368972+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.373981+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.373981+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 
192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.385536+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.101:0/2195239414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.385536+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.101:0/2195239414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:48.387631+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: cluster 2026-03-09T15:56:48.387631+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.393184+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.393184+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.393474+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.393474+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.395116+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.101:0/1824061709' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.395116+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.101:0/1824061709' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.395211+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 
192.168.123.101:0/3499679607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.395211+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.101:0/3499679607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.406357+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.406357+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.406563+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.406563+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.406794+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.406794+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.408923+0000 mon.a (mon.0) 1124 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.408923+0000 mon.a (mon.0) 1124 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.412418+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.412418+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.413339+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.413339+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.758042+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.758042+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.758901+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.758901+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.759471+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.759471+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.760476+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.760476+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.761113+0000 mon.a (mon.0) 1131 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.761113+0000 mon.a (mon.0) 1131 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.761829+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.761829+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.763596+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.763596+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.765106+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.765106+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.765935+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.765935+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.806585+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.806585+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.806620+0000 mon.a (mon.0) 1092 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.806620+0000 mon.a (mon.0) 1092 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.806649+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.806649+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.806685+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.806685+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.822856+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.822856+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.822999+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 
192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.822999+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.833713+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.101:0/328386699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.833713+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.101:0/328386699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.837471+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.837471+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:47.840400+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:47.840400+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.852557+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.852557+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.855349+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 
192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.855349+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.878646+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.878646+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.885602+0000 mon.a (mon.0) 1096 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.885602+0000 mon.a (mon.0) 1096 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.885817+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.885817+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.885948+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.885948+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.926138+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.926138+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.929954+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.929954+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.930032+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.930032+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.930145+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.930145+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.930241+0000 mon.a (mon.0) 1103 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.930241+0000 mon.a (mon.0) 1103 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.930299+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 
192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.930299+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.931542+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:47.931542+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.003495+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.003495+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.018315+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.018315+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.078652+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.078652+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:48.219019+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:48.219019+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:48.219325+0000 mon.a (mon.0) 1108 : cluster [WRN] pool 'PoolQuotaPP_vm01-59610-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:48.219325+0000 mon.a (mon.0) 1108 : cluster [WRN] pool 'PoolQuotaPP_vm01-59610-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:48.220815+0000 mon.a (mon.0) 1109 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:48.220815+0000 mon.a (mon.0) 1109 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:48.220828+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:48.220828+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.360275+0000 mon.b (mon.1) 81 : audit [DBG] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.360275+0000 mon.b (mon.1) 81 : audit [DBG] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363179+0000 mon.a (mon.0) 1111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363179+0000 mon.a (mon.0) 1111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363245+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363245+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363279+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363279+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363312+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363312+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363479+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363479+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363511+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363511+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363786+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.181 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.363786+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.368800+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.368800+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.368907+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.368907+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.368972+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.368972+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.373981+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.373981+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 
192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.385536+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.101:0/2195239414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.385536+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.101:0/2195239414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:48.387631+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: cluster 2026-03-09T15:56:48.387631+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.393184+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.393184+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.393474+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.393474+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.395116+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.101:0/1824061709' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.395116+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.101:0/1824061709' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.395211+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 
192.168.123.101:0/3499679607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.395211+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.101:0/3499679607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.406357+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.406357+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.406563+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.406563+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.406794+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.406794+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.408923+0000 mon.a (mon.0) 1124 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.408923+0000 mon.a (mon.0) 1124 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.412418+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.412418+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.413339+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.413339+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.758042+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.758042+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.758901+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.758901+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.759471+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.759471+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.760476+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.760476+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.761113+0000 mon.a (mon.0) 1131 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.761113+0000 mon.a (mon.0) 1131 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.761829+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.761829+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.763596+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.763596+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.765106+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.765106+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.765935+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.765935+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.767022+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:49.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:48 vm01 bash[20728]: audit 2026-03-09T15:56:48.767022+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:49.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.767022+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:49.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:48 vm01 bash[28152]: audit 2026-03-09T15:56:48.767022+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.806585+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.806585+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.806620+0000 mon.a (mon.0) 1092 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.806620+0000 mon.a (mon.0) 1092 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.806649+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.806649+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "overlaypool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.806685+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.806685+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.822856+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.822856+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.822999+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.822999+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.833713+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.101:0/328386699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.833713+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.101:0/328386699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.837471+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.837471+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 
192.168.123.101:0/3058730445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:47.840400+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:47.840400+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.852557+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.852557+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.855349+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.855349+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.878646+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.878646+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.885602+0000 mon.a (mon.0) 1096 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.885602+0000 mon.a (mon.0) 1096 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.885817+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.885817+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.885948+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.885948+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.926138+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.926138+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.929954+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.929954+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.930032+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.930032+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.930145+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.930145+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.930241+0000 mon.a (mon.0) 1103 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.930241+0000 mon.a (mon.0) 1103 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.930299+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.930299+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.931542+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:47.931542+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.003495+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.003495+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.018315+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 
192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.018315+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.078652+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.078652+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:48.219019+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:48.219019+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:48.219325+0000 mon.a (mon.0) 1108 : cluster [WRN] pool 'PoolQuotaPP_vm01-59610-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:48.219325+0000 mon.a (mon.0) 1108 : cluster [WRN] pool 'PoolQuotaPP_vm01-59610-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:48.220815+0000 mon.a (mon.0) 1109 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:48.220815+0000 mon.a (mon.0) 1109 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:48.220828+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:48.220828+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.360275+0000 mon.b (mon.1) 81 : audit [DBG] from='client.? 
192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.360275+0000 mon.b (mon.1) 81 : audit [DBG] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363179+0000 mon.a (mon.0) 1111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363179+0000 mon.a (mon.0) 1111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm01-60464-2"}]': finished 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363245+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]': finished 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363245+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-60007-6", "mode": "writeback"}]': finished 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363279+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363279+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363312+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363312+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:49.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363479+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363479+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363511+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363511+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm01-59878-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363786+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.363786+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm01-59640-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.368800+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.368800+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.101:0/1175039908' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.368907+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.368907+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.101:0/1435730302' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.368972+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.368972+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 
192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.373981+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.373981+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.385536+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.101:0/2195239414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.385536+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.101:0/2195239414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:48.387631+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: cluster 2026-03-09T15:56:48.387631+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.393184+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.393184+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.393474+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.393474+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.395116+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 
192.168.123.101:0/1824061709' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.395116+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.101:0/1824061709' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.395211+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.101:0/3499679607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.395211+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.101:0/3499679607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.406357+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.406357+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.406563+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.406563+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.406794+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.406794+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.408923+0000 mon.a (mon.0) 1124 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.408923+0000 mon.a (mon.0) 1124 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.412418+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.412418+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.413339+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.413339+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.758042+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.758042+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.758901+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:49.386 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.758901+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.759471+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.759471+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.760476+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.760476+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.761113+0000 mon.a (mon.0) 1131 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.761113+0000 mon.a (mon.0) 1131 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.761829+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.761829+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.763596+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.763596+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.765106+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.765106+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.765935+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.765935+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.767022+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:49.387 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:48 vm09 bash[22983]: audit 2026-03-09T15:56:48.767022+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: Running main() from gmock_main.cc 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [==========] Running 16 tests from 2 test suites. 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [----------] Global test environment set-up. 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: seed 59735 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusivePP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusivePP (774 ms) 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedPP (104 ms) 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusiveDurPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusiveDurPP (1024 ms) 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedDurPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedDurPP (1104 ms) 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockMayRenewPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockMayRenewPP (5 ms) 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.UnlockPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockPP.UnlockPP (8 ms) 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.ListLockersPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockPP.ListLockersPP (6 ms) 
2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.BreakLockPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockPP.BreakLockPP (8 ms) 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP (3033 ms total) 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusivePP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusivePP (1581 ms) 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedPP (28 ms) 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusiveDurPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusiveDurPP (1040 ms) 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedDurPP 2026-03-09T15:56:49.438 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedDurPP (1115 ms) 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockMayRenewPP 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockMayRenewPP (27 ms) 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.UnlockPP 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.UnlockPP (16 ms) 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.ListLockersPP 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.ListLockersPP (16 ms) 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.BreakLockPP 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.BreakLockPP (9 ms) 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP (3833 ms total) 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [----------] Global test environment tear-down 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [==========] 16 tests from 2 test suites ran. (15323 ms total) 2026-03-09T15:56:49.439 INFO:tasks.workunit.client.0.vm01.stdout: api_lock_pp: [ PASSED ] 16 tests. 2026-03-09T15:56:49.462 INFO:tasks.workunit.client.0.vm01.stdout: pool: Running main() from gmock_main.cc 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [==========] Running 6 tests from 1 test suite. 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [----------] Global test environment set-up. 
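The api_lock_pp run above (LibRadosLockPP / LibRadosLockECPP, 16 tests passed) exercises the advisory-lock calls of the librados API; the surrounding mon audit entries are the same fixtures creating and tearing down their per-test pools, erasure-code profiles, and crush rules. A minimal sketch of the underlying C calls, assuming a reachable cluster configured for pool deletion; the pool name, object name, and cookie below are illustrative and are not the generated names seen in this run:

    /*
     * Sketch of the librados advisory-lock calls that the LibRadosLock*
     * suites drive.  Names are illustrative; errors are reduced to asserts.
     */
    #include <assert.h>
    #include <rados/librados.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t ioctx;

        assert(rados_create(&cluster, NULL) == 0);        /* default client.admin */
        assert(rados_conf_read_file(cluster, NULL) == 0); /* default ceph.conf search */
        assert(rados_connect(cluster) == 0);

        assert(rados_pool_create(cluster, "lock-demo") == 0);
        assert(rados_ioctx_create(cluster, "lock-demo", &ioctx) == 0);

        /* Take and release an exclusive advisory lock, as LockExclusive/Unlock do.
         * The suites additionally cover shared locks, durations, renewals,
         * listing lockers, and breaking a lock held by another client. */
        assert(rados_lock_exclusive(ioctx, "obj", "lock1", "cookie1",
                                    "demo lock", NULL /* no timeout */, 0) == 0);
        assert(rados_unlock(ioctx, "obj", "lock1", "cookie1") == 0);

        rados_ioctx_destroy(ioctx);
        /* Pool deletion requires mon_allow_pool_delete=true on the cluster. */
        assert(rados_pool_delete(cluster, "lock-demo") == 0);
        rados_shutdown(cluster);
        return 0;
    }
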
2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [----------] 6 tests from NeoRadosPools 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ RUN ] NeoRadosPools.PoolList 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ OK ] NeoRadosPools.PoolList (1269 ms) 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ RUN ] NeoRadosPools.PoolLookup 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ OK ] NeoRadosPools.PoolLookup (2229 ms) 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ RUN ] NeoRadosPools.PoolLookupOtherInstance 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ OK ] NeoRadosPools.PoolLookupOtherInstance (2084 ms) 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ RUN ] NeoRadosPools.PoolDelete 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ OK ] NeoRadosPools.PoolDelete (5397 ms) 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateDelete 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ OK ] NeoRadosPools.PoolCreateDelete (2199 ms) 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateWithCrushRule 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ OK ] NeoRadosPools.PoolCreateWithCrushRule (1566 ms) 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [----------] 6 tests from NeoRadosPools (14744 ms total) 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [----------] Global test environment tear-down 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [==========] 6 tests from 1 test suite ran. (14744 ms total) 2026-03-09T15:56:49.463 INFO:tasks.workunit.client.0.vm01.stdout: pool: [ PASSED ] 6 tests. 2026-03-09T15:56:49.481 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: Running main() from gmock_main.cc 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [==========] Running 16 tests from 2 test suites. 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [----------] Global test environment set-up. 
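The pool run above (NeoRadosPools, 6 tests passed) drives pool listing, lookup, creation, and deletion through the newer asio-based C++ interface. A sketch of the equivalent operations through the classic librados C API, under the same assumptions as the previous sketch; note that pools created this way without an application enabled are what raise the POOL_APP_NOT_ENABLED warnings visible in the cluster log above:

    /*
     * Pool create / lookup / delete through the classic librados C API,
     * mirroring what NeoRadosPools exercises.  The pool name is illustrative.
     */
    #include <assert.h>
    #include <rados/librados.h>

    int main(void)
    {
        rados_t cluster;
        int64_t pool_id;

        assert(rados_create(&cluster, NULL) == 0);
        assert(rados_conf_read_file(cluster, NULL) == 0);
        assert(rados_connect(cluster) == 0);

        assert(rados_pool_create(cluster, "neorados-demo") == 0);  /* cf. PoolCreateDelete */
        pool_id = rados_pool_lookup(cluster, "neorados-demo");     /* cf. PoolLookup */
        assert(pool_id >= 0);
        /* Requires mon_allow_pool_delete=true, as on this test cluster. */
        assert(rados_pool_delete(cluster, "neorados-demo") == 0);  /* cf. PoolDelete */

        rados_shutdown(cluster);
        return 0;
    }
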
2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [----------] 8 tests from LibRadosLock 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusive 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLock.LockExclusive (818 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLock.LockShared 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLock.LockShared (12 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusiveDur 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLock.LockExclusiveDur (1026 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLock.LockSharedDur 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLock.LockSharedDur (1056 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLock.LockMayRenew 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLock.LockMayRenew (14 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLock.Unlock 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLock.Unlock (8 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLock.ListLockers 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLock.ListLockers (12 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLock.BreakLock 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLock.BreakLock (5 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [----------] 8 tests from LibRadosLock (2951 ms total) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [----------] 8 tests from LibRadosLockEC 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusive 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusive (1515 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLockEC.LockShared 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLockEC.LockShared (31 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusiveDur 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusiveDur (1145 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLockEC.LockSharedDur 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLockEC.LockSharedDur (1092 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLockEC.LockMayRenew 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLockEC.LockMayRenew (12 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLockEC.Unlock 2026-03-09T15:56:49.482 
INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLockEC.Unlock (12 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLockEC.ListLockers 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLockEC.ListLockers (18 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ RUN ] LibRadosLockEC.BreakLock 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ OK ] LibRadosLockEC.BreakLock (7 ms) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [----------] 8 tests from LibRadosLockEC (3832 ms total) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [----------] Global test environment tear-down 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [==========] 16 tests from 2 test suites ran. (15391 ms total) 2026-03-09T15:56:49.482 INFO:tasks.workunit.client.0.vm01.stdout: api_lock: [ PASSED ] 16 tests. 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:48.667493+0000 mgr.y (mgr.14520) 112 : cluster [DBG] pgmap v81: 744 pgs: 85 creating+peering, 189 unknown, 470 active+clean; 96 MiB data, 570 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:48.667493+0000 mgr.y (mgr.14520) 112 : cluster [DBG] pgmap v81: 744 pgs: 85 creating+peering, 189 unknown, 470 active+clean; 96 MiB data, 570 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.758305+0000 mgr.y (mgr.14520) 113 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.758305+0000 mgr.y (mgr.14520) 113 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.759058+0000 mgr.y (mgr.14520) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.759058+0000 mgr.y (mgr.14520) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.759763+0000 mgr.y (mgr.14520) 115 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.759763+0000 mgr.y (mgr.14520) 115 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.760612+0000 mgr.y (mgr.14520) 116 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.760612+0000 mgr.y (mgr.14520) 116 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.761254+0000 mgr.y (mgr.14520) 117 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.761254+0000 mgr.y (mgr.14520) 117 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.762004+0000 mgr.y (mgr.14520) 118 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.762004+0000 mgr.y (mgr.14520) 118 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:48.762710+0000 osd.2 (osd.2) 3 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:48.762710+0000 osd.2 (osd.2) 3 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.763810+0000 mgr.y (mgr.14520) 119 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.763810+0000 mgr.y (mgr.14520) 119 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:48.764178+0000 osd.2 (osd.2) 4 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:48.764178+0000 osd.2 (osd.2) 4 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.765218+0000 mgr.y (mgr.14520) 120 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.765218+0000 mgr.y (mgr.14520) 120 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.766037+0000 mgr.y (mgr.14520) 121 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.766037+0000 mgr.y (mgr.14520) 121 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.767122+0000 mgr.y (mgr.14520) 122 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.767122+0000 mgr.y (mgr.14520) 122 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.908797+0000 mon.a (mon.0) 1137 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:48.908797+0000 mon.a (mon.0) 1137 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:49.118430+0000 osd.0 (osd.0) 3 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:49.118430+0000 osd.0 (osd.0) 3 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:49.119894+0000 osd.0 (osd.0) 4 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:49.119894+0000 osd.0 (osd.0) 4 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:49.265400+0000 osd.6 (osd.6) 3 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:49.265400+0000 osd.6 (osd.6) 3 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:49.284320+0000 osd.6 (osd.6) 4 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:49.284320+0000 osd.6 (osd.6) 4 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.376067+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.376067+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.376285+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.376285+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.376416+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.376416+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.377214+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.377214+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.377375+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.377375+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.398584+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.398584+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.398705+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.398705+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.398853+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.398853+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.442413+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.442413+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2"}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.442892+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.442892+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.443981+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 
192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.443981+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:49.455745+0000 mon.a (mon.0) 1146 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: cluster 2026-03-09T15:56:49.455745+0000 mon.a (mon.0) 1146 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.470691+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.470691+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.471039+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:49 vm01 bash[20728]: audit 2026-03-09T15:56:49.471039+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:48.667493+0000 mgr.y (mgr.14520) 112 : cluster [DBG] pgmap v81: 744 pgs: 85 creating+peering, 189 unknown, 470 active+clean; 96 MiB data, 570 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:48.667493+0000 mgr.y (mgr.14520) 112 : cluster [DBG] pgmap v81: 744 pgs: 85 creating+peering, 189 unknown, 470 active+clean; 96 MiB data, 570 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.758305+0000 mgr.y (mgr.14520) 113 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.758305+0000 mgr.y (mgr.14520) 113 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.759058+0000 mgr.y (mgr.14520) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.759058+0000 mgr.y (mgr.14520) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.759763+0000 mgr.y (mgr.14520) 115 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.759763+0000 mgr.y (mgr.14520) 115 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.760612+0000 mgr.y (mgr.14520) 116 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.760612+0000 mgr.y (mgr.14520) 116 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.761254+0000 mgr.y (mgr.14520) 117 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.761254+0000 mgr.y (mgr.14520) 117 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.762004+0000 mgr.y (mgr.14520) 118 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.762004+0000 mgr.y (mgr.14520) 118 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:48.762710+0000 osd.2 (osd.2) 3 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:48.762710+0000 osd.2 (osd.2) 3 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.763810+0000 mgr.y (mgr.14520) 119 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.763810+0000 mgr.y (mgr.14520) 119 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:48.764178+0000 osd.2 (osd.2) 4 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:48.764178+0000 osd.2 (osd.2) 4 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.765218+0000 mgr.y (mgr.14520) 120 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.765218+0000 mgr.y (mgr.14520) 120 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.766037+0000 mgr.y (mgr.14520) 121 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.766037+0000 mgr.y (mgr.14520) 121 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.767122+0000 mgr.y (mgr.14520) 122 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.767122+0000 mgr.y (mgr.14520) 122 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.908797+0000 mon.a (mon.0) 1137 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:48.908797+0000 mon.a (mon.0) 1137 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:49.118430+0000 osd.0 (osd.0) 3 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:49.118430+0000 osd.0 (osd.0) 3 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:49.119894+0000 osd.0 (osd.0) 4 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:49.119894+0000 osd.0 (osd.0) 4 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:49.265400+0000 osd.6 (osd.6) 3 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:49.265400+0000 osd.6 (osd.6) 3 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:49.284320+0000 osd.6 (osd.6) 4 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:49.284320+0000 osd.6 (osd.6) 4 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.376067+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.376067+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.376285+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.376285+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.376416+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.376416+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.377214+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:50.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.377214+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.377375+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.377375+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.398584+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.398584+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.398705+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.398705+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.398853+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.398853+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.442413+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2"}]: dispatch 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.442413+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2"}]: dispatch 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.442892+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.442892+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.443981+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.443981+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:49.455745+0000 mon.a (mon.0) 1146 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: cluster 2026-03-09T15:56:49.455745+0000 mon.a (mon.0) 1146 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.470691+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.470691+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.471039+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:50.180 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:49 vm01 bash[28152]: audit 2026-03-09T15:56:49.471039+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:48.667493+0000 mgr.y (mgr.14520) 112 : cluster [DBG] pgmap v81: 744 pgs: 85 creating+peering, 189 unknown, 470 active+clean; 96 MiB data, 570 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:48.667493+0000 mgr.y (mgr.14520) 112 : cluster [DBG] pgmap v81: 744 pgs: 85 creating+peering, 189 unknown, 470 active+clean; 96 MiB data, 570 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.758305+0000 mgr.y (mgr.14520) 113 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.758305+0000 mgr.y (mgr.14520) 113 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.759058+0000 mgr.y (mgr.14520) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.759058+0000 mgr.y (mgr.14520) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.759763+0000 mgr.y (mgr.14520) 115 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.759763+0000 mgr.y (mgr.14520) 115 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.760612+0000 mgr.y (mgr.14520) 116 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.760612+0000 mgr.y (mgr.14520) 116 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.761254+0000 mgr.y (mgr.14520) 117 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.761254+0000 mgr.y (mgr.14520) 117 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.762004+0000 mgr.y (mgr.14520) 118 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.762004+0000 mgr.y (mgr.14520) 118 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:48.762710+0000 osd.2 (osd.2) 3 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:48.762710+0000 osd.2 (osd.2) 3 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.763810+0000 mgr.y (mgr.14520) 119 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.763810+0000 mgr.y (mgr.14520) 119 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:48.764178+0000 osd.2 (osd.2) 4 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:48.764178+0000 osd.2 (osd.2) 4 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.765218+0000 mgr.y (mgr.14520) 120 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.765218+0000 mgr.y (mgr.14520) 120 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.766037+0000 mgr.y (mgr.14520) 121 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.766037+0000 mgr.y (mgr.14520) 121 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.767122+0000 mgr.y (mgr.14520) 122 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.767122+0000 mgr.y (mgr.14520) 122 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.908797+0000 mon.a (mon.0) 1137 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:48.908797+0000 mon.a (mon.0) 1137 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:50.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:49.118430+0000 osd.0 (osd.0) 3 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:49.118430+0000 osd.0 (osd.0) 3 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:49.119894+0000 osd.0 (osd.0) 4 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:49.119894+0000 osd.0 (osd.0) 4 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:49.265400+0000 osd.6 (osd.6) 3 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:49.265400+0000 osd.6 (osd.6) 3 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:49.284320+0000 osd.6 (osd.6) 4 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:49.284320+0000 osd.6 (osd.6) 4 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.376067+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.376067+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm01-59988-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.376285+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.376285+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? 192.168.123.101:0/804873066' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm01-59715-10"}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.376416+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.376416+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm01-59735-10"}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.377214+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.377214+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm01-60504-2"}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.377375+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.377375+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.398584+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.398584+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.398705+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.398705+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.398853+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.398853+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm01-59602-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.442413+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2"}]: dispatch 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.442413+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2"}]: dispatch 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.442892+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.442892+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 
192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.443981+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.443981+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:49.455745+0000 mon.a (mon.0) 1146 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: cluster 2026-03-09T15:56:49.455745+0000 mon.a (mon.0) 1146 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.470691+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.470691+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.471039+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:50.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:49 vm09 bash[22983]: audit 2026-03-09T15:56:49.471039+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.215712+0000 osd.4 (osd.4) 3 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.215712+0000 osd.4 (osd.4) 3 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.218012+0000 osd.4 (osd.4) 4 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.218012+0000 osd.4 (osd.4) 4 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.510953+0000 osd.5 (osd.5) 3 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.510953+0000 osd.5 (osd.5) 3 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.512851+0000 osd.5 (osd.5) 4 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.512851+0000 osd.5 (osd.5) 4 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.752070+0000 osd.1 (osd.1) 3 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.752070+0000 osd.1 (osd.1) 3 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.752917+0000 osd.1 (osd.1) 4 : cluster [DBG] 16.9 deep-scrub ok 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:49.752917+0000 osd.1 (osd.1) 4 : cluster [DBG] 16.9 deep-scrub ok 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:49.909993+0000 mon.a (mon.0) 1149 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:49.909993+0000 mon.a (mon.0) 1149 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:50.144176+0000 osd.0 (osd.0) 5 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:50.144176+0000 osd.0 (osd.0) 5 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:50.145190+0000 osd.0 (osd.0) 6 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:50.145190+0000 osd.0 (osd.0) 6 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.378953+0000 mon.a (mon.0) 1150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.378953+0000 mon.a (mon.0) 1150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.378999+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.378999+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.379026+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.379026+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:50.402042+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: cluster 2026-03-09T15:56:50.402042+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.404284+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.404284+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.423318+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.423318+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.426312+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.426312+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.427059+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.427059+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 
192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.432925+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.432925+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.433377+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.433377+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.433476+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:50 vm01 bash[20728]: audit 2026-03-09T15:56:50.433476+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.215712+0000 osd.4 (osd.4) 3 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.215712+0000 osd.4 (osd.4) 3 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.218012+0000 osd.4 (osd.4) 4 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.218012+0000 osd.4 (osd.4) 4 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.510953+0000 osd.5 (osd.5) 3 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.510953+0000 osd.5 (osd.5) 3 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.512851+0000 osd.5 (osd.5) 4 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.512851+0000 osd.5 (osd.5) 4 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.752070+0000 osd.1 (osd.1) 3 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.752070+0000 osd.1 (osd.1) 3 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.752917+0000 osd.1 (osd.1) 4 : cluster [DBG] 16.9 deep-scrub ok 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:49.752917+0000 osd.1 (osd.1) 4 : cluster [DBG] 16.9 deep-scrub ok 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:49.909993+0000 mon.a (mon.0) 1149 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:49.909993+0000 mon.a (mon.0) 1149 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:50.144176+0000 osd.0 (osd.0) 5 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:50.144176+0000 osd.0 (osd.0) 5 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:50.145190+0000 osd.0 (osd.0) 6 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:50.145190+0000 osd.0 (osd.0) 6 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.378953+0000 mon.a (mon.0) 1150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.378953+0000 mon.a (mon.0) 1150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.378999+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.378999+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.379026+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.379026+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:50.402042+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: cluster 2026-03-09T15:56:50.402042+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.404284+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.404284+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.423318+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.423318+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.426312+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.426312+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.427059+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.427059+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 
192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.432925+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.432925+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.433377+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.433377+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.433476+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:50 vm01 bash[28152]: audit 2026-03-09T15:56:50.433476+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.215712+0000 osd.4 (osd.4) 3 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:56:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.215712+0000 osd.4 (osd.4) 3 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:56:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.218012+0000 osd.4 (osd.4) 4 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:56:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.218012+0000 osd.4 (osd.4) 4 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:56:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.510953+0000 osd.5 (osd.5) 3 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:56:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.510953+0000 osd.5 (osd.5) 3 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.512851+0000 osd.5 (osd.5) 4 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.512851+0000 osd.5 (osd.5) 4 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.752070+0000 osd.1 (osd.1) 3 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.752070+0000 osd.1 (osd.1) 3 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.752917+0000 osd.1 (osd.1) 4 : cluster [DBG] 16.9 deep-scrub ok 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:49.752917+0000 osd.1 (osd.1) 4 : cluster [DBG] 16.9 deep-scrub ok 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:49.909993+0000 mon.a (mon.0) 1149 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:49.909993+0000 mon.a (mon.0) 1149 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:50.144176+0000 osd.0 (osd.0) 5 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:50.144176+0000 osd.0 (osd.0) 5 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:50.145190+0000 osd.0 (osd.0) 6 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:50.145190+0000 osd.0 (osd.0) 6 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.378953+0000 mon.a (mon.0) 1150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.378953+0000 mon.a (mon.0) 1150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm01-59640-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.378999+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.378999+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm01-60504-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.379026+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.379026+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm01-59772-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:50.402042+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: cluster 2026-03-09T15:56:50.402042+0000 mon.a (mon.0) 1153 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.404284+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.404284+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.423318+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.423318+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.426312+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.426312+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.427059+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.427059+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 
192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.432925+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.432925+0000 mon.a (mon.0) 1154 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.433377+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.433377+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.433476+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:50 vm09 bash[22983]: audit 2026-03-09T15:56:50.433476+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: Running main() from gmock_main.cc 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [==========] Running 16 tests from 2 test suites. 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [----------] Global test environment set-up. 
2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotify 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: notify 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotify (1628 ms) 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotifyTimeout 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotifyTimeout (24 ms) 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP (1652 ms total) 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: notify 2026-03-09T15:56:51.790 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0 (430 ms) 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: notify 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 (3691 ms) 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0 (66 ms) 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1 (97 ms) 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: handle_notify cookie 94547066635344 notify_id 339302416385 notifier_gid 25037 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0 (24 ms) 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: handle_notify cookie 94547066617328 notify_id 339302416384 notifier_gid 25037 2026-03-09T15:56:51.791 
INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1 (6 ms) 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: handle_notify cookie 94547066652896 notify_id 339302416386 notifier_gid 25037 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0 (6 ms) 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: handle_notify cookie 94547066652896 notify_id 339302416385 notifier_gid 25037 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1 (5 ms) 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: handle_notify cookie 94547068220560 notify_id 339302416386 notifier_gid 25037 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0 (4 ms) 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: handle_notify cookie 94547066652896 notify_id 339302416387 notifier_gid 25037 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1 (5 ms) 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: trying... 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: handle_notify cookie 94547066652896 notify_id 339302416387 notifier_gid 25037 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: timed out 2026-03-09T15:56:51.791 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: flushing 2026-03-09T15:56:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: cluster 2026-03-09T15:56:50.668057+0000 mgr.y (mgr.14520) 123 : cluster [DBG] pgmap v84: 568 pgs: 112 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 32 MiB/s wr, 205 op/s 2026-03-09T15:56:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: cluster 2026-03-09T15:56:50.668057+0000 mgr.y (mgr.14520) 123 : cluster [DBG] pgmap v84: 568 pgs: 112 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 32 MiB/s wr, 205 op/s 2026-03-09T15:56:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:50.910943+0000 mon.a (mon.0) 1157 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:50.910943+0000 mon.a (mon.0) 1157 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: cluster 2026-03-09T15:56:51.193163+0000 osd.0 (osd.0) 7 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: cluster 2026-03-09T15:56:51.193163+0000 osd.0 (osd.0) 7 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: cluster 2026-03-09T15:56:51.194981+0000 osd.0 (osd.0) 8 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: cluster 2026-03-09T15:56:51.194981+0000 osd.0 (osd.0) 8 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.390957+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.390957+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.390993+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.390993+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.419072+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.419072+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 
192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: cluster 2026-03-09T15:56:51.433942+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: cluster 2026-03-09T15:56:51.433942+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.437589+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.101:0/1992454286' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.437589+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.101:0/1992454286' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: cluster 2026-03-09T15:56:50.668057+0000 mgr.y (mgr.14520) 123 : cluster [DBG] pgmap v84: 568 pgs: 112 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 32 MiB/s wr, 205 op/s 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: cluster 2026-03-09T15:56:50.668057+0000 mgr.y (mgr.14520) 123 : cluster [DBG] pgmap v84: 568 pgs: 112 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 32 MiB/s wr, 205 op/s 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:50.910943+0000 mon.a (mon.0) 1157 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:50.910943+0000 mon.a (mon.0) 1157 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: cluster 2026-03-09T15:56:51.193163+0000 osd.0 (osd.0) 7 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: cluster 2026-03-09T15:56:51.193163+0000 osd.0 (osd.0) 7 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: cluster 2026-03-09T15:56:51.194981+0000 osd.0 (osd.0) 8 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: cluster 2026-03-09T15:56:51.194981+0000 osd.0 (osd.0) 8 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.390957+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.390957+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.390993+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.390993+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.419072+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.419072+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: cluster 2026-03-09T15:56:51.433942+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: cluster 2026-03-09T15:56:51.433942+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.437589+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.101:0/1992454286' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.437589+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.101:0/1992454286' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.437772+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 
192.168.123.101:0/2466239058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.437772+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.101:0/2466239058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.437907+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.101:0/3797602555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.437907+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.101:0/3797602555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.439840+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.101:0/1289362233' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.439840+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.101:0/1289362233' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.443394+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.443394+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.470195+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.470195+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.470448+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.470448+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.470566+0000 mon.a (mon.0) 1163 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.470566+0000 mon.a (mon.0) 1163 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.470652+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.470652+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.486792+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.486792+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.486946+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:51 vm01 bash[28152]: audit 2026-03-09T15:56:51.486946+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.437772+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.101:0/2466239058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.437772+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.101:0/2466239058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.437907+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.101:0/3797602555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.437907+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.101:0/3797602555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.439840+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.101:0/1289362233' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.439840+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.101:0/1289362233' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.443394+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.443394+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.470195+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.470195+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.470448+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.470448+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.470566+0000 mon.a (mon.0) 1163 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.470566+0000 mon.a (mon.0) 1163 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.470652+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.470652+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.486792+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.486792+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.486946+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:51 vm01 bash[20728]: audit 2026-03-09T15:56:51.486946+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: cluster 2026-03-09T15:56:50.668057+0000 mgr.y (mgr.14520) 123 : cluster [DBG] pgmap v84: 568 pgs: 112 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 32 MiB/s wr, 205 op/s 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: cluster 2026-03-09T15:56:50.668057+0000 mgr.y (mgr.14520) 123 : cluster [DBG] pgmap v84: 568 pgs: 112 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 32 MiB/s wr, 205 op/s 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:50.910943+0000 mon.a (mon.0) 1157 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:50.910943+0000 mon.a (mon.0) 1157 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: cluster 2026-03-09T15:56:51.193163+0000 osd.0 (osd.0) 7 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: cluster 2026-03-09T15:56:51.193163+0000 osd.0 (osd.0) 7 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: cluster 2026-03-09T15:56:51.194981+0000 osd.0 (osd.0) 8 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: cluster 2026-03-09T15:56:51.194981+0000 osd.0 (osd.0) 8 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.390957+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.390957+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.390993+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.390993+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.419072+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.419072+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: cluster 2026-03-09T15:56:51.433942+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: cluster 2026-03-09T15:56:51.433942+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.437589+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.101:0/1992454286' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.437589+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.101:0/1992454286' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.437772+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.101:0/2466239058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.437772+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.101:0/2466239058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.437907+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 
192.168.123.101:0/3797602555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.437907+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.101:0/3797602555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.439840+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.101:0/1289362233' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.439840+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.101:0/1289362233' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.443394+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.443394+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.470195+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.470195+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.470448+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.470448+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.470566+0000 mon.a (mon.0) 1163 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.470566+0000 mon.a (mon.0) 1163 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.470652+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.470652+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.486792+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.486792+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.486946+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:52.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:51 vm09 bash[22983]: audit 2026-03-09T15:56:51.486946+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:51.913375+0000 mon.a (mon.0) 1167 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:51.913375+0000 mon.a (mon.0) 1167 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395286+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395286+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395450+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395450+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395490+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395490+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395530+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395530+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395564+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395564+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395600+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395600+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395633+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.395633+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.408563+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.408563+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: cluster 2026-03-09T15:56:52.411312+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: cluster 2026-03-09T15:56:52.411312+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.411531+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.411531+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.423500+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 
192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.423500+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.423637+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.423637+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.450454+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.450454+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.450612+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.450612+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.450760+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.450760+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.450821+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:53 vm01 bash[20728]: audit 2026-03-09T15:56:52.450821+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:56:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:56:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:51.913375+0000 mon.a (mon.0) 1167 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:51.913375+0000 mon.a (mon.0) 1167 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395286+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395286+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395450+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395450+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395490+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395490+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395530+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395530+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395564+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395564+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395600+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395600+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395633+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.395633+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.408563+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.408563+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 
192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: cluster 2026-03-09T15:56:52.411312+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: cluster 2026-03-09T15:56:52.411312+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.411531+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.411531+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.423500+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.423500+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.423637+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.423637+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.450454+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.450454+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.450612+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.450612+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.450760+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.450760+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.450821+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:53 vm01 bash[28152]: audit 2026-03-09T15:56:52.450821+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:51.913375+0000 mon.a (mon.0) 1167 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:51.913375+0000 mon.a (mon.0) 1167 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395286+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395286+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm01-60504-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395450+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395450+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395490+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395490+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395530+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395530+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395564+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395564+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm01-59908-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395600+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395600+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395633+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.395633+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm01-59602-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.408563+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.408563+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.101:0/1174313585' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: cluster 2026-03-09T15:56:52.411312+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: cluster 2026-03-09T15:56:52.411312+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.411531+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.411531+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.423500+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.423500+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.101:0/2153034024' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.423637+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.423637+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.450454+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.450454+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:56:53.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.450612+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.450612+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]: dispatch 2026-03-09T15:56:53.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.450760+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.450760+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:53.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.450821+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:53 vm09 bash[22983]: audit 2026-03-09T15:56:52.450821+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:53.472 INFO:tasks.workunit.client.0.vm01.stdout: api_ api_watch_notify: Running main() from gmock_main.cc 2026-03-09T15:56:53.472 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [==========] Running 11 tests from 2 test suites. 2026-03-09T15:56:53.472 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [----------] Global test environment set-up. 
2026-03-09T15:56:53.472 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify 2026-03-09T15:56:53.472 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify_test_cb 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify (785 ms) 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch2Delete 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94887414203168 err -107 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: waiting up to 300 for disconnect notification ... 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch2Delete (48 ms) 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94887414203168 err -107 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: waiting up to 300 for disconnect notification ... 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete (28 ms) 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_cb from 24598 notify_id 292057776129 cookie 94887414217312 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2 (38 ms) 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchNotify2 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_cb from 24598 notify_id 292057776128 cookie 94887414233152 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchNotify2 (14 ms) 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioNotify 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_cb from 24598 notify_id 292057776129 cookie 94887414248272 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioNotify (7 ms) 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Multi 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_cb from 24598 notify_id 292057776129 cookie 94887414248272 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_cb from 24598 notify_id 292057776129 cookie 94887414263680 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Multi (21 ms) 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Timeout 
2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_cb from 24598 notify_id 292057776130 cookie 94887414248272 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_cb from 24598 notify_id 296352743427 cookie 94887414248272 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Timeout (3185 ms) 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch3Timeout 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: waiting up to 1024 for osd to time us out ... 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94887414248272 err -107 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_cb from 24598 notify_id 322122547203 cookie 94887414248272 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch3Timeout (5039 ms) 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete2 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: waiting up to 30 for disconnect notification ... 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94887414264928 err -107 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete2 (1085 ms) 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify (10250 ms total) 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotifyEC.WatchNotify 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: watch_notify_test_cb 2026-03-09T15:56:53.473 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ OK ] LibRadosWatchNotifyEC.WatchNotify (1154 ms) 2026-03-09T15:56:53.474 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC (1154 ms total) 2026-03-09T15:56:53.474 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: 2026-03-09T15:56:53.474 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [----------] Global test environment tear-down 2026-03-09T15:56:53.474 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [==========] 11 tests from 2 test suites ran. (19170 ms total) 2026-03-09T15:56:53.474 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify: [ PASSED ] 11 tests. 
2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: cluster 2026-03-09T15:56:52.668654+0000 mgr.y (mgr.14520) 124 : cluster [DBG] pgmap v87: 784 pgs: 328 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 6.0 MiB/s rd, 6.0 MiB/s wr, 193 op/s 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: cluster 2026-03-09T15:56:52.668654+0000 mgr.y (mgr.14520) 124 : cluster [DBG] pgmap v87: 784 pgs: 328 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 6.0 MiB/s rd, 6.0 MiB/s wr, 193 op/s 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.002901+0000 mon.a (mon.0) 1180 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.002901+0000 mon.a (mon.0) 1180 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: cluster 2026-03-09T15:56:53.358914+0000 mon.a (mon.0) 1181 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: cluster 2026-03-09T15:56:53.358914+0000 mon.a (mon.0) 1181 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.458978+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.458978+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.459102+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.459102+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.459134+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.459134+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.459235+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.459235+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: cluster 2026-03-09T15:56:53.477360+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: cluster 2026-03-09T15:56:53.477360+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.506970+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.506970+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.511301+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.511301+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.729450+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.729450+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.729945+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:54 vm09 bash[22983]: audit 2026-03-09T15:56:53.729945+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: cluster 2026-03-09T15:56:52.668654+0000 mgr.y (mgr.14520) 124 : cluster [DBG] pgmap v87: 784 pgs: 328 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 6.0 MiB/s rd, 6.0 MiB/s wr, 193 op/s 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: cluster 2026-03-09T15:56:52.668654+0000 mgr.y (mgr.14520) 124 : cluster [DBG] pgmap v87: 784 pgs: 328 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 6.0 MiB/s rd, 6.0 MiB/s wr, 193 op/s 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.002901+0000 mon.a (mon.0) 1180 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.002901+0000 mon.a (mon.0) 1180 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: cluster 2026-03-09T15:56:53.358914+0000 mon.a (mon.0) 1181 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: cluster 2026-03-09T15:56:53.358914+0000 mon.a (mon.0) 1181 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.458978+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.458978+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.459102+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.459102+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.459134+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.459134+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.459235+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.459235+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: cluster 2026-03-09T15:56:53.477360+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: cluster 2026-03-09T15:56:53.477360+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.506970+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.506970+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.511301+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.511301+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.729450+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.729450+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.729945+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:54 vm01 bash[28152]: audit 2026-03-09T15:56:53.729945+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: cluster 2026-03-09T15:56:52.668654+0000 mgr.y (mgr.14520) 124 : cluster [DBG] pgmap v87: 784 pgs: 328 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 6.0 MiB/s rd, 6.0 MiB/s wr, 193 op/s 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: cluster 2026-03-09T15:56:52.668654+0000 mgr.y (mgr.14520) 124 : cluster [DBG] pgmap v87: 784 pgs: 328 unknown, 456 active+clean; 120 MiB data, 659 MiB used, 159 GiB / 160 GiB avail; 6.0 MiB/s rd, 6.0 MiB/s wr, 193 op/s 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.002901+0000 mon.a (mon.0) 1180 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.002901+0000 mon.a (mon.0) 1180 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: cluster 2026-03-09T15:56:53.358914+0000 mon.a (mon.0) 1181 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: cluster 2026-03-09T15:56:53.358914+0000 mon.a (mon.0) 1181 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.458978+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.458978+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm01-59772-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.459102+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.459102+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm01-59988-12"}]': finished 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.459134+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.459134+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.459235+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.459235+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: cluster 2026-03-09T15:56:53.477360+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: cluster 2026-03-09T15:56:53.477360+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.506970+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.506970+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 
192.168.123.101:0/3037667596' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.511301+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.511301+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]: dispatch 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.729450+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.729450+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.729945+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:54 vm01 bash[20728]: audit 2026-03-09T15:56:53.729945+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:56:54.541 INFO:tasks.workunit.client.0.vm01.stdout:UN ] LibRadosIoECPP.RoundTripPP2 2026-03-09T15:56:54.541 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP2 (25 ms) 2026-03-09T15:56:54.541 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.OverlappingWriteRoundTripPP 2026-03-09T15:56:54.541 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.OverlappingWriteRoundTripPP (95 ms) 2026-03-09T15:56:54.541 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP (9 ms) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP2 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP2 (5 ms) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.AppendRoundTripPP 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.AppendRoundTripPP (8 ms) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.TruncTestPP 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.TruncTestPP (6 ms) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RemoveTestPP 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RemoveTestPP (6 ms) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.XattrsRoundTripPP 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrsRoundTripPP (7 ms) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RmXattrPP 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RmXattrPP (15 ms) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CrcZeroWrite 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CrcZeroWrite (6604 ms) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.XattrListPP 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrListPP (1123 ms) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtPP 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtPP (5 ms) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtDNEPP 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtDNEPP (5 ms) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtMismatchPP 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtMismatchPP (4 ms) 2026-03-09T15:56:54.542 
INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP (10295 ms total) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [----------] Global test environment tear-down 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [==========] 39 tests from 2 test suites ran. (20490 ms total) 2026-03-09T15:56:54.542 INFO:tasks.workunit.client.0.vm01.stdout: api_io_pp: [ PASSED ] 39 tests. 2026-03-09T15:56:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.004214+0000 mon.a (mon.0) 1189 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.004214+0000 mon.a (mon.0) 1189 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: cluster 2026-03-09T15:56:54.461085+0000 mon.a (mon.0) 1190 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T15:56:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: cluster 2026-03-09T15:56:54.461085+0000 mon.a (mon.0) 1190 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T15:56:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.466538+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.466538+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.466595+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.466595+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: cluster 2026-03-09T15:56:54.514665+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: cluster 2026-03-09T15:56:54.514665+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.538699+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 
192.168.123.101:0/3359609813' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.538699+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.101:0/3359609813' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.547296+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.547296+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.550558+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.550558+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.554535+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.554535+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.556246+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.101:0/3112169746' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.556246+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 
192.168.123.101:0/3112169746' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.561063+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:54.561063+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:55.005798+0000 mon.a (mon.0) 1197 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:55.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:55 vm09 bash[22983]: audit 2026-03-09T15:56:55.005798+0000 mon.a (mon.0) 1197 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.004214+0000 mon.a (mon.0) 1189 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.004214+0000 mon.a (mon.0) 1189 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: cluster 2026-03-09T15:56:54.461085+0000 mon.a (mon.0) 1190 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: cluster 2026-03-09T15:56:54.461085+0000 mon.a (mon.0) 1190 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.466538+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.466538+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.466595+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.466595+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: cluster 2026-03-09T15:56:54.514665+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: cluster 2026-03-09T15:56:54.514665+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.538699+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.101:0/3359609813' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.538699+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.101:0/3359609813' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.547296+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.547296+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.550558+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.550558+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.554535+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.554535+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.556246+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.101:0/3112169746' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.556246+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.101:0/3112169746' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.561063+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:54.561063+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:55.005798+0000 mon.a (mon.0) 1197 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:55 vm01 bash[20728]: audit 2026-03-09T15:56:55.005798+0000 mon.a (mon.0) 1197 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.004214+0000 mon.a (mon.0) 1189 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.004214+0000 mon.a (mon.0) 1189 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: cluster 2026-03-09T15:56:54.461085+0000 mon.a (mon.0) 1190 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: cluster 2026-03-09T15:56:54.461085+0000 mon.a (mon.0) 1190 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.466538+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.466538+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm01-59640-23"}]': finished 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.466595+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.466595+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: cluster 2026-03-09T15:56:54.514665+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: cluster 2026-03-09T15:56:54.514665+0000 mon.a (mon.0) 1193 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.538699+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.101:0/3359609813' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.538699+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.101:0/3359609813' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.547296+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.547296+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.550558+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.550558+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.554535+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.554535+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.556246+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.101:0/3112169746' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.556246+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.101:0/3112169746' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.561063+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:54.561063+0000 mon.a (mon.0) 1196 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:55.005798+0000 mon.a (mon.0) 1197 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:55.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:55 vm01 bash[28152]: audit 2026-03-09T15:56:55.005798+0000 mon.a (mon.0) 1197 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:56.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:56:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: cluster 2026-03-09T15:56:54.669229+0000 mgr.y (mgr.14520) 125 : cluster [DBG] pgmap v90: 688 pgs: 4 creating+activating, 160 unknown, 524 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: cluster 2026-03-09T15:56:54.669229+0000 mgr.y (mgr.14520) 125 : cluster [DBG] pgmap v90: 688 pgs: 4 creating+activating, 160 unknown, 524 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:55.519179+0000 mon.a (mon.0) 1198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:55.519179+0000 mon.a (mon.0) 1198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:55.519215+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:55.519215+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:55.519243+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:55.519243+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: cluster 2026-03-09T15:56:55.547816+0000 mon.a (mon.0) 1201 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: cluster 2026-03-09T15:56:55.547816+0000 mon.a (mon.0) 1201 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:55.904807+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:55.904807+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:55.905414+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:55.905414+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:56.009331+0000 mon.a (mon.0) 1203 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:56 vm09 bash[22983]: audit 2026-03-09T15:56:56.009331+0000 mon.a (mon.0) 1203 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: cluster 2026-03-09T15:56:54.669229+0000 mgr.y (mgr.14520) 125 : cluster [DBG] pgmap v90: 688 pgs: 4 creating+activating, 160 unknown, 524 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: cluster 2026-03-09T15:56:54.669229+0000 mgr.y (mgr.14520) 125 : cluster [DBG] pgmap v90: 688 pgs: 4 creating+activating, 160 unknown, 524 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:55.519179+0000 mon.a (mon.0) 1198 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:55.519179+0000 mon.a (mon.0) 1198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:55.519215+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:55.519215+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:55.519243+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:55.519243+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: cluster 2026-03-09T15:56:55.547816+0000 mon.a (mon.0) 1201 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: cluster 2026-03-09T15:56:55.547816+0000 mon.a (mon.0) 1201 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:55.904807+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:55.904807+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:55.905414+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:55.905414+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:56.009331+0000 mon.a (mon.0) 1203 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:56 vm01 bash[20728]: audit 2026-03-09T15:56:56.009331+0000 mon.a (mon.0) 1203 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: cluster 2026-03-09T15:56:54.669229+0000 mgr.y (mgr.14520) 125 : cluster [DBG] pgmap v90: 688 pgs: 4 creating+activating, 160 unknown, 524 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: cluster 2026-03-09T15:56:54.669229+0000 mgr.y (mgr.14520) 125 : cluster [DBG] pgmap v90: 688 pgs: 4 creating+activating, 160 unknown, 524 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:55.519179+0000 mon.a (mon.0) 1198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:55.519179+0000 mon.a (mon.0) 1198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-60179-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:55.519215+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:55.519215+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:55.519243+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:55.519243+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm01-59602-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:56.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: cluster 2026-03-09T15:56:55.547816+0000 mon.a (mon.0) 1201 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T15:56:56.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: cluster 2026-03-09T15:56:55.547816+0000 mon.a (mon.0) 1201 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T15:56:56.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:55.904807+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:55.904807+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:55.905414+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:55.905414+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:56:56.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:56.009331+0000 mon.a (mon.0) 1203 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:56.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:56 vm01 bash[28152]: audit 2026-03-09T15:56:56.009331+0000 mon.a (mon.0) 1203 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.242620+0000 mgr.y (mgr.14520) 126 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.242620+0000 mgr.y (mgr.14520) 126 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.524485+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.524485+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: cluster 2026-03-09T15:56:56.530724+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: cluster 2026-03-09T15:56:56.530724+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.569451+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.101:0/531540639' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.569451+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.101:0/531540639' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.579079+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 192.168.123.101:0/1825200401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.579079+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 192.168.123.101:0/1825200401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.630308+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.630308+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.632951+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:56.632951+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: cluster 2026-03-09T15:56:56.670755+0000 mgr.y (mgr.14520) 127 : cluster [DBG] pgmap v93: 656 pgs: 4 creating+activating, 160 unknown, 492 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: cluster 2026-03-09T15:56:56.670755+0000 mgr.y (mgr.14520) 127 : cluster [DBG] pgmap v93: 656 pgs: 4 creating+activating, 160 unknown, 492 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:57.010938+0000 mon.a (mon.0) 1209 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:57 vm09 bash[22983]: audit 2026-03-09T15:56:57.010938+0000 mon.a (mon.0) 1209 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.242620+0000 mgr.y (mgr.14520) 126 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.242620+0000 mgr.y (mgr.14520) 126 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.524485+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.524485+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: cluster 2026-03-09T15:56:56.530724+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: cluster 2026-03-09T15:56:56.530724+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.569451+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.101:0/531540639' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.569451+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 
192.168.123.101:0/531540639' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.579079+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 192.168.123.101:0/1825200401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.579079+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 192.168.123.101:0/1825200401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.630308+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.630308+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.632951+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:56.632951+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: cluster 2026-03-09T15:56:56.670755+0000 mgr.y (mgr.14520) 127 : cluster [DBG] pgmap v93: 656 pgs: 4 creating+activating, 160 unknown, 492 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:57.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: cluster 2026-03-09T15:56:56.670755+0000 mgr.y (mgr.14520) 127 : cluster [DBG] pgmap v93: 656 pgs: 4 creating+activating, 160 unknown, 492 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:57.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:57.010938+0000 mon.a (mon.0) 1209 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:57.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:57 vm01 bash[28152]: audit 2026-03-09T15:56:57.010938+0000 mon.a (mon.0) 1209 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.242620+0000 mgr.y (mgr.14520) 126 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.242620+0000 mgr.y (mgr.14520) 126 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.524485+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.524485+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: cluster 2026-03-09T15:56:56.530724+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: cluster 2026-03-09T15:56:56.530724+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.569451+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.101:0/531540639' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.569451+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.101:0/531540639' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.579079+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 192.168.123.101:0/1825200401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.579079+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 192.168.123.101:0/1825200401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.630308+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.630308+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.632951+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:56.632951+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]: dispatch 2026-03-09T15:56:57.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: cluster 2026-03-09T15:56:56.670755+0000 mgr.y (mgr.14520) 127 : cluster [DBG] pgmap v93: 656 pgs: 4 creating+activating, 160 unknown, 492 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:57.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: cluster 2026-03-09T15:56:56.670755+0000 mgr.y (mgr.14520) 127 : cluster [DBG] pgmap v93: 656 pgs: 4 creating+activating, 160 unknown, 492 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 6 op/s 2026-03-09T15:56:57.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:57.010938+0000 mon.a (mon.0) 1209 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:57.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:57 vm01 bash[20728]: audit 2026-03-09T15:56:57.010938+0000 mon.a (mon.0) 1209 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:57.528702+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.101:0/531540639' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:57.528702+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.101:0/531540639' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:57.528746+0000 mon.a (mon.0) 1211 : audit [INF] from='client.? 
192.168.123.101:0/1825200401' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:57.528746+0000 mon.a (mon.0) 1211 : audit [INF] from='client.? 192.168.123.101:0/1825200401' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:57.528768+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:57.528768+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: cluster 2026-03-09T15:56:57.538591+0000 mon.a (mon.0) 1213 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: cluster 2026-03-09T15:56:57.538591+0000 mon.a (mon.0) 1213 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:57.547755+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.101:0/3589624963' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:57.547755+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.101:0/3589624963' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:57.570151+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:57.570151+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:58.019479+0000 mon.a (mon.0) 1215 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:58 vm09 bash[22983]: audit 2026-03-09T15:56:58.019479+0000 mon.a (mon.0) 1215 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:57.528702+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.101:0/531540639' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:57.528702+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.101:0/531540639' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:57.528746+0000 mon.a (mon.0) 1211 : audit [INF] from='client.? 192.168.123.101:0/1825200401' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:57.528746+0000 mon.a (mon.0) 1211 : audit [INF] from='client.? 192.168.123.101:0/1825200401' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:57.528768+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:57.528768+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: cluster 2026-03-09T15:56:57.538591+0000 mon.a (mon.0) 1213 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: cluster 2026-03-09T15:56:57.538591+0000 mon.a (mon.0) 1213 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:57.547755+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.101:0/3589624963' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:57.547755+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 
192.168.123.101:0/3589624963' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:57.570151+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:57.570151+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:58.019479+0000 mon.a (mon.0) 1215 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:58 vm01 bash[28152]: audit 2026-03-09T15:56:58.019479+0000 mon.a (mon.0) 1215 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:57.528702+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.101:0/531540639' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:57.528702+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.101:0/531540639' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:57.528746+0000 mon.a (mon.0) 1211 : audit [INF] from='client.? 192.168.123.101:0/1825200401' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:57.528746+0000 mon.a (mon.0) 1211 : audit [INF] from='client.? 192.168.123.101:0/1825200401' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:56:59.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:57.528768+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:59.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:57.528768+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-7"}]': finished 2026-03-09T15:56:59.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: cluster 2026-03-09T15:56:57.538591+0000 mon.a (mon.0) 1213 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T15:56:59.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: cluster 2026-03-09T15:56:57.538591+0000 mon.a (mon.0) 1213 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T15:56:59.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:57.547755+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.101:0/3589624963' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:57.547755+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.101:0/3589624963' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:57.570151+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:57.570151+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:56:59.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:58.019479+0000 mon.a (mon.0) 1215 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:56:59.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:58 vm01 bash[20728]: audit 2026-03-09T15:56:58.019479+0000 mon.a (mon.0) 1215 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:58.594269+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:58.594269+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: cluster 2026-03-09T15:56:58.622731+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T15:57:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: cluster 2026-03-09T15:56:58.622731+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T15:57:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:58.638346+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:58.638346+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: cluster 2026-03-09T15:56:58.671400+0000 mgr.y (mgr.14520) 128 : cluster [DBG] pgmap v96: 712 pgs: 256 unknown, 456 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: cluster 2026-03-09T15:56:58.671400+0000 mgr.y (mgr.14520) 128 : cluster [DBG] pgmap v96: 712 pgs: 256 unknown, 456 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:58.861988+0000 mon.a (mon.0) 1218 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:58.861988+0000 mon.a (mon.0) 1218 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:58.959361+0000 mon.a (mon.0) 1219 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:58.959361+0000 mon.a (mon.0) 1219 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:58.975780+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:58.975780+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.061020+0000 mon.a (mon.0) 1221 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.061020+0000 mon.a (mon.0) 1221 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.598758+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.598758+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: cluster 2026-03-09T15:56:59.602259+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: cluster 2026-03-09T15:56:59.602259+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.656198+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.656198+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.663677+0000 mon.a (mon.0) 1224 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.663677+0000 mon.a (mon.0) 1224 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.684224+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.684224+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.684613+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.101:0/1141098544' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.684613+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.101:0/1141098544' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.689966+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.689966+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.691404+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:56:59 vm09 bash[22983]: audit 2026-03-09T15:56:59.691404+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:58.594269+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:58.594269+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: cluster 2026-03-09T15:56:58.622731+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: cluster 2026-03-09T15:56:58.622731+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:58.638346+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:58.638346+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: cluster 2026-03-09T15:56:58.671400+0000 mgr.y (mgr.14520) 128 : cluster [DBG] pgmap v96: 712 pgs: 256 unknown, 456 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: cluster 2026-03-09T15:56:58.671400+0000 mgr.y (mgr.14520) 128 : cluster [DBG] pgmap v96: 712 pgs: 256 unknown, 456 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:58.861988+0000 mon.a (mon.0) 1218 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:58.861988+0000 mon.a (mon.0) 1218 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:58.959361+0000 mon.a (mon.0) 1219 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:58.959361+0000 mon.a (mon.0) 1219 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:58.975780+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:58.975780+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.061020+0000 mon.a (mon.0) 1221 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.061020+0000 mon.a (mon.0) 1221 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.598758+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.598758+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:58.594269+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:58.594269+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm01-59602-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: cluster 2026-03-09T15:56:58.622731+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T15:57:00.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: cluster 2026-03-09T15:56:58.622731+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:58.638346+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:58.638346+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 
192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: cluster 2026-03-09T15:56:58.671400+0000 mgr.y (mgr.14520) 128 : cluster [DBG] pgmap v96: 712 pgs: 256 unknown, 456 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: cluster 2026-03-09T15:56:58.671400+0000 mgr.y (mgr.14520) 128 : cluster [DBG] pgmap v96: 712 pgs: 256 unknown, 456 active+clean; 144 MiB data, 909 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:58.861988+0000 mon.a (mon.0) 1218 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:58.861988+0000 mon.a (mon.0) 1218 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:58.959361+0000 mon.a (mon.0) 1219 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:58.959361+0000 mon.a (mon.0) 1219 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:58.975780+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:58.975780+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.061020+0000 mon.a (mon.0) 1221 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.061020+0000 mon.a (mon.0) 1221 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.598758+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.598758+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: cluster 2026-03-09T15:56:59.602259+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: cluster 2026-03-09T15:56:59.602259+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.656198+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.656198+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.663677+0000 mon.a (mon.0) 1224 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.663677+0000 mon.a (mon.0) 1224 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.684224+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.684224+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.684613+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.101:0/1141098544' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.684613+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.101:0/1141098544' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.689966+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.689966+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.691404+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:56:59 vm01 bash[20728]: audit 2026-03-09T15:56:59.691404+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: cluster 2026-03-09T15:56:59.602259+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: cluster 2026-03-09T15:56:59.602259+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.656198+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.656198+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.101:0/2737150567' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.663677+0000 mon.a (mon.0) 1224 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.663677+0000 mon.a (mon.0) 1224 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.684224+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.684224+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.684613+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.101:0/1141098544' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.684613+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.101:0/1141098544' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.689966+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.689966+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.691404+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:56:59 vm01 bash[28152]: audit 2026-03-09T15:56:59.691404+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: Running main() from gmock_main.cc 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [==========] Running 3 tests from 1 test suite. 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [----------] Global test environment set-up. 
2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [----------] 3 tests from NeoradosECList 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [ RUN ] NeoradosECList.ListObjects 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [ OK ] NeoradosECList.ListObjects (6579 ms) 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsNS 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsNS (8157 ms) 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsMany 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsMany (11194 ms) 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [----------] 3 tests from NeoradosECList (25930 ms total) 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [----------] Global test environment tear-down 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [==========] 3 tests from 1 test suite ran. (25930 ms total) 2026-03-09T15:57:00.621 INFO:tasks.workunit.client.0.vm01.stdout: ec_list: [ PASSED ] 3 tests. 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.071198+0000 mon.a (mon.0) 1227 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.071198+0000 mon.a (mon.0) 1227 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.606147+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.606147+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.606179+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.606179+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.606207+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.606207+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: cluster 2026-03-09T15:57:00.621936+0000 mon.a (mon.0) 1231 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: cluster 2026-03-09T15:57:00.621936+0000 mon.a (mon.0) 1231 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.740017+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.101:0/3208278701' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.740017+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.101:0/3208278701' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.749942+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:01.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:00 vm09 bash[22983]: audit 2026-03-09T15:57:00.749942+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.071198+0000 mon.a (mon.0) 1227 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.071198+0000 mon.a (mon.0) 1227 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.606147+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.606147+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.606179+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.606179+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.606207+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.606207+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: cluster 2026-03-09T15:57:00.621936+0000 mon.a (mon.0) 1231 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: cluster 2026-03-09T15:57:00.621936+0000 mon.a (mon.0) 1231 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.740017+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.101:0/3208278701' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.740017+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.101:0/3208278701' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.749942+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:00 vm01 bash[28152]: audit 2026-03-09T15:57:00.749942+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.071198+0000 mon.a (mon.0) 1227 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.071198+0000 mon.a (mon.0) 1227 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.606147+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.606147+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm01-60504-3"}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.606179+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.606179+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.606207+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.606207+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm01-59610-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: cluster 2026-03-09T15:57:00.621936+0000 mon.a (mon.0) 1231 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: cluster 2026-03-09T15:57:00.621936+0000 mon.a (mon.0) 1231 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.740017+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.101:0/3208278701' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.740017+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 
192.168.123.101:0/3208278701' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.749942+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:00 vm01 bash[20728]: audit 2026-03-09T15:57:00.749942+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:02.032 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [==========] Running 4 tests from 1 test suite. 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [----------] Global test environment set-up. 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [----------] 4 tests from LibRadosService 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [ RUN ] LibRadosService.RegisterEarly 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [ OK ] LibRadosService.RegisterEarly (5082 ms) 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [ RUN ] LibRadosService.RegisterLate 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [ OK ] LibRadosService.RegisterLate (175 ms) 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [ RUN ] LibRadosService.StatusFormat 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: cluster: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: id: 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: health: HEALTH_WARN 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 14 pool(s) do not have an application enabled 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: services: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 7m) 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: mgr: y(active, since 117s), standbys: x 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: osd: 8 osds: 8 up (since 2m), 8 in (since 3m) 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: laundry: 2 daemons active (1 hosts) 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones) 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: data: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: pools: 29 pools, 820 pgs 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: objects: 218 objects, 456 KiB 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: usage: 239 MiB used, 160 GiB / 160 GiB 
avail 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: pgs: 72.805% pgs unknown 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2.927% pgs not active 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 597 unknown 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 199 active+clean 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 24 creating+peering 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: io: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: client: 255 B/s rd, 12 KiB/s wr, 1 op/s rd, 19 op/s wr 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: cluster: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: id: 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: health: HEALTH_WARN 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 13 pool(s) do not have an application enabled 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: services: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 7m) 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: mgr: y(active, since 119s), standbys: x 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: osd: 8 osds: 8 up (since 2m), 8 in (since 3m) 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: foo: 16 portals active (1 hosts, 3 zones) 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: laundry: 1 daemon active (1 hosts) 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones) 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: data: 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: pools: 28 pools, 788 pgs 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: objects: 285 objects, 464 KiB 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: usage: 377 MiB used, 160 GiB / 160 GiB avail 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: pgs: 34.518% pgs unknown 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 4.061% pgs not active 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 484 active+clean 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 272 unknown 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 32 creating+peering 2026-03-09T15:57:02.033 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2026-03-09T15:57:02.034 INFO:tasks.workunit.client.0.vm01.stdout: api_service: io: 2026-03-09T15:57:02.034 INFO:tasks.workunit.client.0.vm01.stdout: api_service: client: 2.7 KiB/s rd, 57 KiB/s wr, 35 op/s rd, 115 op/s 
wr 2026-03-09T15:57:02.034 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2026-03-09T15:57:02.034 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2026-03-09T15:57:02.034 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [ OK ] LibRadosService.StatusFormat (2383 ms) 2026-03-09T15:57:02.034 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [ RUN ] LibRadosService.Status 2026-03-09T15:57:02.034 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [ OK ] LibRadosService.Status (20015 ms) 2026-03-09T15:57:02.034 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [----------] 4 tests from LibRadosService (27655 ms total) 2026-03-09T15:57:02.034 INFO:tasks.workunit.client.0.vm01.stdout: api_service: 2026-03-09T15:57:02.034 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [----------] Global test environment tear-down 2026-03-09T15:57:02.034 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [==========] 4 tests from 1 test suite ran. (27655 ms total) 2026-03-09T15:57:02.043 INFO:tasks.workunit.client.0.vm01.stdout: api_service: [ PASSED ] 4 tests. 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: cluster 2026-03-09T15:57:00.671892+0000 mgr.y (mgr.14520) 129 : cluster [DBG] pgmap v99: 680 pgs: 160 unknown, 520 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.0 KiB/s wr, 7 op/s 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: cluster 2026-03-09T15:57:00.671892+0000 mgr.y (mgr.14520) 129 : cluster [DBG] pgmap v99: 680 pgs: 160 unknown, 520 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.0 KiB/s wr, 7 op/s 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:00.908291+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:00.908291+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:00.918612+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:00.918612+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:00.934361+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 
192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:00.934361+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:00.935077+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:00.935077+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.083138+0000 mon.a (mon.0) 1235 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.083138+0000 mon.a (mon.0) 1235 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: cluster 2026-03-09T15:57:01.607830+0000 mon.a (mon.0) 1236 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: cluster 2026-03-09T15:57:01.607830+0000 mon.a (mon.0) 1236 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.610498+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.610498+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.610542+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.610542+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:02.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.610667+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]': finished 2026-03-09T15:57:02.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.610667+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]': finished 2026-03-09T15:57:02.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: cluster 2026-03-09T15:57:01.640474+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T15:57:02.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: cluster 2026-03-09T15:57:01.640474+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T15:57:02.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.691059+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:02.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.691059+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:02.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.703266+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:02.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:01 vm09 bash[22983]: audit 2026-03-09T15:57:01.703266+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: cluster 2026-03-09T15:57:00.671892+0000 mgr.y (mgr.14520) 129 : cluster [DBG] pgmap v99: 680 pgs: 160 unknown, 520 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.0 KiB/s wr, 7 op/s 2026-03-09T15:57:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: cluster 2026-03-09T15:57:00.671892+0000 mgr.y (mgr.14520) 129 : cluster [DBG] pgmap v99: 680 pgs: 160 unknown, 520 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.0 KiB/s wr, 7 op/s 2026-03-09T15:57:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:00.908291+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:00.908291+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:00.918612+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:00.918612+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:00.934361+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:00.934361+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:00.935077+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:00.935077+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.083138+0000 mon.a (mon.0) 1235 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.083138+0000 mon.a (mon.0) 1235 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: cluster 2026-03-09T15:57:01.607830+0000 mon.a (mon.0) 1236 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: cluster 2026-03-09T15:57:01.607830+0000 mon.a (mon.0) 1236 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.610498+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.610498+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.610542+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.610542+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.610667+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.610667+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: cluster 2026-03-09T15:57:01.640474+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: cluster 2026-03-09T15:57:01.640474+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.691059+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.691059+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.703266+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:01 vm01 bash[20728]: audit 2026-03-09T15:57:01.703266+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:01 vm01 bash[28152]: cluster 2026-03-09T15:57:00.671892+0000 mgr.y (mgr.14520) 129 : cluster [DBG] pgmap v99: 680 pgs: 160 unknown, 520 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.0 KiB/s wr, 7 op/s 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:01 vm01 bash[28152]: cluster 2026-03-09T15:57:00.671892+0000 mgr.y (mgr.14520) 129 : cluster [DBG] pgmap v99: 680 pgs: 160 unknown, 520 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.0 KiB/s wr, 7 op/s 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:01 vm01 bash[28152]: audit 2026-03-09T15:57:00.908291+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:01 vm01 bash[28152]: audit 2026-03-09T15:57:00.908291+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:01 vm01 bash[28152]: audit 2026-03-09T15:57:00.918612+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:01 vm01 bash[28152]: audit 2026-03-09T15:57:00.918612+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:01 vm01 bash[28152]: audit 2026-03-09T15:57:00.934361+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:01 vm01 bash[28152]: audit 2026-03-09T15:57:00.934361+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:00.935077+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:00.935077+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.083138+0000 mon.a (mon.0) 1235 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.083138+0000 mon.a (mon.0) 1235 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: cluster 2026-03-09T15:57:01.607830+0000 mon.a (mon.0) 1236 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: cluster 2026-03-09T15:57:01.607830+0000 mon.a (mon.0) 1236 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.610498+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.610498+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm01-59602-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.610542+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.610542+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.610667+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.610667+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4"}]': finished 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: cluster 2026-03-09T15:57:01.640474+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: cluster 2026-03-09T15:57:01.640474+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.691059+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 
192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.691059+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.703266+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:02.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.703266+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]: dispatch 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.730302+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.730302+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.852819+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 192.168.123.101:0/3041294402' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.852819+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 192.168.123.101:0/3041294402' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.930058+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.930058+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.932935+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:01.932935+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.085923+0000 mon.a (mon.0) 1244 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.085923+0000 mon.a (mon.0) 1244 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: cluster 2026-03-09T15:57:02.610985+0000 mon.a (mon.0) 1245 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: cluster 2026-03-09T15:57:02.610985+0000 mon.a (mon.0) 1245 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.616555+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.616555+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:57:03.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.616600+0000 mon.a (mon.0) 1247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:57:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:57:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:01.730302+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:01.730302+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:01.852819+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 192.168.123.101:0/3041294402' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:01.852819+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 192.168.123.101:0/3041294402' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:01.930058+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:01.930058+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:01.932935+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:01.932935+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.085923+0000 mon.a (mon.0) 1244 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.085923+0000 mon.a (mon.0) 1244 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: cluster 2026-03-09T15:57:02.610985+0000 mon.a (mon.0) 1245 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: cluster 2026-03-09T15:57:02.610985+0000 mon.a (mon.0) 1245 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.616555+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.616555+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.616600+0000 mon.a (mon.0) 1247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.616600+0000 mon.a (mon.0) 1247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.616623+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.616623+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: cluster 2026-03-09T15:57:02.626245+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: cluster 2026-03-09T15:57:02.626245+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.667581+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.667581+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.668389+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.668389+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.678688+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.678688+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.687446+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.687446+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.694590+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.101:0/3274519016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.694590+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 
192.168.123.101:0/3274519016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.696184+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:02 vm01 bash[20728]: audit 2026-03-09T15:57:02.696184+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.616600+0000 mon.a (mon.0) 1247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.616623+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.616623+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: cluster 2026-03-09T15:57:02.626245+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: cluster 2026-03-09T15:57:02.626245+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.667581+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.667581+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.668389+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 
192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.668389+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.678688+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.678688+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.687446+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.687446+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.694590+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.101:0/3274519016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.694590+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.101:0/3274519016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.696184+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:02 vm01 bash[28152]: audit 2026-03-09T15:57:02.696184+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:01.730302+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:01.730302+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:01.852819+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 192.168.123.101:0/3041294402' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:01.852819+0000 mon.c (mon.2) 97 : audit [INF] from='client.? 192.168.123.101:0/3041294402' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:01.930058+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:01.930058+0000 mon.a (mon.0) 1242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:01.932935+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:01.932935+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.085923+0000 mon.a (mon.0) 1244 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.085923+0000 mon.a (mon.0) 1244 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: cluster 2026-03-09T15:57:02.610985+0000 mon.a (mon.0) 1245 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: cluster 2026-03-09T15:57:02.610985+0000 mon.a (mon.0) 1245 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.616555+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.616555+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm01-60007-4", "tierpool": "test-rados-api-vm01-60007-6"}]': finished 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.616600+0000 mon.a (mon.0) 1247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.616600+0000 mon.a (mon.0) 1247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.616623+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.616623+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: cluster 2026-03-09T15:57:02.626245+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: cluster 2026-03-09T15:57:02.626245+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.667581+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.667581+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.668389+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.668389+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.101:0/461375897' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.678688+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.678688+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]: dispatch 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.687446+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.687446+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.694590+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.101:0/3274519016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.694590+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 
192.168.123.101:0/3274519016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.696184+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:02 vm09 bash[22983]: audit 2026-03-09T15:57:02.696184+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: Running main() from gmock_main.cc 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [==========] Running 8 tests from 2 test suites. 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [----------] Global test environment set-up. 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ RUN ] LibradosCWriteOps.NewDelete 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ OK ] LibradosCWriteOps.NewDelete (0 ms) 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps (0 ms total) 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.assertExists 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.assertExists (2516 ms) 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteOpAssertVersion 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteOpAssertVersion (3352 ms) 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Xattrs 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Xattrs (4184 ms) 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Write 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Write (3368 ms) 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Exec 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Exec (2577 ms) 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteSame 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: 
api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteSame (3070 ms) 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.CmpExt 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.CmpExt (10189 ms) 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps (29256 ms total) 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [----------] Global test environment tear-down 2026-03-09T15:57:03.709 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [==========] 8 tests from 2 test suites ran. (29256 ms total) 2026-03-09T15:57:03.710 INFO:tasks.workunit.client.0.vm01.stdout: api_c_write_operations: [ PASSED ] 8 tests. 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: cluster 2026-03-09T15:57:02.675285+0000 mgr.y (mgr.14520) 130 : cluster [DBG] pgmap v102: 616 pgs: 160 unknown, 456 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: cluster 2026-03-09T15:57:02.675285+0000 mgr.y (mgr.14520) 130 : cluster [DBG] pgmap v102: 616 pgs: 160 unknown, 456 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.063105+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.063105+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.087403+0000 mon.a (mon.0) 1254 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.087403+0000 mon.a (mon.0) 1254 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.138441+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.138441+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 
192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.138722+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.138722+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: cluster 2026-03-09T15:57:03.590384+0000 mon.a (mon.0) 1257 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: cluster 2026-03-09T15:57:03.590384+0000 mon.a (mon.0) 1257 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.653867+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]': finished 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.653867+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]': finished 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.653921+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.653921+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.653955+0000 mon.a (mon.0) 1260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.653955+0000 mon.a (mon.0) 1260 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.653989+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.653989+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: cluster 2026-03-09T15:57:03.678450+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: cluster 2026-03-09T15:57:03.678450+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T15:57:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.720490+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.720490+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.802054+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 192.168.123.101:0/1171287945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:04.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:03 vm09 bash[22983]: audit 2026-03-09T15:57:03.802054+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 
192.168.123.101:0/1171287945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: cluster 2026-03-09T15:57:02.675285+0000 mgr.y (mgr.14520) 130 : cluster [DBG] pgmap v102: 616 pgs: 160 unknown, 456 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: cluster 2026-03-09T15:57:02.675285+0000 mgr.y (mgr.14520) 130 : cluster [DBG] pgmap v102: 616 pgs: 160 unknown, 456 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.063105+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.063105+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.087403+0000 mon.a (mon.0) 1254 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.087403+0000 mon.a (mon.0) 1254 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.138441+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.138441+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.138722+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.138722+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? 
192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: cluster 2026-03-09T15:57:03.590384+0000 mon.a (mon.0) 1257 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: cluster 2026-03-09T15:57:03.590384+0000 mon.a (mon.0) 1257 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.653867+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]': finished 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.653867+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]': finished 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.653921+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.653921+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.653955+0000 mon.a (mon.0) 1260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.653955+0000 mon.a (mon.0) 1260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.653989+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.653989+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 
192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: cluster 2026-03-09T15:57:03.678450+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: cluster 2026-03-09T15:57:03.678450+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.720490+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.720490+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.802054+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 192.168.123.101:0/1171287945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:03 vm01 bash[28152]: audit 2026-03-09T15:57:03.802054+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 192.168.123.101:0/1171287945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:03 vm01 bash[20728]: cluster 2026-03-09T15:57:02.675285+0000 mgr.y (mgr.14520) 130 : cluster [DBG] pgmap v102: 616 pgs: 160 unknown, 456 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:03 vm01 bash[20728]: cluster 2026-03-09T15:57:02.675285+0000 mgr.y (mgr.14520) 130 : cluster [DBG] pgmap v102: 616 pgs: 160 unknown, 456 active+clean; 144 MiB data, 952 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:03 vm01 bash[20728]: audit 2026-03-09T15:57:03.063105+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:03 vm01 bash[20728]: audit 2026-03-09T15:57:03.063105+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 
192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.087403+0000 mon.a (mon.0) 1254 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.087403+0000 mon.a (mon.0) 1254 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.138441+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.138441+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.138722+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.138722+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: cluster 2026-03-09T15:57:03.590384+0000 mon.a (mon.0) 1257 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: cluster 2026-03-09T15:57:03.590384+0000 mon.a (mon.0) 1257 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.653867+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]': finished 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.653867+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-9", "mode": "writeback"}]': finished 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.653921+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.653921+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm01-60007-6", "pool2": "test-rados-api-vm01-60007-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.653955+0000 mon.a (mon.0) 1260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.653955+0000 mon.a (mon.0) 1260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm01-59610-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.653989+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.653989+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: cluster 2026-03-09T15:57:03.678450+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: cluster 2026-03-09T15:57:03.678450+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.720490+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.720490+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 
192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.802054+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 192.168.123.101:0/1171287945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:04.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:03.802054+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? 192.168.123.101:0/1171287945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:04.755 INFO:tasks.workunit.client.0.vm01.stdout:watch_notify_pp: flushed 2026-03-09T15:57:04.755 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0 (3003 ms) 2026-03-09T15:57:04.755 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1 2026-03-09T15:57:04.755 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: trying... 2026-03-09T15:57:04.755 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: handle_notify cookie 94547066652896 notify_id 352187318274 notifier_gid 25037 2026-03-09T15:57:04.755 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: timed out 2026-03-09T15:57:04.755 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: flushing 2026-03-09T15:57:04.755 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: flushed 2026-03-09T15:57:04.755 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1 (3006 ms) 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: List watches 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: notify2 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: handle_notify cookie 94547066652896 notify_id 365072220165 notifier_gid 25037 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: notify2 done 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: watch_check 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: unwatch2 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: flushing 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: done 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0 (3025 ms) 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: 
api_watch_notify_pp: List watches 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: notify2 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: handle_notify cookie 94547066652896 notify_id 377957122052 notifier_gid 25037 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: notify2 done 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: watch_check 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: unwatch2 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: flushing 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: done 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1 (3109 ms) 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP (16477 ms total) 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [----------] Global test environment tear-down 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [==========] 16 tests from 2 test suites ran. (30410 ms total) 2026-03-09T15:57:04.756 INFO:tasks.workunit.client.0.vm01.stdout: api_watch_notify_pp: [ PASSED ] 16 tests. 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.088711+0000 mon.a (mon.0) 1265 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.088711+0000 mon.a (mon.0) 1265 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.187394+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.187394+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.187899+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.187899+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.661401+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.101:0/1171287945' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.661401+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.101:0/1171287945' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.661531+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.661531+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.690437+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.690437+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: cluster 2026-03-09T15:57:04.754792+0000 mon.a (mon.0) 1269 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: cluster 2026-03-09T15:57:04.754792+0000 mon.a (mon.0) 1269 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.794705+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:05.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:04 vm09 bash[22983]: audit 2026-03-09T15:57:04.794705+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.088711+0000 mon.a (mon.0) 1265 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.088711+0000 mon.a (mon.0) 1265 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.187394+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.187394+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.187899+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.187899+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.661401+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.101:0/1171287945' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.661401+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.101:0/1171287945' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.661531+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.661531+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.690437+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.690437+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: cluster 2026-03-09T15:57:04.754792+0000 mon.a (mon.0) 1269 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: cluster 2026-03-09T15:57:04.754792+0000 mon.a (mon.0) 1269 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.794705+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:05 vm01 bash[28152]: audit 2026-03-09T15:57:04.794705+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.088711+0000 mon.a (mon.0) 1265 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.088711+0000 mon.a (mon.0) 1265 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.187394+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.187394+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.187899+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.187899+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:05.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.661401+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 
192.168.123.101:0/1171287945' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.661401+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.101:0/1171287945' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm01-59602-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.661531+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.661531+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.690437+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.690437+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: cluster 2026-03-09T15:57:04.754792+0000 mon.a (mon.0) 1269 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T15:57:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: cluster 2026-03-09T15:57:04.754792+0000 mon.a (mon.0) 1269 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T15:57:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.794705+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:04 vm01 bash[20728]: audit 2026-03-09T15:57:04.794705+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]: dispatch 2026-03-09T15:57:06.250 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: cluster 2026-03-09T15:57:04.677117+0000 mgr.y (mgr.14520) 131 : cluster [DBG] pgmap v105: 612 pgs: 96 creating+peering, 12 creating+activating, 1 active+clean+snaptrim, 40 unknown, 463 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:06.250 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: cluster 2026-03-09T15:57:04.677117+0000 mgr.y (mgr.14520) 131 : cluster [DBG] pgmap v105: 612 pgs: 96 creating+peering, 12 creating+activating, 1 active+clean+snaptrim, 40 unknown, 463 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:06.250 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: audit 2026-03-09T15:57:05.090307+0000 mon.a (mon.0) 1271 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:06.250 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: audit 2026-03-09T15:57:05.090307+0000 mon.a (mon.0) 1271 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: cluster 2026-03-09T15:57:05.661913+0000 mon.a (mon.0) 1272 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: cluster 2026-03-09T15:57:05.661913+0000 mon.a (mon.0) 1272 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: audit 2026-03-09T15:57:05.664842+0000 mon.a (mon.0) 1273 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: audit 2026-03-09T15:57:05.664842+0000 mon.a (mon.0) 1273 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: audit 2026-03-09T15:57:05.664879+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: audit 2026-03-09T15:57:05.664879+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: cluster 2026-03-09T15:57:05.668905+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: cluster 2026-03-09T15:57:05.668905+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: audit 2026-03-09T15:57:05.771504+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.101:0/1508266905' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: audit 2026-03-09T15:57:05.771504+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.101:0/1508266905' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: audit 2026-03-09T15:57:05.771787+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:05 vm09 bash[22983]: audit 2026-03-09T15:57:05.771787+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.278 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [==========] Running 4 tests from 1 test suite. 2026-03-09T15:57:06.278 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [----------] Global test environment set-up. 
2026-03-09T15:57:06.278 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP 2026-03-09T15:57:06.278 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterEarly 2026-03-09T15:57:06.278 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterEarly (5037 ms) 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterLate 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterLate (110 ms) 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Status 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [ OK ] LibRadosServicePP.Status (20038 ms) 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Close 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: attempt 0 of 20 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [ OK ] LibRadosServicePP.Close (6688 ms) 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP (31873 ms total) 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [----------] Global test environment tear-down 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [==========] 4 tests from 1 test suite ran. (31873 ms total) 2026-03-09T15:57:06.279 INFO:tasks.workunit.client.0.vm01.stdout: api_service_pp: [ PASSED ] 4 tests. 2026-03-09T15:57:06.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:05 vm01 bash[20728]: cluster 2026-03-09T15:57:04.677117+0000 mgr.y (mgr.14520) 131 : cluster [DBG] pgmap v105: 612 pgs: 96 creating+peering, 12 creating+activating, 1 active+clean+snaptrim, 40 unknown, 463 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:06.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:05 vm01 bash[20728]: cluster 2026-03-09T15:57:04.677117+0000 mgr.y (mgr.14520) 131 : cluster [DBG] pgmap v105: 612 pgs: 96 creating+peering, 12 creating+activating, 1 active+clean+snaptrim, 40 unknown, 463 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:06.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: audit 2026-03-09T15:57:05.090307+0000 mon.a (mon.0) 1271 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:06.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: audit 2026-03-09T15:57:05.090307+0000 mon.a (mon.0) 1271 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:06.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: cluster 2026-03-09T15:57:05.661913+0000 mon.a (mon.0) 1272 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:06.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: cluster 2026-03-09T15:57:05.661913+0000 mon.a (mon.0) 1272 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:06.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: audit 2026-03-09T15:57:05.664842+0000 mon.a (mon.0) 1273 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:06.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: audit 2026-03-09T15:57:05.664842+0000 mon.a (mon.0) 1273 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: audit 2026-03-09T15:57:05.664879+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: audit 2026-03-09T15:57:05.664879+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: cluster 2026-03-09T15:57:05.668905+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: cluster 2026-03-09T15:57:05.668905+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: audit 2026-03-09T15:57:05.771504+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.101:0/1508266905' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: audit 2026-03-09T15:57:05.771504+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.101:0/1508266905' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: audit 2026-03-09T15:57:05.771787+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:06 vm01 bash[20728]: audit 2026-03-09T15:57:05.771787+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: cluster 2026-03-09T15:57:04.677117+0000 mgr.y (mgr.14520) 131 : cluster [DBG] pgmap v105: 612 pgs: 96 creating+peering, 12 creating+activating, 1 active+clean+snaptrim, 40 unknown, 463 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: cluster 2026-03-09T15:57:04.677117+0000 mgr.y (mgr.14520) 131 : cluster [DBG] pgmap v105: 612 pgs: 96 creating+peering, 12 creating+activating, 1 active+clean+snaptrim, 40 unknown, 463 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: audit 2026-03-09T15:57:05.090307+0000 mon.a (mon.0) 1271 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: audit 2026-03-09T15:57:05.090307+0000 mon.a (mon.0) 1271 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: cluster 2026-03-09T15:57:05.661913+0000 mon.a (mon.0) 1272 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: cluster 2026-03-09T15:57:05.661913+0000 mon.a (mon.0) 1272 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: audit 2026-03-09T15:57:05.664842+0000 mon.a (mon.0) 1273 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: audit 2026-03-09T15:57:05.664842+0000 mon.a (mon.0) 1273 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm01-59878-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: audit 2026-03-09T15:57:05.664879+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: audit 2026-03-09T15:57:05.664879+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-9"}]': finished 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: cluster 2026-03-09T15:57:05.668905+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: cluster 2026-03-09T15:57:05.668905+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: audit 2026-03-09T15:57:05.771504+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.101:0/1508266905' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: audit 2026-03-09T15:57:05.771504+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.101:0/1508266905' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: audit 2026-03-09T15:57:05.771787+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:06 vm01 bash[28152]: audit 2026-03-09T15:57:05.771787+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:06.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:57:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.091109+0000 mon.a (mon.0) 1277 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.091109+0000 mon.a (mon.0) 1277 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.277201+0000 mon.c (mon.2) 104 : audit [DBG] from='client.? 192.168.123.101:0/2113924273' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.277201+0000 mon.c (mon.2) 104 : audit [DBG] from='client.? 
192.168.123.101:0/2113924273' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.669667+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.669667+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: cluster 2026-03-09T15:57:06.700897+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: cluster 2026-03-09T15:57:06.700897+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.729197+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.101:0/4198722817' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.729197+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.101:0/4198722817' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.731746+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.731746+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.733284+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:07 vm09 bash[22983]: audit 2026-03-09T15:57:06.733284+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.091109+0000 mon.a (mon.0) 1277 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.091109+0000 mon.a (mon.0) 1277 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.277201+0000 mon.c (mon.2) 104 : audit [DBG] from='client.? 192.168.123.101:0/2113924273' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.277201+0000 mon.c (mon.2) 104 : audit [DBG] from='client.? 192.168.123.101:0/2113924273' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.669667+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.669667+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: cluster 2026-03-09T15:57:06.700897+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: cluster 2026-03-09T15:57:06.700897+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.729197+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.101:0/4198722817' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.729197+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.101:0/4198722817' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.731746+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? 
192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.731746+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.733284+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:07 vm01 bash[28152]: audit 2026-03-09T15:57:06.733284+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.091109+0000 mon.a (mon.0) 1277 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.091109+0000 mon.a (mon.0) 1277 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.277201+0000 mon.c (mon.2) 104 : audit [DBG] from='client.? 192.168.123.101:0/2113924273' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.277201+0000 mon.c (mon.2) 104 : audit [DBG] from='client.? 192.168.123.101:0/2113924273' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.669667+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:07.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.669667+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm01-59610-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:07.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: cluster 2026-03-09T15:57:06.700897+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T15:57:07.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: cluster 2026-03-09T15:57:06.700897+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T15:57:07.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.729197+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.101:0/4198722817' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.729197+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.101:0/4198722817' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.731746+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.731746+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.733284+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:07.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:07 vm01 bash[20728]: audit 2026-03-09T15:57:06.733284+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:06.253242+0000 mgr.y (mgr.14520) 132 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:06.253242+0000 mgr.y (mgr.14520) 132 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:06.277378+0000 mgr.y (mgr.14520) 133 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:06.277378+0000 mgr.y (mgr.14520) 133 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: cluster 2026-03-09T15:57:06.678095+0000 mgr.y (mgr.14520) 134 : cluster [DBG] pgmap v108: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: cluster 2026-03-09T15:57:06.678095+0000 mgr.y (mgr.14520) 134 : cluster [DBG] pgmap v108: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:07.092034+0000 mon.a (mon.0) 1282 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:07.092034+0000 mon.a (mon.0) 1282 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: cluster 2026-03-09T15:57:07.773433+0000 mon.a (mon.0) 1283 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: cluster 2026-03-09T15:57:07.773433+0000 mon.a (mon.0) 1283 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:07.778994+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:07.778994+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:07.779031+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:07.779031+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: cluster 2026-03-09T15:57:07.830738+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: cluster 2026-03-09T15:57:07.830738+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:07.845742+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:07.845742+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:07.863222+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:07.863222+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:08.097099+0000 mon.a (mon.0) 1288 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:08 vm09 bash[22983]: audit 2026-03-09T15:57:08.097099+0000 mon.a (mon.0) 1288 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:06.253242+0000 mgr.y (mgr.14520) 132 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:06.253242+0000 mgr.y (mgr.14520) 132 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:06.277378+0000 mgr.y (mgr.14520) 133 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:06.277378+0000 mgr.y (mgr.14520) 133 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: cluster 2026-03-09T15:57:06.678095+0000 mgr.y (mgr.14520) 134 : cluster [DBG] pgmap v108: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: cluster 2026-03-09T15:57:06.678095+0000 mgr.y (mgr.14520) 134 : cluster [DBG] pgmap v108: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:07.092034+0000 mon.a (mon.0) 1282 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:07.092034+0000 mon.a (mon.0) 1282 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: cluster 2026-03-09T15:57:07.773433+0000 mon.a (mon.0) 1283 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: cluster 2026-03-09T15:57:07.773433+0000 mon.a (mon.0) 1283 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:07.778994+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:07.778994+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:07.779031+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:07.779031+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: cluster 2026-03-09T15:57:07.830738+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: cluster 2026-03-09T15:57:07.830738+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:07.845742+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:07.845742+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:07.863222+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:07.863222+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:08.097099+0000 mon.a (mon.0) 1288 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:08 vm01 bash[28152]: audit 2026-03-09T15:57:08.097099+0000 mon.a (mon.0) 1288 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:06.253242+0000 mgr.y (mgr.14520) 132 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:06.253242+0000 mgr.y (mgr.14520) 132 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:06.277378+0000 mgr.y (mgr.14520) 133 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:06.277378+0000 mgr.y (mgr.14520) 133 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: cluster 2026-03-09T15:57:06.678095+0000 mgr.y (mgr.14520) 134 : cluster [DBG] pgmap v108: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: cluster 2026-03-09T15:57:06.678095+0000 mgr.y (mgr.14520) 134 : cluster [DBG] pgmap v108: 588 pgs: 1 active+clean+snaptrim, 200 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:07.092034+0000 mon.a (mon.0) 1282 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:07.092034+0000 mon.a (mon.0) 1282 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: cluster 2026-03-09T15:57:07.773433+0000 mon.a (mon.0) 1283 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: cluster 2026-03-09T15:57:07.773433+0000 mon.a (mon.0) 1283 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:07.778994+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:07.778994+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59854-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:07.779031+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:07.779031+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm01-59602-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: cluster 2026-03-09T15:57:07.830738+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: cluster 2026-03-09T15:57:07.830738+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:07.845742+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:07.845742+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:07.863222+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:07.863222+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:08.097099+0000 mon.a (mon.0) 1288 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:08 vm01 bash[20728]: audit 2026-03-09T15:57:08.097099+0000 mon.a (mon.0) 1288 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: Running main() from gmock_main.cc 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [==========] Running 12 tests from 1 test suite. 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [----------] Global test environment set-up. 
2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [----------] 12 tests from NeoRadosMisc 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.Version 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.Version (1258 ms) 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.WaitOSDMap 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.WaitOSDMap (2219 ms) 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.LongName 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.LongName (3128 ms) 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.LongLocator 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.LongLocator (4453 ms) 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.LongNamespace 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.LongNamespace (2646 ms) 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.LongAttrName 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.LongAttrName (3043 ms) 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.Exec 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.Exec (3128 ms) 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.Operate1 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.Operate1 (2980 ms) 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.Operate2 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.Operate2 (3101 ms) 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.BigObject 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.BigObject (3061 ms) 2026-03-09T15:57:08.861 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.BigAttr 2026-03-09T15:57:08.862 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.BigAttr (2042 ms) 2026-03-09T15:57:08.862 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ RUN ] NeoRadosMisc.WriteSame 2026-03-09T15:57:08.862 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ OK ] NeoRadosMisc.WriteSame (3104 ms) 2026-03-09T15:57:08.862 INFO:tasks.workunit.client.0.vm01.stdout: misc: [----------] 12 tests from NeoRadosMisc (34163 ms total) 2026-03-09T15:57:08.862 INFO:tasks.workunit.client.0.vm01.stdout: misc: 2026-03-09T15:57:08.862 INFO:tasks.workunit.client.0.vm01.stdout: misc: [----------] Global test environment tear-down 2026-03-09T15:57:08.862 INFO:tasks.workunit.client.0.vm01.stdout: misc: [==========] 12 tests from 1 test suite ran. (34165 ms total) 2026-03-09T15:57:08.862 INFO:tasks.workunit.client.0.vm01.stdout: misc: [ PASSED ] 12 tests. 
2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: cluster 2026-03-09T15:57:08.678591+0000 mgr.y (mgr.14520) 135 : cluster [DBG] pgmap v110: 620 pgs: 1 active+clean+snaptrim, 232 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: cluster 2026-03-09T15:57:08.678591+0000 mgr.y (mgr.14520) 135 : cluster [DBG] pgmap v110: 620 pgs: 1 active+clean+snaptrim, 232 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: audit 2026-03-09T15:57:08.783344+0000 mon.a (mon.0) 1289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: audit 2026-03-09T15:57:08.783344+0000 mon.a (mon.0) 1289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: cluster 2026-03-09T15:57:08.796686+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: cluster 2026-03-09T15:57:08.796686+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: audit 2026-03-09T15:57:08.844604+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: audit 2026-03-09T15:57:08.844604+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: audit 2026-03-09T15:57:08.851572+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: audit 2026-03-09T15:57:08.851572+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: audit 2026-03-09T15:57:09.101029+0000 mon.a (mon.0) 1293 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:09 vm09 bash[22983]: audit 2026-03-09T15:57:09.101029+0000 mon.a (mon.0) 1293 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: cluster 2026-03-09T15:57:08.678591+0000 mgr.y (mgr.14520) 135 : cluster [DBG] pgmap v110: 620 pgs: 1 active+clean+snaptrim, 232 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: cluster 2026-03-09T15:57:08.678591+0000 mgr.y (mgr.14520) 135 : cluster [DBG] pgmap v110: 620 pgs: 1 active+clean+snaptrim, 232 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: audit 2026-03-09T15:57:08.783344+0000 mon.a (mon.0) 1289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: audit 2026-03-09T15:57:08.783344+0000 mon.a (mon.0) 1289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: cluster 2026-03-09T15:57:08.796686+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: cluster 2026-03-09T15:57:08.796686+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: audit 2026-03-09T15:57:08.844604+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: audit 2026-03-09T15:57:08.844604+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: audit 2026-03-09T15:57:08.851572+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: audit 2026-03-09T15:57:08.851572+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 
192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: audit 2026-03-09T15:57:09.101029+0000 mon.a (mon.0) 1293 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:09 vm01 bash[20728]: audit 2026-03-09T15:57:09.101029+0000 mon.a (mon.0) 1293 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: cluster 2026-03-09T15:57:08.678591+0000 mgr.y (mgr.14520) 135 : cluster [DBG] pgmap v110: 620 pgs: 1 active+clean+snaptrim, 232 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: cluster 2026-03-09T15:57:08.678591+0000 mgr.y (mgr.14520) 135 : cluster [DBG] pgmap v110: 620 pgs: 1 active+clean+snaptrim, 232 unknown, 387 active+clean; 144 MiB data, 1004 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: audit 2026-03-09T15:57:08.783344+0000 mon.a (mon.0) 1289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: audit 2026-03-09T15:57:08.783344+0000 mon.a (mon.0) 1289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: cluster 2026-03-09T15:57:08.796686+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: cluster 2026-03-09T15:57:08.796686+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: audit 2026-03-09T15:57:08.844604+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: audit 2026-03-09T15:57:08.844604+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: audit 2026-03-09T15:57:08.851572+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 
192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: audit 2026-03-09T15:57:08.851572+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: audit 2026-03-09T15:57:09.101029+0000 mon.a (mon.0) 1293 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:10.187 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:09 vm01 bash[28152]: audit 2026-03-09T15:57:09.101029+0000 mon.a (mon.0) 1293 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:09.787445+0000 mon.a (mon.0) 1294 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:09.787445+0000 mon.a (mon.0) 1294 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:09.787477+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]': finished 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:09.787477+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]': finished 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: cluster 2026-03-09T15:57:09.801329+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: cluster 2026-03-09T15:57:09.801329+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:09.809996+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 
192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:09.809996+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:09.825749+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.101:0/2147477554' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:09.825749+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.101:0/2147477554' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:09.889205+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:09.889205+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: cluster 2026-03-09T15:57:09.993083+0000 osd.3 (osd.3) 3 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: cluster 2026-03-09T15:57:09.993083+0000 osd.3 (osd.3) 3 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:11.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: cluster 2026-03-09T15:57:09.999166+0000 osd.3 (osd.3) 4 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: cluster 2026-03-09T15:57:09.999166+0000 osd.3 (osd.3) 4 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.011557+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.011557+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.011930+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.011930+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.102776+0000 mon.a (mon.0) 1300 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.102776+0000 mon.a (mon.0) 1300 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.791244+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.791244+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.791277+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.791277+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.791303+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.791303+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.826112+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.826112+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: cluster 2026-03-09T15:57:10.830338+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: cluster 2026-03-09T15:57:10.830338+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.841819+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:11.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:11 vm09 bash[22983]: audit 2026-03-09T15:57:10.841819+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:09.787445+0000 mon.a (mon.0) 1294 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:09.787445+0000 mon.a (mon.0) 1294 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:09.787477+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]': finished 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:09.787477+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 
192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]': finished 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: cluster 2026-03-09T15:57:09.801329+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: cluster 2026-03-09T15:57:09.801329+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:09.809996+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:09.809996+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:09.825749+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.101:0/2147477554' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:09.825749+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.101:0/2147477554' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:09.889205+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:09.889205+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: cluster 2026-03-09T15:57:09.993083+0000 osd.3 (osd.3) 3 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: cluster 2026-03-09T15:57:09.993083+0000 osd.3 (osd.3) 3 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: cluster 2026-03-09T15:57:09.999166+0000 osd.3 (osd.3) 4 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:11.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: cluster 2026-03-09T15:57:09.999166+0000 osd.3 (osd.3) 4 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:10.011557+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:10.011557+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:10.011930+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:10 vm01 bash[20728]: audit 2026-03-09T15:57:10.011930+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.102776+0000 mon.a (mon.0) 1300 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.102776+0000 mon.a (mon.0) 1300 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.791244+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 
192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.791244+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.791277+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.791277+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.791303+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.791303+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.826112+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.826112+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: cluster 2026-03-09T15:57:10.830338+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: cluster 2026-03-09T15:57:10.830338+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.841819+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:11 vm01 bash[20728]: audit 2026-03-09T15:57:10.841819+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:09.787445+0000 mon.a (mon.0) 1294 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:09.787445+0000 mon.a (mon.0) 1294 : audit [INF] from='client.? 192.168.123.101:0/2147482252' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm01-59610-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:09.787477+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]': finished 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:09.787477+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache", "force_nonempty":""}]': finished 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: cluster 2026-03-09T15:57:09.801329+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: cluster 2026-03-09T15:57:09.801329+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:09.809996+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:09.809996+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:09.825749+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 
192.168.123.101:0/2147477554' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:09.825749+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.101:0/2147477554' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:09.889205+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:09.889205+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: cluster 2026-03-09T15:57:09.993083+0000 osd.3 (osd.3) 3 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: cluster 2026-03-09T15:57:09.993083+0000 osd.3 (osd.3) 3 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: cluster 2026-03-09T15:57:09.999166+0000 osd.3 (osd.3) 4 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: cluster 2026-03-09T15:57:09.999166+0000 osd.3 (osd.3) 4 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.011557+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.011557+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.011930+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.011930+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.102776+0000 mon.a (mon.0) 1300 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:11.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.102776+0000 mon.a (mon.0) 1300 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.791244+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.791244+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59854-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.791277+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.791277+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm01-59602-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.791303+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.791303+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.826112+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.826112+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: cluster 2026-03-09T15:57:10.830338+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: cluster 2026-03-09T15:57:10.830338+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.841819+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:11.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:10 vm01 bash[28152]: audit 2026-03-09T15:57:10.841819+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: cluster 2026-03-09T15:57:10.679027+0000 mgr.y (mgr.14520) 136 : cluster [DBG] pgmap v113: 652 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 56 creating+peering, 132 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 2 op/s 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: cluster 2026-03-09T15:57:10.679027+0000 mgr.y (mgr.14520) 136 : cluster [DBG] pgmap v113: 652 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 56 creating+peering, 132 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 2 op/s 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:10.900053+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]: dispatch 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:10.900053+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]: dispatch 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.106406+0000 mon.a (mon.0) 1307 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.106406+0000 mon.a (mon.0) 1307 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.797593+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.797593+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.797646+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]': finished 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.797646+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]': finished 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: cluster 2026-03-09T15:57:11.812120+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: cluster 2026-03-09T15:57:11.812120+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.843724+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.843724+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.891419+0000 mon.c (mon.2) 109 : audit [INF] from='client.? 192.168.123.101:0/244849011' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.891419+0000 mon.c (mon.2) 109 : audit [INF] from='client.? 
192.168.123.101:0/244849011' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.910071+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:12 vm09 bash[22983]: audit 2026-03-09T15:57:11.910071+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: cluster 2026-03-09T15:57:10.679027+0000 mgr.y (mgr.14520) 136 : cluster [DBG] pgmap v113: 652 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 56 creating+peering, 132 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 2 op/s 2026-03-09T15:57:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: cluster 2026-03-09T15:57:10.679027+0000 mgr.y (mgr.14520) 136 : cluster [DBG] pgmap v113: 652 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 56 creating+peering, 132 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 2 op/s 2026-03-09T15:57:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:10.900053+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]: dispatch 2026-03-09T15:57:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:10.900053+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]: dispatch 2026-03-09T15:57:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.106406+0000 mon.a (mon.0) 1307 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.106406+0000 mon.a (mon.0) 1307 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.797593+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.797593+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.797646+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]': finished 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.797646+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]': finished 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: cluster 2026-03-09T15:57:11.812120+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: cluster 2026-03-09T15:57:11.812120+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.843724+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.843724+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.891419+0000 mon.c (mon.2) 109 : audit [INF] from='client.? 192.168.123.101:0/244849011' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.891419+0000 mon.c (mon.2) 109 : audit [INF] from='client.? 192.168.123.101:0/244849011' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.910071+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:12 vm01 bash[28152]: audit 2026-03-09T15:57:11.910071+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: cluster 2026-03-09T15:57:10.679027+0000 mgr.y (mgr.14520) 136 : cluster [DBG] pgmap v113: 652 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 56 creating+peering, 132 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 2 op/s 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: cluster 2026-03-09T15:57:10.679027+0000 mgr.y (mgr.14520) 136 : cluster [DBG] pgmap v113: 652 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 56 creating+peering, 132 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 256 KiB/s wr, 2 op/s 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:10.900053+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:10.900053+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.106406+0000 mon.a (mon.0) 1307 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.106406+0000 mon.a (mon.0) 1307 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.797593+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.797593+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.797646+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? 192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]': finished 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.797646+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? 
192.168.123.101:0/3642165903' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59854-10", "tierpool":"test-rados-api-vm01-59854-10-cache"}]': finished 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: cluster 2026-03-09T15:57:11.812120+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: cluster 2026-03-09T15:57:11.812120+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.843724+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.843724+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.891419+0000 mon.c (mon.2) 109 : audit [INF] from='client.? 192.168.123.101:0/244849011' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.891419+0000 mon.c (mon.2) 109 : audit [INF] from='client.? 192.168.123.101:0/244849011' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.910071+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:12 vm01 bash[20728]: audit 2026-03-09T15:57:11.910071+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]: dispatch 2026-03-09T15:57:13.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:57:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:57:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:57:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:13 vm09 bash[22983]: audit 2026-03-09T15:57:12.000009+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:13 vm09 bash[22983]: audit 2026-03-09T15:57:12.000009+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:13 vm09 bash[22983]: audit 2026-03-09T15:57:12.113318+0000 mon.a (mon.0) 1313 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:13 vm09 bash[22983]: audit 2026-03-09T15:57:12.113318+0000 mon.a (mon.0) 1313 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:13.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:13 vm01 bash[20728]: audit 2026-03-09T15:57:12.000009+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:13.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:13 vm01 bash[20728]: audit 2026-03-09T15:57:12.000009+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:13.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:13 vm01 bash[20728]: audit 2026-03-09T15:57:12.113318+0000 mon.a (mon.0) 1313 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:13.933 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:13 vm01 bash[20728]: audit 2026-03-09T15:57:12.113318+0000 mon.a (mon.0) 1313 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:13.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:13 vm01 bash[28152]: audit 2026-03-09T15:57:12.000009+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:13.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:13 vm01 bash[28152]: audit 2026-03-09T15:57:12.000009+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:13.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:13 vm01 bash[28152]: audit 2026-03-09T15:57:12.113318+0000 mon.a (mon.0) 1313 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:13.933 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:13 vm01 bash[28152]: audit 2026-03-09T15:57:12.113318+0000 mon.a (mon.0) 1313 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: Running main() from gmock_main.cc 2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [==========] Running 9 tests from 1 test suite. 
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [----------] Global test environment set-up.
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [----------] 9 tests from LibRadosPools
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ RUN ] LibRadosPools.PoolList
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ OK ] LibRadosPools.PoolList (2723 ms)
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup (3332 ms)
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup2
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup2 (4148 ms)
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookupOtherInstance
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ OK ] LibRadosPools.PoolLookupOtherInstance (3426 ms)
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ RUN ] LibRadosPools.PoolReverseLookupOtherInstance
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ OK ] LibRadosPools.PoolReverseLookupOtherInstance (2572 ms)
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ RUN ] LibRadosPools.PoolDelete
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ OK ] LibRadosPools.PoolDelete (5156 ms)
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ RUN ] LibRadosPools.PoolCreateDelete
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateDelete (5107 ms)
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ RUN ] LibRadosPools.PoolCreateWithCrushRule
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateWithCrushRule (5065 ms)
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ RUN ] LibRadosPools.PoolGetBaseTier
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ OK ] LibRadosPools.PoolGetBaseTier (8093 ms)
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [----------] 9 tests from LibRadosPools (39622 ms total)
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool:
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [----------] Global test environment tear-down
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [==========] 9 tests from 1 test suite ran. (39622 ms total)
2026-03-09T15:57:14.029 INFO:tasks.workunit.client.0.vm01.stdout: api_pool: [ PASSED ] 9 tests.
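The block above is ordinary Google Test summary output from one of the rados_api_tests binaries (tagged api_pool by the workunit), interleaved with the monitor journalctl capture. When scanning a long job log like this one it can be convenient to pull out just those per-binary results. Below is a minimal, illustrative Python sketch for doing that; it is not part of teuthology, and the regex, the binary-tag assumption, and the default log path are all assumptions based only on the line layout visible in this excerpt.

#!/usr/bin/env python3
"""Summarize gtest results embedded in a teuthology workunit log.

Illustrative helper only -- not part of teuthology. Assumes lines shaped like:
'<ts> INFO:tasks.workunit.client.0.<host>.stdout: <binary>: [ OK ] Suite.Test (N ms)'
"""
import re
import sys
from collections import defaultdict

# Match workunit stdout lines that carry a gtest status marker.
GTEST_RE = re.compile(
    r"INFO:tasks\.workunit\.[^:]+\.stdout:\s*(?P<binary>\S+):\s*"
    r"\[\s*(?P<status>OK|FAILED|PASSED)\s*\]\s*(?P<rest>.*)"
)

def summarize(path):
    counts = defaultdict(lambda: {"OK": 0, "FAILED": 0})
    failures = defaultdict(list)
    with open(path, errors="replace") as fh:
        for line in fh:
            m = GTEST_RE.search(line)
            if not m:
                continue
            binary, status, rest = m.group("binary"), m.group("status"), m.group("rest")
            if status in ("OK", "FAILED"):
                counts[binary][status] += 1
            if status == "FAILED":
                failures[binary].append(rest.strip())
    for binary, c in sorted(counts.items()):
        print(f"{binary}: OK={c['OK']} FAILED={c['FAILED']}")
        for name in failures[binary]:
            print(f"  failed: {name}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "teuthology.log")

Run against the archived job log (for example, python3 summarize_gtest.py teuthology.log, with the filename being a placeholder); the binary tags it groups by (api_pool, read_operations, io, and so on) come from the workunit's own output prefix as seen in the lines above and below.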
2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: cluster 2026-03-09T15:57:12.679658+0000 mgr.y (mgr.14520) 137 : cluster [DBG] pgmap v116: 556 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 42 creating+peering, 50 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: cluster 2026-03-09T15:57:12.679658+0000 mgr.y (mgr.14520) 137 : cluster [DBG] pgmap v116: 556 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 42 creating+peering, 50 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: cluster 2026-03-09T15:57:12.798785+0000 mon.a (mon.0) 1314 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: cluster 2026-03-09T15:57:12.798785+0000 mon.a (mon.0) 1314 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: cluster 2026-03-09T15:57:13.449683+0000 mon.a (mon.0) 1315 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: cluster 2026-03-09T15:57:13.449683+0000 mon.a (mon.0) 1315 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:13.451091+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:13.451091+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:13.462623+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]': finished 2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:13.462623+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]': finished 2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:13.462654+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:14.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:13.462654+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: cluster 2026-03-09T15:57:13.534791+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: cluster 2026-03-09T15:57:13.534791+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:13.611097+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.101:0/3633726681' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:13.611097+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.101:0/3633726681' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:13.708331+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 192.168.123.101:0/3633726681' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:13.708331+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 
192.168.123.101:0/3633726681' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: cluster 2026-03-09T15:57:13.758672+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: cluster 2026-03-09T15:57:13.758672+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:14.026289+0000 mon.a (mon.0) 1323 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:14.026289+0000 mon.a (mon.0) 1323 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:14.470487+0000 mon.a (mon.0) 1324 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:14 vm01 bash[20728]: audit 2026-03-09T15:57:14.470487+0000 mon.a (mon.0) 1324 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: cluster 2026-03-09T15:57:12.679658+0000 mgr.y (mgr.14520) 137 : cluster [DBG] pgmap v116: 556 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 42 creating+peering, 50 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: cluster 2026-03-09T15:57:12.679658+0000 mgr.y (mgr.14520) 137 : cluster [DBG] pgmap v116: 556 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 42 creating+peering, 50 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: cluster 2026-03-09T15:57:12.798785+0000 mon.a (mon.0) 1314 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: cluster 2026-03-09T15:57:12.798785+0000 mon.a (mon.0) 1314 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: cluster 2026-03-09T15:57:13.449683+0000 mon.a (mon.0) 1315 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: cluster 2026-03-09T15:57:13.449683+0000 mon.a (mon.0) 1315 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled 
(POOL_APP_NOT_ENABLED) 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:13.451091+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:13.451091+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:13.462623+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]': finished 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:13.462623+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]': finished 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:13.462654+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:13.462654+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: cluster 2026-03-09T15:57:13.534791+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: cluster 2026-03-09T15:57:13.534791+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:13.611097+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.101:0/3633726681' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:13.611097+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.101:0/3633726681' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:13.708331+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 
192.168.123.101:0/3633726681' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:13.708331+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 192.168.123.101:0/3633726681' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: cluster 2026-03-09T15:57:13.758672+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: cluster 2026-03-09T15:57:13.758672+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:14.026289+0000 mon.a (mon.0) 1323 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:14.026289+0000 mon.a (mon.0) 1323 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:14.470487+0000 mon.a (mon.0) 1324 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:14.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:14 vm01 bash[28152]: audit 2026-03-09T15:57:14.470487+0000 mon.a (mon.0) 1324 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: cluster 2026-03-09T15:57:12.679658+0000 mgr.y (mgr.14520) 137 : cluster [DBG] pgmap v116: 556 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 42 creating+peering, 50 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: cluster 2026-03-09T15:57:12.679658+0000 mgr.y (mgr.14520) 137 : cluster [DBG] pgmap v116: 556 pgs: 1 active+clean+snaptrim, 8 creating+activating, 1 active, 42 creating+peering, 50 unknown, 454 active+clean; 145 MiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 257 KiB/s wr, 2 op/s 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: cluster 2026-03-09T15:57:12.798785+0000 mon.a (mon.0) 1314 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: cluster 2026-03-09T15:57:12.798785+0000 mon.a (mon.0) 1314 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: cluster 2026-03-09T15:57:13.449683+0000 mon.a (mon.0) 1315 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: cluster 2026-03-09T15:57:13.449683+0000 mon.a (mon.0) 1315 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:13.451091+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:13.451091+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:13.462623+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]': finished 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:13.462623+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-11", "mode": "writeback"}]': finished 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:13.462654+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:13.462654+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: cluster 2026-03-09T15:57:13.534791+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: cluster 2026-03-09T15:57:13.534791+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T15:57:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:13.611097+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.101:0/3633726681' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:13.611097+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.101:0/3633726681' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:13.708331+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 192.168.123.101:0/3633726681' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:13.708331+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 
192.168.123.101:0/3633726681' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm01-59602-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: cluster 2026-03-09T15:57:13.758672+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T15:57:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: cluster 2026-03-09T15:57:13.758672+0000 mon.a (mon.0) 1322 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T15:57:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:14.026289+0000 mon.a (mon.0) 1323 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:14.026289+0000 mon.a (mon.0) 1323 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:14.470487+0000 mon.a (mon.0) 1324 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:14 vm09 bash[22983]: audit 2026-03-09T15:57:14.470487+0000 mon.a (mon.0) 1324 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:15 vm09 bash[22983]: cluster 2026-03-09T15:57:14.680901+0000 mgr.y (mgr.14520) 138 : cluster [DBG] pgmap v119: 556 pgs: 19 creating+activating, 43 creating+peering, 494 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 7 op/s 2026-03-09T15:57:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:15 vm09 bash[22983]: cluster 2026-03-09T15:57:14.680901+0000 mgr.y (mgr.14520) 138 : cluster [DBG] pgmap v119: 556 pgs: 19 creating+activating, 43 creating+peering, 494 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 7 op/s 2026-03-09T15:57:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:15 vm09 bash[22983]: cluster 2026-03-09T15:57:14.734304+0000 mon.a (mon.0) 1325 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T15:57:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:15 vm09 bash[22983]: cluster 2026-03-09T15:57:14.734304+0000 mon.a (mon.0) 1325 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T15:57:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:15 vm09 bash[22983]: audit 2026-03-09T15:57:15.004443+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:15 vm09 bash[22983]: audit 2026-03-09T15:57:15.004443+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:15 vm09 bash[22983]: audit 2026-03-09T15:57:15.004778+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:15 vm09 bash[22983]: audit 2026-03-09T15:57:15.004778+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:15 vm09 bash[22983]: audit 2026-03-09T15:57:15.471855+0000 mon.a (mon.0) 1327 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:15 vm09 bash[22983]: audit 2026-03-09T15:57:15.471855+0000 mon.a (mon.0) 1327 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:15 vm01 bash[20728]: cluster 2026-03-09T15:57:14.680901+0000 mgr.y (mgr.14520) 138 : cluster [DBG] pgmap v119: 556 pgs: 19 creating+activating, 43 creating+peering, 494 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 7 op/s 2026-03-09T15:57:16.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:15 vm01 bash[20728]: cluster 2026-03-09T15:57:14.680901+0000 mgr.y (mgr.14520) 138 : cluster [DBG] pgmap v119: 556 pgs: 19 creating+activating, 43 creating+peering, 494 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 7 op/s 2026-03-09T15:57:16.182 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:15 vm01 bash[20728]: cluster 2026-03-09T15:57:14.734304+0000 mon.a (mon.0) 1325 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:15 vm01 bash[20728]: cluster 2026-03-09T15:57:14.734304+0000 mon.a (mon.0) 1325 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:15 vm01 bash[20728]: audit 2026-03-09T15:57:15.004443+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:15 vm01 bash[20728]: audit 2026-03-09T15:57:15.004443+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:15 vm01 bash[20728]: audit 2026-03-09T15:57:15.004778+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:15 vm01 bash[20728]: audit 2026-03-09T15:57:15.004778+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:15 vm01 bash[20728]: audit 2026-03-09T15:57:15.471855+0000 mon.a (mon.0) 1327 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:15 vm01 bash[20728]: audit 2026-03-09T15:57:15.471855+0000 mon.a (mon.0) 1327 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:15 vm01 bash[28152]: cluster 2026-03-09T15:57:14.680901+0000 mgr.y (mgr.14520) 138 : cluster [DBG] pgmap v119: 556 pgs: 19 creating+activating, 43 creating+peering, 494 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 7 op/s 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:15 vm01 bash[28152]: cluster 2026-03-09T15:57:14.680901+0000 mgr.y (mgr.14520) 138 : cluster [DBG] pgmap v119: 556 pgs: 19 creating+activating, 43 creating+peering, 494 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 7 op/s 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:15 vm01 bash[28152]: cluster 2026-03-09T15:57:14.734304+0000 mon.a (mon.0) 1325 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:15 vm01 bash[28152]: cluster 2026-03-09T15:57:14.734304+0000 mon.a (mon.0) 1325 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:15 vm01 bash[28152]: audit 2026-03-09T15:57:15.004443+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:15 vm01 bash[28152]: audit 2026-03-09T15:57:15.004443+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:15 vm01 bash[28152]: audit 2026-03-09T15:57:15.004778+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:15 vm01 bash[28152]: audit 2026-03-09T15:57:15.004778+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:15 vm01 bash[28152]: audit 2026-03-09T15:57:15.471855+0000 mon.a (mon.0) 1327 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:16.183 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:15 vm01 bash[28152]: audit 2026-03-09T15:57:15.471855+0000 mon.a (mon.0) 1327 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:16.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:57:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.718729+0000 mon.a (mon.0) 1328 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.718729+0000 mon.a (mon.0) 1328 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: cluster 2026-03-09T15:57:15.722713+0000 mon.a (mon.0) 1329 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: cluster 2026-03-09T15:57:15.722713+0000 mon.a (mon.0) 1329 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.750030+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.101:0/2161247849' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.750030+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.101:0/2161247849' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.792667+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.792667+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.792954+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? 192.168.123.101:0/2953494875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.792954+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? 
192.168.123.101:0/2953494875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.862763+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.862763+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.864788+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:15.864788+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:16.472629+0000 mon.a (mon.0) 1333 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:17.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: audit 2026-03-09T15:57:16.472629+0000 mon.a (mon.0) 1333 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:17.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: cluster 2026-03-09T15:57:16.719192+0000 mon.a (mon.0) 1334 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:17.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:17 vm09 bash[22983]: cluster 2026-03-09T15:57:16.719192+0000 mon.a (mon.0) 1334 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.718729+0000 mon.a (mon.0) 1328 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.718729+0000 mon.a (mon.0) 1328 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: cluster 2026-03-09T15:57:15.722713+0000 mon.a (mon.0) 1329 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: cluster 2026-03-09T15:57:15.722713+0000 mon.a (mon.0) 1329 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.750030+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.101:0/2161247849' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.750030+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.101:0/2161247849' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.792667+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.792667+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.792954+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? 192.168.123.101:0/2953494875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.792954+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? 192.168.123.101:0/2953494875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.862763+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.862763+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.864788+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:15.864788+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:16.472629+0000 mon.a (mon.0) 1333 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: audit 2026-03-09T15:57:16.472629+0000 mon.a (mon.0) 1333 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: cluster 2026-03-09T15:57:16.719192+0000 mon.a (mon.0) 1334 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:17 vm01 bash[20728]: cluster 2026-03-09T15:57:16.719192+0000 mon.a (mon.0) 1334 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.718729+0000 mon.a (mon.0) 1328 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.718729+0000 mon.a (mon.0) 1328 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: cluster 2026-03-09T15:57:15.722713+0000 mon.a (mon.0) 1329 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: cluster 2026-03-09T15:57:15.722713+0000 mon.a (mon.0) 1329 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.750030+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.101:0/2161247849' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.750030+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 
192.168.123.101:0/2161247849' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.792667+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.792667+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.792954+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? 192.168.123.101:0/2953494875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.792954+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? 192.168.123.101:0/2953494875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.862763+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.862763+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.864788+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:15.864788+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:16.472629+0000 mon.a (mon.0) 1333 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: audit 2026-03-09T15:57:16.472629+0000 mon.a (mon.0) 1333 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: cluster 2026-03-09T15:57:16.719192+0000 mon.a (mon.0) 1334 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:17.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:17 vm01 bash[28152]: cluster 2026-03-09T15:57:16.719192+0000 mon.a (mon.0) 1334 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:57:18.044 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: Running main() from gmock_main.cc 2026-03-09T15:57:18.044 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [==========] Running 14 tests from 1 test suite. 2026-03-09T15:57:18.044 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [----------] Global test environment set-up. 2026-03-09T15:57:18.044 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps 2026-03-09T15:57:18.044 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.SetOpFlags 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.SetOpFlags (2237 ms) 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertExists 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertExists (3341 ms) 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertVersion 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertVersion (4140 ms) 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpXattr 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpXattr (3421 ms) 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.Read 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.Read (2552 ms) 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.Checksum 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.Checksum (3077 ms) 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.RWOrderedRead 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.RWOrderedRead (3055 ms) 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.ShortRead 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.ShortRead (3130 ms) 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.Exec 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.Exec (2993 ms) 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.Stat 2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.Stat (3060 ms) 
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.Omap
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.Omap (3133 ms)
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.OmapNuls
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.OmapNuls (2981 ms)
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.GetXattrs
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.GetXattrs (2903 ms)
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpExt
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpExt (3277 ms)
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps (43300 ms total)
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations:
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [----------] Global test environment tear-down
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [==========] 14 tests from 1 test suite ran. (43311 ms total)
2026-03-09T15:57:18.045 INFO:tasks.workunit.client.0.vm01.stdout: read_operations: [ PASSED ] 14 tests.
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: Running main() from gmock_main.cc
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [==========] Running 14 tests from 1 test suite.
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [----------] Global test environment set-up.
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [----------] 14 tests from NeoRadosIo
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.Limits
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.Limits (2337 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.SimpleWrite
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.SimpleWrite (3337 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.ReadOp
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.ReadOp (4181 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.SparseRead
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.SparseRead (3369 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.RoundTrip
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.RoundTrip (2564 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.ReadIntoBuufferlist
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.ReadIntoBuufferlist (3087 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.OverlappingWriteRoundTrip
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.OverlappingWriteRoundTrip (3086 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.WriteFullRoundTrip
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.WriteFullRoundTrip (3068 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.AppendRoundTrip
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.AppendRoundTrip (3003 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.Trunc
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.Trunc (3082 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.Remove
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.Remove (3113 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.XattrsRoundTrip
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.XattrsRoundTrip (2981 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.RmXattr
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.RmXattr (2923 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ RUN ] NeoRadosIo.GetXattrs
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ OK ] NeoRadosIo.GetXattrs (3281 ms)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [----------] 14 tests from NeoRadosIo (43412 ms total)
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [----------] Global test environment tear-down
2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [==========] 14 tests from 1 test suite ran.
(43412 ms total) 2026-03-09T15:57:18.119 INFO:tasks.workunit.client.0.vm01.stdout: io: [ PASSED ] 14 tests. 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:16.263919+0000 mgr.y (mgr.14520) 139 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:16.263919+0000 mgr.y (mgr.14520) 139 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: cluster 2026-03-09T15:57:16.681426+0000 mgr.y (mgr.14520) 140 : cluster [DBG] pgmap v122: 588 pgs: 160 unknown, 428 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.2 KiB/s wr, 6 op/s 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: cluster 2026-03-09T15:57:16.681426+0000 mgr.y (mgr.14520) 140 : cluster [DBG] pgmap v122: 588 pgs: 160 unknown, 428 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.2 KiB/s wr, 6 op/s 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: cluster 2026-03-09T15:57:16.900672+0000 osd.3 (osd.3) 5 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: cluster 2026-03-09T15:57:16.900672+0000 osd.3 (osd.3) 5 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: cluster 2026-03-09T15:57:16.968171+0000 osd.3 (osd.3) 6 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: cluster 2026-03-09T15:57:16.968171+0000 osd.3 (osd.3) 6 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:16.969457+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:16.969457+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:16.969490+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.101:0/2953494875' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:16.969490+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 
192.168.123.101:0/2953494875' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:16.969508+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:16.969508+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: cluster 2026-03-09T15:57:16.996971+0000 mon.a (mon.0) 1338 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: cluster 2026-03-09T15:57:16.996971+0000 mon.a (mon.0) 1338 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:17.043040+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:17.043040+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:17.473706+0000 mon.a (mon.0) 1340 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:17.473706+0000 mon.a (mon.0) 1340 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:18.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:17.990173+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:18.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:17.990173+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 
192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:18.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: cluster 2026-03-09T15:57:17.999459+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T15:57:18.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: cluster 2026-03-09T15:57:17.999459+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T15:57:18.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:18.000156+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:18.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:18 vm09 bash[22983]: audit 2026-03-09T15:57:18.000156+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:16.263919+0000 mgr.y (mgr.14520) 139 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:16.263919+0000 mgr.y (mgr.14520) 139 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: cluster 2026-03-09T15:57:16.681426+0000 mgr.y (mgr.14520) 140 : cluster [DBG] pgmap v122: 588 pgs: 160 unknown, 428 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.2 KiB/s wr, 6 op/s 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: cluster 2026-03-09T15:57:16.681426+0000 mgr.y (mgr.14520) 140 : cluster [DBG] pgmap v122: 588 pgs: 160 unknown, 428 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.2 KiB/s wr, 6 op/s 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: cluster 2026-03-09T15:57:16.900672+0000 osd.3 (osd.3) 5 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: cluster 2026-03-09T15:57:16.900672+0000 osd.3 (osd.3) 5 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: cluster 2026-03-09T15:57:16.968171+0000 osd.3 (osd.3) 6 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: cluster 2026-03-09T15:57:16.968171+0000 osd.3 (osd.3) 6 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:16.969457+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:16.969457+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:16.969490+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.101:0/2953494875' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:16.969490+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.101:0/2953494875' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:16.969508+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:16.969508+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: cluster 2026-03-09T15:57:16.996971+0000 mon.a (mon.0) 1338 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: cluster 2026-03-09T15:57:16.996971+0000 mon.a (mon.0) 1338 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:17.043040+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:17.043040+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:17.473706+0000 mon.a (mon.0) 1340 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:17.473706+0000 mon.a (mon.0) 1340 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:17.990173+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:17.990173+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: cluster 2026-03-09T15:57:17.999459+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: cluster 2026-03-09T15:57:17.999459+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:18.000156+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:18 vm01 bash[20728]: audit 2026-03-09T15:57:18.000156+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:16.263919+0000 mgr.y (mgr.14520) 139 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:16.263919+0000 mgr.y (mgr.14520) 139 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: cluster 2026-03-09T15:57:16.681426+0000 mgr.y (mgr.14520) 140 : cluster [DBG] pgmap v122: 588 pgs: 160 unknown, 428 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.2 KiB/s wr, 6 op/s 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: cluster 2026-03-09T15:57:16.681426+0000 mgr.y (mgr.14520) 140 : cluster [DBG] pgmap v122: 588 pgs: 160 unknown, 428 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.2 KiB/s wr, 6 op/s 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: cluster 2026-03-09T15:57:16.900672+0000 osd.3 (osd.3) 5 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: cluster 2026-03-09T15:57:16.900672+0000 osd.3 (osd.3) 5 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: 
cluster 2026-03-09T15:57:16.968171+0000 osd.3 (osd.3) 6 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: cluster 2026-03-09T15:57:16.968171+0000 osd.3 (osd.3) 6 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:16.969457+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:16.969457+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm01-59602-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:16.969490+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.101:0/2953494875' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:16.969490+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.101:0/2953494875' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm01-59610-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:16.969508+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:16.969508+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-11"}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: cluster 2026-03-09T15:57:16.996971+0000 mon.a (mon.0) 1338 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: cluster 2026-03-09T15:57:16.996971+0000 mon.a (mon.0) 1338 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:17.043040+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:17.043040+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 
192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:17.473706+0000 mon.a (mon.0) 1340 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:17.473706+0000 mon.a (mon.0) 1340 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:17.990173+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:17.990173+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: cluster 2026-03-09T15:57:17.999459+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: cluster 2026-03-09T15:57:17.999459+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:18.000156+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:18.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:18 vm01 bash[28152]: audit 2026-03-09T15:57:18.000156+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]: dispatch 2026-03-09T15:57:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.474444+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.474444+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: cluster 2026-03-09T15:57:18.670440+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: cluster 2026-03-09T15:57:18.670440+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.676155+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.676155+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: cluster 2026-03-09T15:57:18.682470+0000 mgr.y (mgr.14520) 141 : cluster [DBG] pgmap v125: 420 pgs: 32 unknown, 388 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: cluster 2026-03-09T15:57:18.682470+0000 mgr.y (mgr.14520) 141 : cluster [DBG] pgmap v125: 420 pgs: 32 unknown, 388 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: cluster 2026-03-09T15:57:18.713833+0000 mon.a (mon.0) 1347 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T15:57:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: cluster 2026-03-09T15:57:18.713833+0000 mon.a (mon.0) 1347 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T15:57:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.738120+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.101:0/2280250602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.738120+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.101:0/2280250602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.759143+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.759143+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.763287+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 192.168.123.101:0/527494714' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.763287+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 192.168.123.101:0/527494714' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.764820+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.764820+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.782092+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:19 vm09 bash[22983]: audit 2026-03-09T15:57:18.782092+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.474444+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.474444+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: cluster 2026-03-09T15:57:18.670440+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: cluster 2026-03-09T15:57:18.670440+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.676155+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.676155+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: cluster 2026-03-09T15:57:18.682470+0000 mgr.y (mgr.14520) 141 : cluster [DBG] pgmap v125: 420 pgs: 32 unknown, 388 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: cluster 2026-03-09T15:57:18.682470+0000 mgr.y (mgr.14520) 141 : cluster [DBG] pgmap v125: 420 pgs: 32 unknown, 388 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: cluster 2026-03-09T15:57:18.713833+0000 mon.a (mon.0) 1347 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: cluster 2026-03-09T15:57:18.713833+0000 mon.a (mon.0) 1347 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.738120+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.101:0/2280250602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.738120+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.101:0/2280250602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.759143+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.759143+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.763287+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 192.168.123.101:0/527494714' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.763287+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 192.168.123.101:0/527494714' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.764820+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.764820+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.782092+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:19 vm01 bash[20728]: audit 2026-03-09T15:57:18.782092+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.474444+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.474444+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: cluster 2026-03-09T15:57:18.670440+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: cluster 2026-03-09T15:57:18.670440+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.676155+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.676155+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 192.168.123.101:0/4243619146' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm01-59878-10"}]': finished 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: cluster 2026-03-09T15:57:18.682470+0000 mgr.y (mgr.14520) 141 : cluster [DBG] pgmap v125: 420 pgs: 32 unknown, 388 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: cluster 2026-03-09T15:57:18.682470+0000 mgr.y (mgr.14520) 141 : cluster [DBG] pgmap v125: 420 pgs: 32 unknown, 388 active+clean; 144 MiB data, 993 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: cluster 2026-03-09T15:57:18.713833+0000 mon.a (mon.0) 1347 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: cluster 2026-03-09T15:57:18.713833+0000 mon.a (mon.0) 1347 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.738120+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.101:0/2280250602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.738120+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.101:0/2280250602' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.759143+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.759143+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.763287+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 192.168.123.101:0/527494714' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.763287+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 192.168.123.101:0/527494714' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.764820+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.764820+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.782092+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:19.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:19 vm01 bash[28152]: audit 2026-03-09T15:57:18.782092+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.367264+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.367264+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.367484+0000 mgr.y (mgr.14520) 142 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.367484+0000 mgr.y (mgr.14520) 142 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.369743+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.369743+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.369886+0000 mgr.y (mgr.14520) 143 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.369886+0000 mgr.y (mgr.14520) 143 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.370583+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.370583+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.370692+0000 mgr.y (mgr.14520) 144 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.370692+0000 mgr.y (mgr.14520) 144 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.372043+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.372043+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.372151+0000 mgr.y (mgr.14520) 145 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.372151+0000 mgr.y (mgr.14520) 145 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.372787+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.372787+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.372880+0000 mgr.y (mgr.14520) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.372880+0000 mgr.y (mgr.14520) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.374161+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.374161+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.374265+0000 mgr.y (mgr.14520) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.374265+0000 mgr.y (mgr.14520) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.375223+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.375223+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.375318+0000 mgr.y (mgr.14520) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.375318+0000 mgr.y (mgr.14520) 148 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.375826+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.375826+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.375919+0000 mgr.y (mgr.14520) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.375919+0000 mgr.y (mgr.14520) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.377179+0000 mon.a (mon.0) 1359 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.377179+0000 mon.a (mon.0) 1359 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.377274+0000 mgr.y (mgr.14520) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.377274+0000 mgr.y (mgr.14520) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.377859+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.377859+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: cluster 2026-03-09T15:57:19.377937+0000 osd.6 (osd.6) 5 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: cluster 2026-03-09T15:57:19.377937+0000 osd.6 (osd.6) 5 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.377952+0000 mgr.y (mgr.14520) 151 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.377952+0000 mgr.y (mgr.14520) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: cluster 2026-03-09T15:57:19.403741+0000 osd.6 (osd.6) 6 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: cluster 2026-03-09T15:57:19.403741+0000 osd.6 (osd.6) 6 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.475085+0000 mon.a (mon.0) 1361 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.475085+0000 mon.a (mon.0) 1361 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.630199+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.630199+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.631580+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.631580+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.632261+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.632261+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 
192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.634038+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.634038+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.635275+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.635275+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.635884+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.635884+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.686423+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.686423+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.686588+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? 192.168.123.101:0/527494714' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.686588+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? 
192.168.123.101:0/527494714' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.686656+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.686656+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.686689+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.686689+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: cluster 2026-03-09T15:57:19.689930+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: cluster 2026-03-09T15:57:19.689930+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.706053+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.706053+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.721530+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:20 vm09 bash[22983]: audit 2026-03-09T15:57:19.721530+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.367264+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.367264+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.367484+0000 mgr.y (mgr.14520) 142 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.367484+0000 mgr.y (mgr.14520) 142 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.369743+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.369743+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.369886+0000 mgr.y (mgr.14520) 143 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.369886+0000 mgr.y (mgr.14520) 143 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.370583+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.370583+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.370692+0000 mgr.y (mgr.14520) 144 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.370692+0000 mgr.y (mgr.14520) 144 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.372043+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.372043+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.372151+0000 mgr.y (mgr.14520) 145 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.372151+0000 mgr.y (mgr.14520) 145 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.372787+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.372787+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.372880+0000 mgr.y (mgr.14520) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.372880+0000 mgr.y (mgr.14520) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.374161+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.374161+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.374265+0000 mgr.y (mgr.14520) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.374265+0000 mgr.y (mgr.14520) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.375223+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.375223+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.375318+0000 mgr.y (mgr.14520) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.375318+0000 mgr.y (mgr.14520) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.375826+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.375826+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.375919+0000 mgr.y (mgr.14520) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.375919+0000 mgr.y (mgr.14520) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.377179+0000 mon.a (mon.0) 1359 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.377179+0000 mon.a (mon.0) 1359 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.377274+0000 mgr.y (mgr.14520) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.377274+0000 mgr.y (mgr.14520) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.377859+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.377859+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: cluster 2026-03-09T15:57:19.377937+0000 osd.6 (osd.6) 5 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: cluster 2026-03-09T15:57:19.377937+0000 osd.6 (osd.6) 5 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.377952+0000 mgr.y (mgr.14520) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.377952+0000 mgr.y (mgr.14520) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: cluster 2026-03-09T15:57:19.403741+0000 osd.6 (osd.6) 6 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: cluster 2026-03-09T15:57:19.403741+0000 osd.6 (osd.6) 6 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.475085+0000 mon.a (mon.0) 1361 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.475085+0000 mon.a (mon.0) 1361 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.630199+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 
192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.630199+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.631580+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.631580+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.632261+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.632261+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.634038+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.634038+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.635275+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.635275+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.635884+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.635884+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.686423+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.686423+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.686588+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? 192.168.123.101:0/527494714' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.686588+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? 192.168.123.101:0/527494714' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.686656+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.686656+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.686689+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.686689+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: cluster 2026-03-09T15:57:19.689930+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: cluster 2026-03-09T15:57:19.689930+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.706053+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.706053+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.721530+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:20 vm01 bash[20728]: audit 2026-03-09T15:57:19.721530+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.367264+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.367264+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.367484+0000 mgr.y (mgr.14520) 142 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.367484+0000 mgr.y (mgr.14520) 142 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.369743+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.369743+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.369886+0000 mgr.y (mgr.14520) 143 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.369886+0000 mgr.y (mgr.14520) 143 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.370583+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.370583+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.370692+0000 mgr.y (mgr.14520) 144 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.370692+0000 mgr.y (mgr.14520) 144 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.372043+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.372043+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.372151+0000 mgr.y (mgr.14520) 145 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.372151+0000 mgr.y (mgr.14520) 145 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.372787+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.372787+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.372880+0000 mgr.y (mgr.14520) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.372880+0000 mgr.y (mgr.14520) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.374161+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.374161+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.374265+0000 mgr.y (mgr.14520) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.374265+0000 mgr.y (mgr.14520) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.375223+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.375223+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.375318+0000 mgr.y (mgr.14520) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.375318+0000 mgr.y (mgr.14520) 148 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.375826+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.375826+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.375919+0000 mgr.y (mgr.14520) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.375919+0000 mgr.y (mgr.14520) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.377179+0000 mon.a (mon.0) 1359 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.377179+0000 mon.a (mon.0) 1359 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.377274+0000 mgr.y (mgr.14520) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.377274+0000 mgr.y (mgr.14520) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.377859+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.430 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.377859+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: cluster 2026-03-09T15:57:19.377937+0000 osd.6 (osd.6) 5 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: cluster 2026-03-09T15:57:19.377937+0000 osd.6 (osd.6) 5 : cluster [DBG] 16.6 deep-scrub starts 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.377952+0000 mgr.y (mgr.14520) 151 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.377952+0000 mgr.y (mgr.14520) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: cluster 2026-03-09T15:57:19.403741+0000 osd.6 (osd.6) 6 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: cluster 2026-03-09T15:57:19.403741+0000 osd.6 (osd.6) 6 : cluster [DBG] 16.6 deep-scrub ok 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.475085+0000 mon.a (mon.0) 1361 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.475085+0000 mon.a (mon.0) 1361 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.630199+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.630199+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.631580+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.631580+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.632261+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.632261+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 
192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.634038+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.634038+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.635275+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.635275+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.635884+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.635884+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.686423+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.686423+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm01-59602-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.686588+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? 192.168.123.101:0/527494714' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.686588+0000 mon.a (mon.0) 1366 : audit [INF] from='client.? 
192.168.123.101:0/527494714' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm01-59610-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.686656+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.686656+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.686689+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.686689+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: cluster 2026-03-09T15:57:19.689930+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: cluster 2026-03-09T15:57:19.689930+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.706053+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.706053+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.721530+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:20.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:20 vm01 bash[28152]: audit 2026-03-09T15:57:19.721530+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.581089+0000 osd.5 (osd.5) 5 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.581089+0000 osd.5 (osd.5) 5 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.583263+0000 osd.5 (osd.5) 6 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.583263+0000 osd.5 (osd.5) 6 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.600296+0000 osd.4 (osd.4) 5 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.600296+0000 osd.4 (osd.4) 5 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.601697+0000 osd.4 (osd.4) 6 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.601697+0000 osd.4 (osd.4) 6 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.873381+0000 osd.3 (osd.3) 7 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.873381+0000 osd.3 (osd.3) 7 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.890766+0000 osd.3 (osd.3) 8 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:19.890766+0000 osd.3 (osd.3) 8 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.006728+0000 osd.1 (osd.1) 5 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.006728+0000 osd.1 (osd.1) 5 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.007456+0000 osd.1 (osd.1) 6 : cluster [DBG] 16.9 deep-scrub ok 
2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.007456+0000 osd.1 (osd.1) 6 : cluster [DBG] 16.9 deep-scrub ok 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.120470+0000 osd.2 (osd.2) 5 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.120470+0000 osd.2 (osd.2) 5 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.125296+0000 osd.2 (osd.2) 6 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.125296+0000 osd.2 (osd.2) 6 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.328940+0000 osd.0 (osd.0) 9 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.328940+0000 osd.0 (osd.0) 9 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.329980+0000 osd.0 (osd.0) 10 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:57:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.329980+0000 osd.0 (osd.0) 10 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:57:21.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: audit 2026-03-09T15:57:20.475845+0000 mon.a (mon.0) 1371 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:21.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: audit 2026-03-09T15:57:20.475845+0000 mon.a (mon.0) 1371 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:21.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.682946+0000 mgr.y (mgr.14520) 152 : cluster [DBG] pgmap v128: 516 pgs: 57 creating+peering, 39 unknown, 420 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 6.0 MiB/s wr, 4 op/s 2026-03-09T15:57:21.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.682946+0000 mgr.y (mgr.14520) 152 : cluster [DBG] pgmap v128: 516 pgs: 57 creating+peering, 39 unknown, 420 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 6.0 MiB/s wr, 4 op/s 2026-03-09T15:57:21.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.724859+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T15:57:21.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:21 vm09 bash[22983]: cluster 2026-03-09T15:57:20.724859+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T15:57:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.581089+0000 osd.5 (osd.5) 5 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.581089+0000 osd.5 (osd.5) 5 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.583263+0000 osd.5 (osd.5) 6 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.583263+0000 osd.5 (osd.5) 6 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.600296+0000 osd.4 (osd.4) 5 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.600296+0000 osd.4 (osd.4) 5 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.601697+0000 osd.4 (osd.4) 6 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.601697+0000 osd.4 (osd.4) 6 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.873381+0000 osd.3 (osd.3) 7 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.873381+0000 osd.3 (osd.3) 7 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.890766+0000 osd.3 (osd.3) 8 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:19.890766+0000 osd.3 (osd.3) 8 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 
bash[20728]: cluster 2026-03-09T15:57:20.006728+0000 osd.1 (osd.1) 5 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.006728+0000 osd.1 (osd.1) 5 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.007456+0000 osd.1 (osd.1) 6 : cluster [DBG] 16.9 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.007456+0000 osd.1 (osd.1) 6 : cluster [DBG] 16.9 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.120470+0000 osd.2 (osd.2) 5 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.120470+0000 osd.2 (osd.2) 5 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.125296+0000 osd.2 (osd.2) 6 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.125296+0000 osd.2 (osd.2) 6 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.328940+0000 osd.0 (osd.0) 9 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.328940+0000 osd.0 (osd.0) 9 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.329980+0000 osd.0 (osd.0) 10 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.329980+0000 osd.0 (osd.0) 10 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: audit 2026-03-09T15:57:20.475845+0000 mon.a (mon.0) 1371 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: audit 2026-03-09T15:57:20.475845+0000 mon.a (mon.0) 1371 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.682946+0000 mgr.y (mgr.14520) 152 : cluster [DBG] pgmap v128: 516 pgs: 57 creating+peering, 39 unknown, 420 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 6.0 MiB/s wr, 4 op/s 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.682946+0000 mgr.y (mgr.14520) 152 : cluster [DBG] pgmap v128: 516 pgs: 57 creating+peering, 39 unknown, 420 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 6.0 MiB/s wr, 4 op/s 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.724859+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:21 vm01 bash[20728]: cluster 2026-03-09T15:57:20.724859+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.581089+0000 osd.5 (osd.5) 5 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.581089+0000 osd.5 (osd.5) 5 : cluster [DBG] 16.3 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.583263+0000 osd.5 (osd.5) 6 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.583263+0000 osd.5 (osd.5) 6 : cluster [DBG] 16.3 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.600296+0000 osd.4 (osd.4) 5 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.600296+0000 osd.4 (osd.4) 5 : cluster [DBG] 16.2 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.601697+0000 osd.4 (osd.4) 6 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.601697+0000 osd.4 (osd.4) 6 : cluster [DBG] 16.2 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.873381+0000 osd.3 (osd.3) 7 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.873381+0000 osd.3 (osd.3) 7 : cluster [DBG] 16.4 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.890766+0000 osd.3 (osd.3) 8 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:19.890766+0000 osd.3 (osd.3) 8 : cluster [DBG] 16.4 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 
bash[28152]: cluster 2026-03-09T15:57:20.006728+0000 osd.1 (osd.1) 5 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.006728+0000 osd.1 (osd.1) 5 : cluster [DBG] 16.9 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.007456+0000 osd.1 (osd.1) 6 : cluster [DBG] 16.9 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.007456+0000 osd.1 (osd.1) 6 : cluster [DBG] 16.9 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.120470+0000 osd.2 (osd.2) 5 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.120470+0000 osd.2 (osd.2) 5 : cluster [DBG] 16.1 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.125296+0000 osd.2 (osd.2) 6 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.125296+0000 osd.2 (osd.2) 6 : cluster [DBG] 16.1 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.328940+0000 osd.0 (osd.0) 9 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.328940+0000 osd.0 (osd.0) 9 : cluster [DBG] 16.8 deep-scrub starts 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.329980+0000 osd.0 (osd.0) 10 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.329980+0000 osd.0 (osd.0) 10 : cluster [DBG] 16.8 deep-scrub ok 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: audit 2026-03-09T15:57:20.475845+0000 mon.a (mon.0) 1371 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: audit 2026-03-09T15:57:20.475845+0000 mon.a (mon.0) 1371 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.682946+0000 mgr.y (mgr.14520) 152 : cluster [DBG] pgmap v128: 516 pgs: 57 creating+peering, 39 unknown, 420 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 6.0 MiB/s wr, 4 op/s 2026-03-09T15:57:21.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.682946+0000 mgr.y (mgr.14520) 152 : cluster [DBG] pgmap v128: 516 pgs: 57 creating+peering, 39 unknown, 420 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 6.0 MiB/s wr, 4 op/s 2026-03-09T15:57:21.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.724859+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T15:57:21.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:21 vm01 bash[28152]: cluster 2026-03-09T15:57:20.724859+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: cluster 2026-03-09T15:57:20.892302+0000 osd.3 (osd.3) 9 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: cluster 2026-03-09T15:57:20.892302+0000 osd.3 (osd.3) 9 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: cluster 2026-03-09T15:57:20.894053+0000 osd.3 (osd.3) 10 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: cluster 2026-03-09T15:57:20.894053+0000 osd.3 (osd.3) 10 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: cluster 2026-03-09T15:57:21.351565+0000 osd.0 (osd.0) 11 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: cluster 2026-03-09T15:57:21.351565+0000 osd.0 (osd.0) 11 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: cluster 2026-03-09T15:57:21.352508+0000 osd.0 (osd.0) 12 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: cluster 2026-03-09T15:57:21.352508+0000 osd.0 (osd.0) 12 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.476692+0000 mon.a (mon.0) 1373 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.476692+0000 mon.a (mon.0) 1373 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.694981+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.694981+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.732185+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.101:0/4038338097' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.732185+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.101:0/4038338097' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: cluster 2026-03-09T15:57:21.732724+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T15:57:22.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: cluster 2026-03-09T15:57:21.732724+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T15:57:22.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.750788+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.750788+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.765704+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.101:0/284698556' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.765704+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.101:0/284698556' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.768898+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:22 vm09 bash[22983]: audit 2026-03-09T15:57:21.768898+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: cluster 2026-03-09T15:57:20.892302+0000 osd.3 (osd.3) 9 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: cluster 2026-03-09T15:57:20.892302+0000 osd.3 (osd.3) 9 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: cluster 2026-03-09T15:57:20.894053+0000 osd.3 (osd.3) 10 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: cluster 2026-03-09T15:57:20.894053+0000 osd.3 (osd.3) 10 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: cluster 2026-03-09T15:57:21.351565+0000 osd.0 (osd.0) 11 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: cluster 2026-03-09T15:57:21.351565+0000 osd.0 (osd.0) 11 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: cluster 2026-03-09T15:57:21.352508+0000 osd.0 (osd.0) 12 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: cluster 2026-03-09T15:57:21.352508+0000 osd.0 (osd.0) 12 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.476692+0000 mon.a (mon.0) 1373 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.476692+0000 mon.a (mon.0) 1373 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.694981+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.694981+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.732185+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.101:0/4038338097' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.732185+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.101:0/4038338097' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: cluster 2026-03-09T15:57:21.732724+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: cluster 2026-03-09T15:57:21.732724+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.750788+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.750788+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.765704+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.101:0/284698556' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.765704+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.101:0/284698556' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.768898+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:22 vm01 bash[20728]: audit 2026-03-09T15:57:21.768898+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: cluster 2026-03-09T15:57:20.892302+0000 osd.3 (osd.3) 9 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: cluster 2026-03-09T15:57:20.892302+0000 osd.3 (osd.3) 9 : cluster [DBG] 16.7 deep-scrub starts 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: cluster 2026-03-09T15:57:20.894053+0000 osd.3 (osd.3) 10 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: cluster 2026-03-09T15:57:20.894053+0000 osd.3 (osd.3) 10 : cluster [DBG] 16.7 deep-scrub ok 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: cluster 2026-03-09T15:57:21.351565+0000 osd.0 (osd.0) 11 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: cluster 2026-03-09T15:57:21.351565+0000 osd.0 (osd.0) 11 : cluster [DBG] 16.5 deep-scrub starts 2026-03-09T15:57:22.456 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: cluster 2026-03-09T15:57:21.352508+0000 osd.0 (osd.0) 12 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: cluster 2026-03-09T15:57:21.352508+0000 osd.0 (osd.0) 12 : cluster [DBG] 16.5 deep-scrub ok 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.476692+0000 mon.a (mon.0) 1373 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.476692+0000 mon.a (mon.0) 1373 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.694981+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.694981+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm01-59878-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.732185+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 
192.168.123.101:0/4038338097' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.732185+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.101:0/4038338097' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: cluster 2026-03-09T15:57:21.732724+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: cluster 2026-03-09T15:57:21.732724+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.750788+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.750788+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.765704+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.101:0/284698556' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.765704+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.101:0/284698556' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.768898+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:22.457 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:22 vm01 bash[28152]: audit 2026-03-09T15:57:21.768898+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:23.157 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:57:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:57:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:57:23.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: cluster 2026-03-09T15:57:22.394562+0000 osd.0 (osd.0) 13 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: cluster 2026-03-09T15:57:22.394562+0000 osd.0 (osd.0) 13 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: cluster 2026-03-09T15:57:22.394562+0000 osd.0 (osd.0) 13 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: audit 2026-03-09T15:57:22.477377+0000 mon.a (mon.0) 1378 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: audit 2026-03-09T15:57:22.477377+0000 mon.a (mon.0) 1378 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: cluster 2026-03-09T15:57:22.576457+0000 osd.0 (osd.0) 14 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: cluster 2026-03-09T15:57:22.576457+0000 osd.0 (osd.0) 14 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: cluster 2026-03-09T15:57:22.683409+0000 mgr.y (mgr.14520) 153 : cluster [DBG] pgmap v131: 492 pgs: 19 creating+peering, 85 unknown, 388 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.0 MiB/s wr, 2 op/s 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: cluster 2026-03-09T15:57:22.683409+0000 mgr.y (mgr.14520) 153 : cluster [DBG] pgmap v131: 492 pgs: 19 creating+peering, 85 unknown, 388 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.0 MiB/s wr, 2 op/s 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: audit 2026-03-09T15:57:22.706898+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: audit 2026-03-09T15:57:22.706898+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: audit 2026-03-09T15:57:22.706944+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: audit 2026-03-09T15:57:22.706944+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: cluster 2026-03-09T15:57:22.709739+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:23 vm01 bash[28152]: cluster 2026-03-09T15:57:22.709739+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: cluster 2026-03-09T15:57:22.394562+0000 osd.0 (osd.0) 13 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: audit 2026-03-09T15:57:22.477377+0000 mon.a (mon.0) 1378 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: audit 2026-03-09T15:57:22.477377+0000 mon.a (mon.0) 1378 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: cluster 2026-03-09T15:57:22.576457+0000 osd.0 (osd.0) 14 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: cluster 2026-03-09T15:57:22.576457+0000 osd.0 (osd.0) 14 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: cluster 2026-03-09T15:57:22.683409+0000 mgr.y (mgr.14520) 153 : cluster [DBG] pgmap v131: 492 pgs: 19 creating+peering, 85 unknown, 388 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.0 MiB/s wr, 2 op/s 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: cluster 2026-03-09T15:57:22.683409+0000 mgr.y (mgr.14520) 153 : cluster [DBG] pgmap v131: 492 pgs: 19 creating+peering, 85 unknown, 388 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.0 MiB/s wr, 2 op/s 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: audit 2026-03-09T15:57:22.706898+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: audit 2026-03-09T15:57:22.706898+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: audit 2026-03-09T15:57:22.706944+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: audit 2026-03-09T15:57:22.706944+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: cluster 2026-03-09T15:57:22.709739+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T15:57:23.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:23 vm01 bash[20728]: cluster 2026-03-09T15:57:22.709739+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: cluster 2026-03-09T15:57:22.394562+0000 osd.0 (osd.0) 13 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: cluster 2026-03-09T15:57:22.394562+0000 osd.0 (osd.0) 13 : cluster [DBG] 16.0 deep-scrub starts 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: audit 2026-03-09T15:57:22.477377+0000 mon.a (mon.0) 1378 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: audit 2026-03-09T15:57:22.477377+0000 mon.a (mon.0) 1378 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: cluster 2026-03-09T15:57:22.576457+0000 osd.0 (osd.0) 14 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: cluster 2026-03-09T15:57:22.576457+0000 osd.0 (osd.0) 14 : cluster [DBG] 16.0 deep-scrub ok 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: cluster 2026-03-09T15:57:22.683409+0000 mgr.y (mgr.14520) 153 : cluster [DBG] pgmap v131: 492 pgs: 19 creating+peering, 85 unknown, 388 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.0 MiB/s wr, 2 op/s 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: cluster 2026-03-09T15:57:22.683409+0000 mgr.y (mgr.14520) 153 : cluster [DBG] pgmap v131: 492 pgs: 19 creating+peering, 85 unknown, 388 active+clean; 168 MiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.0 MiB/s wr, 2 op/s 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: audit 2026-03-09T15:57:22.706898+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: audit 2026-03-09T15:57:22.706898+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm01-59610-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: audit 2026-03-09T15:57:22.706944+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: audit 2026-03-09T15:57:22.706944+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm01-59602-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: cluster 2026-03-09T15:57:22.709739+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T15:57:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:23 vm09 bash[22983]: cluster 2026-03-09T15:57:22.709739+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T15:57:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:24 vm01 bash[20728]: audit 2026-03-09T15:57:23.478011+0000 mon.a (mon.0) 1382 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:24 vm01 bash[20728]: audit 2026-03-09T15:57:23.478011+0000 mon.a (mon.0) 1382 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:24 vm01 bash[20728]: cluster 2026-03-09T15:57:23.673640+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:24 vm01 bash[20728]: cluster 2026-03-09T15:57:23.673640+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:24 vm01 bash[20728]: cluster 2026-03-09T15:57:23.681043+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T15:57:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:24 vm01 bash[20728]: cluster 2026-03-09T15:57:23.681043+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T15:57:24.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:24 vm01 bash[28152]: audit 2026-03-09T15:57:23.478011+0000 mon.a (mon.0) 1382 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:24 vm01 bash[28152]: audit 2026-03-09T15:57:23.478011+0000 mon.a (mon.0) 1382 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:24 vm01 bash[28152]: cluster 2026-03-09T15:57:23.673640+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:24 vm01 bash[28152]: cluster 2026-03-09T15:57:23.673640+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:24 vm01 bash[28152]: cluster 2026-03-09T15:57:23.681043+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T15:57:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:24 vm01 bash[28152]: cluster 2026-03-09T15:57:23.681043+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T15:57:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:24 vm09 bash[22983]: audit 2026-03-09T15:57:23.478011+0000 mon.a (mon.0) 1382 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:24 vm09 bash[22983]: audit 2026-03-09T15:57:23.478011+0000 mon.a (mon.0) 1382 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:24 vm09 bash[22983]: cluster 2026-03-09T15:57:23.673640+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:24 vm09 bash[22983]: cluster 2026-03-09T15:57:23.673640+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:24 vm09 bash[22983]: cluster 2026-03-09T15:57:23.681043+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T15:57:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:24 vm09 bash[22983]: cluster 2026-03-09T15:57:23.681043+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: audit 2026-03-09T15:57:24.478738+0000 mon.a (mon.0) 1385 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: audit 2026-03-09T15:57:24.478738+0000 mon.a (mon.0) 1385 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: cluster 2026-03-09T15:57:24.684371+0000 mgr.y (mgr.14520) 154 : cluster [DBG] pgmap v134: 460 pgs: 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 78 op/s 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: cluster 2026-03-09T15:57:24.684371+0000 mgr.y (mgr.14520) 154 : cluster [DBG] pgmap v134: 460 pgs: 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 78 op/s 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: cluster 2026-03-09T15:57:24.790593+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: cluster 2026-03-09T15:57:24.790593+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: audit 2026-03-09T15:57:24.798431+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.101:0/3633702013' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: audit 2026-03-09T15:57:24.798431+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.101:0/3633702013' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: audit 2026-03-09T15:57:24.799175+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.101:0/3836518052' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: audit 2026-03-09T15:57:24.799175+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.101:0/3836518052' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: audit 2026-03-09T15:57:24.802265+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: audit 2026-03-09T15:57:24.802265+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: audit 2026-03-09T15:57:24.802635+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:25 vm01 bash[28152]: audit 2026-03-09T15:57:24.802635+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: audit 2026-03-09T15:57:24.478738+0000 mon.a (mon.0) 1385 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: audit 2026-03-09T15:57:24.478738+0000 mon.a (mon.0) 1385 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: cluster 2026-03-09T15:57:24.684371+0000 mgr.y (mgr.14520) 154 : cluster [DBG] pgmap v134: 460 pgs: 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 78 op/s 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: cluster 2026-03-09T15:57:24.684371+0000 mgr.y (mgr.14520) 154 : cluster [DBG] pgmap v134: 460 pgs: 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 78 op/s 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: cluster 2026-03-09T15:57:24.790593+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: cluster 2026-03-09T15:57:24.790593+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: audit 2026-03-09T15:57:24.798431+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.101:0/3633702013' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: audit 2026-03-09T15:57:24.798431+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.101:0/3633702013' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: audit 2026-03-09T15:57:24.799175+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 
192.168.123.101:0/3836518052' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: audit 2026-03-09T15:57:24.799175+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.101:0/3836518052' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: audit 2026-03-09T15:57:24.802265+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: audit 2026-03-09T15:57:24.802265+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: audit 2026-03-09T15:57:24.802635+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:25 vm01 bash[20728]: audit 2026-03-09T15:57:24.802635+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: audit 2026-03-09T15:57:24.478738+0000 mon.a (mon.0) 1385 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: audit 2026-03-09T15:57:24.478738+0000 mon.a (mon.0) 1385 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: cluster 2026-03-09T15:57:24.684371+0000 mgr.y (mgr.14520) 154 : cluster [DBG] pgmap v134: 460 pgs: 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 78 op/s 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: cluster 2026-03-09T15:57:24.684371+0000 mgr.y (mgr.14520) 154 : cluster [DBG] pgmap v134: 460 pgs: 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 78 op/s 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: cluster 2026-03-09T15:57:24.790593+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: cluster 2026-03-09T15:57:24.790593+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: audit 2026-03-09T15:57:24.798431+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.101:0/3633702013' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: audit 2026-03-09T15:57:24.798431+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.101:0/3633702013' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: audit 2026-03-09T15:57:24.799175+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.101:0/3836518052' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: audit 2026-03-09T15:57:24.799175+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.101:0/3836518052' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: audit 2026-03-09T15:57:24.802265+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: audit 2026-03-09T15:57:24.802265+0000 mon.a (mon.0) 1387 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: audit 2026-03-09T15:57:24.802635+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:25 vm09 bash[22983]: audit 2026-03-09T15:57:24.802635+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:26.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:57:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.299409+0000 mon.c (mon.2) 114 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.299409+0000 mon.c (mon.2) 114 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.299891+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.299891+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.479462+0000 mon.a (mon.0) 1390 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.479462+0000 mon.a (mon.0) 1390 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.769973+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.769973+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.770047+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.770047+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.770076+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.770076+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: cluster 2026-03-09T15:57:25.772628+0000 mon.a (mon.0) 1394 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: cluster 2026-03-09T15:57:25.772628+0000 mon.a (mon.0) 1394 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.860031+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.860031+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.888424+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:26.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:26 vm09 bash[22983]: audit 2026-03-09T15:57:25.888424+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:26.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.299409+0000 mon.c (mon.2) 114 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.299409+0000 mon.c (mon.2) 114 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.299891+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.299891+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.479462+0000 mon.a (mon.0) 1390 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.479462+0000 mon.a (mon.0) 1390 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.769973+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.769973+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.770047+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.770047+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.770076+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.770076+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: cluster 2026-03-09T15:57:25.772628+0000 mon.a (mon.0) 1394 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: cluster 2026-03-09T15:57:25.772628+0000 mon.a (mon.0) 1394 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.860031+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.860031+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.888424+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:26 vm01 bash[20728]: audit 2026-03-09T15:57:25.888424+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.299409+0000 mon.c (mon.2) 114 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.299409+0000 mon.c (mon.2) 114 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.299891+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.299891+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.479462+0000 mon.a (mon.0) 1390 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.479462+0000 mon.a (mon.0) 1390 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.769973+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.769973+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm01-59610-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.770047+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.770047+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm01-59602-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.770076+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.770076+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: cluster 2026-03-09T15:57:25.772628+0000 mon.a (mon.0) 1394 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: cluster 2026-03-09T15:57:25.772628+0000 mon.a (mon.0) 1394 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.860031+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.860031+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.888424+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:26.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:26 vm01 bash[28152]: audit 2026-03-09T15:57:25.888424+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.274639+0000 mgr.y (mgr.14520) 155 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.274639+0000 mgr.y (mgr.14520) 155 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.414347+0000 mon.a (mon.0) 1396 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.414347+0000 mon.a (mon.0) 1396 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.485768+0000 mon.a (mon.0) 1397 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.485768+0000 mon.a (mon.0) 1397 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: cluster 2026-03-09T15:57:26.684855+0000 mgr.y (mgr.14520) 156 : cluster [DBG] pgmap v137: 524 pgs: 64 unknown, 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 82 op/s 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: cluster 2026-03-09T15:57:26.684855+0000 mgr.y (mgr.14520) 156 : cluster [DBG] pgmap v137: 524 pgs: 64 unknown, 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 82 op/s 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.773891+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.773891+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: cluster 2026-03-09T15:57:26.777197+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: cluster 2026-03-09T15:57:26.777197+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.794659+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.794659+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.810008+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:27 vm09 bash[22983]: audit 2026-03-09T15:57:26.810008+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.274639+0000 mgr.y (mgr.14520) 155 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.274639+0000 mgr.y (mgr.14520) 155 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.414347+0000 mon.a (mon.0) 1396 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.414347+0000 mon.a (mon.0) 1396 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.485768+0000 mon.a (mon.0) 1397 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.485768+0000 mon.a (mon.0) 1397 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: cluster 2026-03-09T15:57:26.684855+0000 mgr.y (mgr.14520) 156 : cluster [DBG] pgmap v137: 524 pgs: 64 unknown, 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 82 op/s 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: cluster 2026-03-09T15:57:26.684855+0000 mgr.y (mgr.14520) 156 : cluster [DBG] pgmap v137: 524 pgs: 64 unknown, 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 82 op/s 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.773891+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.773891+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: cluster 2026-03-09T15:57:26.777197+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: cluster 2026-03-09T15:57:26.777197+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.794659+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.794659+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.810008+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:27 vm01 bash[20728]: audit 2026-03-09T15:57:26.810008+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.274639+0000 mgr.y (mgr.14520) 155 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.274639+0000 mgr.y (mgr.14520) 155 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.414347+0000 mon.a (mon.0) 1396 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.414347+0000 mon.a (mon.0) 1396 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.485768+0000 mon.a (mon.0) 1397 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.485768+0000 mon.a (mon.0) 1397 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: cluster 2026-03-09T15:57:26.684855+0000 mgr.y (mgr.14520) 156 : cluster [DBG] pgmap v137: 524 pgs: 64 unknown, 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 82 op/s 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: cluster 2026-03-09T15:57:26.684855+0000 mgr.y (mgr.14520) 156 : cluster [DBG] pgmap v137: 524 pgs: 64 unknown, 10 creating+activating, 4 creating+peering, 446 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 12 MiB/s wr, 82 op/s 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.773891+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.773891+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: cluster 2026-03-09T15:57:26.777197+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: cluster 2026-03-09T15:57:26.777197+0000 mon.a (mon.0) 1399 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.794659+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.794659+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.810008+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:27.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:27 vm01 bash[28152]: audit 2026-03-09T15:57:26.810008+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]: dispatch 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: audit 2026-03-09T15:57:27.487384+0000 mon.a (mon.0) 1401 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: audit 2026-03-09T15:57:27.487384+0000 mon.a (mon.0) 1401 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: cluster 2026-03-09T15:57:27.774370+0000 mon.a (mon.0) 1402 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: cluster 2026-03-09T15:57:27.774370+0000 mon.a (mon.0) 1402 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: audit 2026-03-09T15:57:27.910528+0000 mon.a (mon.0) 1403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]': finished 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: audit 2026-03-09T15:57:27.910528+0000 mon.a (mon.0) 1403 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]': finished 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: cluster 2026-03-09T15:57:27.963572+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: cluster 2026-03-09T15:57:27.963572+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: audit 2026-03-09T15:57:27.974327+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.101:0/2863167595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: audit 2026-03-09T15:57:27.974327+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.101:0/2863167595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: audit 2026-03-09T15:57:27.981681+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: audit 2026-03-09T15:57:27.981681+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: audit 2026-03-09T15:57:27.981772+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? 192.168.123.101:0/1064299850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:28 vm09 bash[22983]: audit 2026-03-09T15:57:27.981772+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? 192.168.123.101:0/1064299850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: audit 2026-03-09T15:57:27.487384+0000 mon.a (mon.0) 1401 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: audit 2026-03-09T15:57:27.487384+0000 mon.a (mon.0) 1401 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: cluster 2026-03-09T15:57:27.774370+0000 mon.a (mon.0) 1402 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: cluster 2026-03-09T15:57:27.774370+0000 mon.a (mon.0) 1402 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: audit 2026-03-09T15:57:27.910528+0000 mon.a (mon.0) 1403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]': finished 2026-03-09T15:57:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: audit 2026-03-09T15:57:27.910528+0000 mon.a (mon.0) 1403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]': finished 2026-03-09T15:57:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: cluster 2026-03-09T15:57:27.963572+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T15:57:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: cluster 2026-03-09T15:57:27.963572+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: audit 2026-03-09T15:57:27.974327+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.101:0/2863167595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: audit 2026-03-09T15:57:27.974327+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.101:0/2863167595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: audit 2026-03-09T15:57:27.981681+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: audit 2026-03-09T15:57:27.981681+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: audit 2026-03-09T15:57:27.981772+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? 
192.168.123.101:0/1064299850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:28 vm01 bash[20728]: audit 2026-03-09T15:57:27.981772+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? 192.168.123.101:0/1064299850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: audit 2026-03-09T15:57:27.487384+0000 mon.a (mon.0) 1401 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: audit 2026-03-09T15:57:27.487384+0000 mon.a (mon.0) 1401 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: cluster 2026-03-09T15:57:27.774370+0000 mon.a (mon.0) 1402 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: cluster 2026-03-09T15:57:27.774370+0000 mon.a (mon.0) 1402 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: audit 2026-03-09T15:57:27.910528+0000 mon.a (mon.0) 1403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]': finished 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: audit 2026-03-09T15:57:27.910528+0000 mon.a (mon.0) 1403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-13", "mode": "writeback"}]': finished 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: cluster 2026-03-09T15:57:27.963572+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: cluster 2026-03-09T15:57:27.963572+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: audit 2026-03-09T15:57:27.974327+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.101:0/2863167595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: audit 2026-03-09T15:57:27.974327+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 
192.168.123.101:0/2863167595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: audit 2026-03-09T15:57:27.981681+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: audit 2026-03-09T15:57:27.981681+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: audit 2026-03-09T15:57:27.981772+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? 192.168.123.101:0/1064299850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:28.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:28 vm01 bash[28152]: audit 2026-03-09T15:57:27.981772+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? 192.168.123.101:0/1064299850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:28.488540+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:28.488540+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: cluster 2026-03-09T15:57:28.675775+0000 mon.a (mon.0) 1408 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: cluster 2026-03-09T15:57:28.675775+0000 mon.a (mon.0) 1408 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: cluster 2026-03-09T15:57:28.691255+0000 mgr.y (mgr.14520) 157 : cluster [DBG] pgmap v140: 492 pgs: 64 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: cluster 2026-03-09T15:57:28.691255+0000 mgr.y (mgr.14520) 157 : cluster [DBG] pgmap v140: 492 pgs: 64 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:28.919602+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:28.919602+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:28.919647+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? 192.168.123.101:0/1064299850' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:28.919647+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? 192.168.123.101:0/1064299850' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: cluster 2026-03-09T15:57:28.958514+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: cluster 2026-03-09T15:57:28.958514+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.047463+0000 mon.a (mon.0) 1412 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.047463+0000 mon.a (mon.0) 1412 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.225764+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.225764+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.227113+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.227113+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.229268+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.229268+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.230130+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.230130+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.231683+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.231683+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.232449+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.232449+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.236435+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.236435+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.240155+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.240155+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.249248+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.249248+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.262366+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:29 vm09 bash[22983]: audit 2026-03-09T15:57:29.262366+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:29.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:28.488540+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:29.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:28.488540+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:29.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: cluster 2026-03-09T15:57:28.675775+0000 mon.a (mon.0) 1408 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: cluster 2026-03-09T15:57:28.675775+0000 mon.a (mon.0) 1408 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: cluster 2026-03-09T15:57:28.691255+0000 mgr.y (mgr.14520) 157 : cluster [DBG] pgmap v140: 492 pgs: 64 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: cluster 2026-03-09T15:57:28.691255+0000 mgr.y (mgr.14520) 157 : cluster [DBG] pgmap v140: 492 pgs: 64 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:28.919602+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:28.919602+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:28.919647+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? 192.168.123.101:0/1064299850' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:28.919647+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? 192.168.123.101:0/1064299850' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: cluster 2026-03-09T15:57:28.958514+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: cluster 2026-03-09T15:57:28.958514+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.047463+0000 mon.a (mon.0) 1412 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.047463+0000 mon.a (mon.0) 1412 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.225764+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.225764+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.227113+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.227113+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.229268+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.229268+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.230130+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.230130+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.231683+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.231683+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.232449+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.232449+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.236435+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.236435+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.240155+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.240155+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.249248+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.249248+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.262366+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:29 vm01 bash[28152]: audit 2026-03-09T15:57:29.262366+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:29.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:28.488540+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:28.488540+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: cluster 2026-03-09T15:57:28.675775+0000 mon.a (mon.0) 1408 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: cluster 2026-03-09T15:57:28.675775+0000 mon.a (mon.0) 1408 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: cluster 2026-03-09T15:57:28.691255+0000 mgr.y (mgr.14520) 157 : cluster [DBG] pgmap v140: 492 pgs: 64 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: cluster 2026-03-09T15:57:28.691255+0000 mgr.y (mgr.14520) 157 : cluster [DBG] pgmap v140: 492 pgs: 64 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:28.919602+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:28.919602+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm01-59610-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:28.919647+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? 
192.168.123.101:0/1064299850' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:28.919647+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? 192.168.123.101:0/1064299850' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm01-59602-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: cluster 2026-03-09T15:57:28.958514+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: cluster 2026-03-09T15:57:28.958514+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.047463+0000 mon.a (mon.0) 1412 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.047463+0000 mon.a (mon.0) 1412 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.225764+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.225764+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.227113+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.227113+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.229268+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.229268+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.230130+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.230130+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.231683+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.231683+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.232449+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.232449+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.236435+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.236435+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.240155+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.240155+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.249248+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.249248+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.262366+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:29.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:29 vm01 bash[20728]: audit 2026-03-09T15:57:29.262366+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:30.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.226346+0000 mgr.y (mgr.14520) 158 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.226346+0000 mgr.y (mgr.14520) 158 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.227500+0000 mgr.y (mgr.14520) 159 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.227500+0000 mgr.y (mgr.14520) 159 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.229411+0000 mgr.y (mgr.14520) 160 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.229411+0000 mgr.y (mgr.14520) 160 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.230269+0000 mgr.y (mgr.14520) 161 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.230269+0000 mgr.y (mgr.14520) 161 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.231808+0000 mgr.y (mgr.14520) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.231808+0000 mgr.y (mgr.14520) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.232560+0000 mgr.y (mgr.14520) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.232560+0000 mgr.y (mgr.14520) 163 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.236610+0000 mgr.y (mgr.14520) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.236610+0000 mgr.y (mgr.14520) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.240345+0000 mgr.y (mgr.14520) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.240345+0000 mgr.y (mgr.14520) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.249451+0000 mgr.y (mgr.14520) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.249451+0000 mgr.y (mgr.14520) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.262563+0000 mgr.y (mgr.14520) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.262563+0000 mgr.y (mgr.14520) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.489750+0000 mon.a (mon.0) 1413 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: audit 2026-03-09T15:57:29.489750+0000 mon.a (mon.0) 1413 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: cluster 2026-03-09T15:57:29.576066+0000 osd.4 (osd.4) 7 : cluster [DBG] 165.7 scrub starts 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: cluster 2026-03-09T15:57:29.576066+0000 osd.4 (osd.4) 7 : cluster [DBG] 165.7 scrub starts 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: cluster 2026-03-09T15:57:29.578295+0000 osd.4 (osd.4) 8 : cluster [DBG] 165.7 scrub ok 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: cluster 2026-03-09T15:57:29.578295+0000 osd.4 (osd.4) 8 : cluster [DBG] 165.7 scrub ok 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: cluster 2026-03-09T15:57:29.634235+0000 osd.5 (osd.5) 7 : cluster [DBG] 165.8 scrub starts 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: cluster 2026-03-09T15:57:29.634235+0000 osd.5 (osd.5) 7 : cluster [DBG] 165.8 scrub starts 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: cluster 2026-03-09T15:57:29.636275+0000 osd.5 (osd.5) 8 : cluster [DBG] 165.8 scrub ok 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: cluster 2026-03-09T15:57:29.636275+0000 osd.5 (osd.5) 8 : cluster [DBG] 165.8 scrub ok 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: cluster 2026-03-09T15:57:29.947536+0000 mon.a (mon.0) 1414 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:30 vm01 bash[20728]: cluster 2026-03-09T15:57:29.947536+0000 mon.a (mon.0) 1414 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.226346+0000 mgr.y (mgr.14520) 158 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.226346+0000 mgr.y (mgr.14520) 158 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.227500+0000 mgr.y (mgr.14520) 159 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.227500+0000 mgr.y (mgr.14520) 159 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.229411+0000 mgr.y (mgr.14520) 160 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.229411+0000 mgr.y (mgr.14520) 160 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.230269+0000 mgr.y (mgr.14520) 161 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.230269+0000 mgr.y (mgr.14520) 161 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.231808+0000 mgr.y (mgr.14520) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.231808+0000 mgr.y (mgr.14520) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.232560+0000 mgr.y (mgr.14520) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.232560+0000 mgr.y (mgr.14520) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.236610+0000 mgr.y (mgr.14520) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.236610+0000 mgr.y (mgr.14520) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.240345+0000 mgr.y (mgr.14520) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.240345+0000 mgr.y (mgr.14520) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.249451+0000 mgr.y (mgr.14520) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.249451+0000 mgr.y (mgr.14520) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.262563+0000 mgr.y (mgr.14520) 167 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:30.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.262563+0000 mgr.y (mgr.14520) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.489750+0000 mon.a (mon.0) 1413 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: audit 2026-03-09T15:57:29.489750+0000 mon.a (mon.0) 1413 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: cluster 2026-03-09T15:57:29.576066+0000 osd.4 (osd.4) 7 : cluster [DBG] 165.7 scrub starts 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: cluster 2026-03-09T15:57:29.576066+0000 osd.4 (osd.4) 7 : cluster [DBG] 165.7 scrub starts 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: cluster 2026-03-09T15:57:29.578295+0000 osd.4 (osd.4) 8 : cluster [DBG] 165.7 scrub ok 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: cluster 2026-03-09T15:57:29.578295+0000 osd.4 (osd.4) 8 : cluster [DBG] 165.7 scrub ok 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: cluster 2026-03-09T15:57:29.634235+0000 osd.5 (osd.5) 7 : cluster [DBG] 165.8 scrub starts 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: cluster 2026-03-09T15:57:29.634235+0000 osd.5 (osd.5) 7 : cluster [DBG] 165.8 scrub starts 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: cluster 2026-03-09T15:57:29.636275+0000 osd.5 (osd.5) 8 : cluster [DBG] 165.8 scrub ok 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: cluster 2026-03-09T15:57:29.636275+0000 osd.5 (osd.5) 8 : cluster [DBG] 165.8 scrub ok 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: cluster 2026-03-09T15:57:29.947536+0000 mon.a (mon.0) 1414 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T15:57:30.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:30 vm01 bash[28152]: cluster 2026-03-09T15:57:29.947536+0000 mon.a (mon.0) 1414 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.226346+0000 mgr.y (mgr.14520) 158 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.226346+0000 mgr.y (mgr.14520) 158 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.0"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.227500+0000 mgr.y (mgr.14520) 159 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.227500+0000 mgr.y (mgr.14520) 159 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.1"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.229411+0000 mgr.y (mgr.14520) 160 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.229411+0000 mgr.y (mgr.14520) 160 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.2"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.230269+0000 mgr.y (mgr.14520) 161 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.230269+0000 mgr.y (mgr.14520) 161 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.3"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.231808+0000 mgr.y (mgr.14520) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.231808+0000 mgr.y (mgr.14520) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.4"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.232560+0000 mgr.y (mgr.14520) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.232560+0000 mgr.y (mgr.14520) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.5"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.236610+0000 mgr.y (mgr.14520) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.236610+0000 mgr.y (mgr.14520) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.6"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.240345+0000 mgr.y (mgr.14520) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.240345+0000 mgr.y (mgr.14520) 165 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "165.7"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.249451+0000 mgr.y (mgr.14520) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.249451+0000 mgr.y (mgr.14520) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.8"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.262563+0000 mgr.y (mgr.14520) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.262563+0000 mgr.y (mgr.14520) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "165.9"}]: dispatch 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.489750+0000 mon.a (mon.0) 1413 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: audit 2026-03-09T15:57:29.489750+0000 mon.a (mon.0) 1413 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: cluster 2026-03-09T15:57:29.576066+0000 osd.4 (osd.4) 7 : cluster [DBG] 165.7 scrub starts 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: cluster 2026-03-09T15:57:29.576066+0000 osd.4 (osd.4) 7 : cluster [DBG] 165.7 scrub starts 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: cluster 2026-03-09T15:57:29.578295+0000 osd.4 (osd.4) 8 : cluster [DBG] 165.7 scrub ok 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: cluster 2026-03-09T15:57:29.578295+0000 osd.4 (osd.4) 8 : cluster [DBG] 165.7 scrub ok 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: cluster 2026-03-09T15:57:29.634235+0000 osd.5 (osd.5) 7 : cluster [DBG] 165.8 scrub starts 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: cluster 2026-03-09T15:57:29.634235+0000 osd.5 (osd.5) 7 : cluster [DBG] 165.8 scrub starts 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: cluster 2026-03-09T15:57:29.636275+0000 osd.5 (osd.5) 8 : cluster [DBG] 165.8 scrub ok 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: cluster 2026-03-09T15:57:29.636275+0000 osd.5 (osd.5) 8 : cluster [DBG] 165.8 scrub ok 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: cluster 2026-03-09T15:57:29.947536+0000 mon.a (mon.0) 1414 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T15:57:30.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:30 vm09 bash[22983]: cluster 2026-03-09T15:57:29.947536+0000 mon.a (mon.0) 1414 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 
2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.249965+0000 osd.2 (osd.2) 7 : cluster [DBG] 165.5 scrub starts 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.249965+0000 osd.2 (osd.2) 7 : cluster [DBG] 165.5 scrub starts 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.349805+0000 osd.1 (osd.1) 7 : cluster [DBG] 165.3 scrub starts 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.349805+0000 osd.1 (osd.1) 7 : cluster [DBG] 165.3 scrub starts 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.391218+0000 osd.2 (osd.2) 8 : cluster [DBG] 165.5 scrub ok 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.391218+0000 osd.2 (osd.2) 8 : cluster [DBG] 165.5 scrub ok 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.393368+0000 osd.1 (osd.1) 8 : cluster [DBG] 165.3 scrub ok 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.393368+0000 osd.1 (osd.1) 8 : cluster [DBG] 165.3 scrub ok 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:30.491236+0000 mon.a (mon.0) 1415 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:30.491236+0000 mon.a (mon.0) 1415 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.604313+0000 osd.5 (osd.5) 9 : cluster [DBG] 165.1 scrub starts 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.604313+0000 osd.5 (osd.5) 9 : cluster [DBG] 165.1 scrub starts 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.606222+0000 osd.5 (osd.5) 10 : cluster [DBG] 165.1 scrub ok 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.606222+0000 osd.5 (osd.5) 10 : cluster [DBG] 165.1 scrub ok 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.691645+0000 mgr.y (mgr.14520) 168 : cluster [DBG] pgmap v143: 460 pgs: 13 creating+peering, 19 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 48 KiB/s rd, 0 B/s wr, 109 op/s 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.691645+0000 mgr.y (mgr.14520) 168 : cluster [DBG] pgmap v143: 460 pgs: 13 creating+peering, 19 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 48 KiB/s rd, 0 B/s wr, 109 op/s 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.974448+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:30.974448+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:30.974746+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:30.974746+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:30.994520+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:30.994520+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:31.001596+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 
192.168.123.101:0/2983801883' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:31.001596+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.101:0/2983801883' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:31.002522+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:31.002522+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:31.005758+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.101:0/1358056434' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:31.005758+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.101:0/1358056434' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:31.006099+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: audit 2026-03-09T15:57:31.006099+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:31.087899+0000 osd.1 (osd.1) 9 : cluster [DBG] 165.4 scrub starts 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:31.087899+0000 osd.1 (osd.1) 9 : cluster [DBG] 165.4 scrub starts 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:31.089856+0000 osd.1 (osd.1) 10 : cluster [DBG] 165.4 scrub ok 2026-03-09T15:57:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:31 vm09 bash[22983]: cluster 2026-03-09T15:57:31.089856+0000 osd.1 (osd.1) 10 : cluster [DBG] 165.4 scrub ok 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.249965+0000 osd.2 (osd.2) 7 : cluster [DBG] 165.5 scrub starts 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.249965+0000 osd.2 (osd.2) 7 : cluster [DBG] 165.5 scrub starts 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.349805+0000 osd.1 (osd.1) 7 : cluster [DBG] 165.3 scrub starts 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.349805+0000 osd.1 (osd.1) 7 : cluster [DBG] 165.3 scrub starts 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.391218+0000 osd.2 (osd.2) 8 : cluster [DBG] 165.5 scrub ok 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.391218+0000 osd.2 (osd.2) 8 : cluster [DBG] 165.5 scrub ok 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.393368+0000 osd.1 (osd.1) 8 : cluster [DBG] 165.3 scrub ok 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.393368+0000 osd.1 (osd.1) 8 : cluster [DBG] 165.3 scrub ok 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:30.491236+0000 mon.a (mon.0) 1415 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:30.491236+0000 mon.a (mon.0) 1415 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.604313+0000 osd.5 (osd.5) 9 : cluster [DBG] 165.1 scrub starts 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.604313+0000 osd.5 (osd.5) 9 : cluster [DBG] 165.1 scrub starts 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.606222+0000 osd.5 (osd.5) 10 : cluster [DBG] 165.1 scrub ok 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.606222+0000 osd.5 (osd.5) 10 : cluster [DBG] 165.1 scrub ok 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.691645+0000 mgr.y (mgr.14520) 168 : cluster [DBG] pgmap v143: 460 pgs: 13 creating+peering, 19 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 48 KiB/s rd, 0 B/s wr, 109 op/s 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.691645+0000 mgr.y (mgr.14520) 168 : cluster [DBG] pgmap v143: 460 pgs: 13 creating+peering, 19 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 48 KiB/s rd, 0 B/s wr, 109 op/s 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.974448+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:30.974448+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:30.974746+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:30.974746+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:30.994520+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:30.994520+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:31.001596+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 
192.168.123.101:0/2983801883' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:31.001596+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.101:0/2983801883' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:31.002522+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:31.002522+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:31.005758+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.101:0/1358056434' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:31.005758+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.101:0/1358056434' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:31.006099+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: audit 2026-03-09T15:57:31.006099+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:31.087899+0000 osd.1 (osd.1) 9 : cluster [DBG] 165.4 scrub starts 2026-03-09T15:57:31.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:31.087899+0000 osd.1 (osd.1) 9 : cluster [DBG] 165.4 scrub starts 2026-03-09T15:57:31.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:31.089856+0000 osd.1 (osd.1) 10 : cluster [DBG] 165.4 scrub ok 2026-03-09T15:57:31.930 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:31 vm01 bash[28152]: cluster 2026-03-09T15:57:31.089856+0000 osd.1 (osd.1) 10 : cluster [DBG] 165.4 scrub ok 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.249965+0000 osd.2 (osd.2) 7 : cluster [DBG] 165.5 scrub starts 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.249965+0000 osd.2 (osd.2) 7 : cluster [DBG] 165.5 scrub starts 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.349805+0000 osd.1 (osd.1) 7 : cluster [DBG] 165.3 scrub starts 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.349805+0000 osd.1 (osd.1) 7 : cluster [DBG] 165.3 scrub starts 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.391218+0000 osd.2 (osd.2) 8 : cluster [DBG] 165.5 scrub ok 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.391218+0000 osd.2 (osd.2) 8 : cluster [DBG] 165.5 scrub ok 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.393368+0000 osd.1 (osd.1) 8 : cluster [DBG] 165.3 scrub ok 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.393368+0000 osd.1 (osd.1) 8 : cluster [DBG] 165.3 scrub ok 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:30.491236+0000 mon.a (mon.0) 1415 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:30.491236+0000 mon.a (mon.0) 1415 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.604313+0000 osd.5 (osd.5) 9 : cluster [DBG] 165.1 scrub starts 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.604313+0000 osd.5 (osd.5) 9 : cluster [DBG] 165.1 scrub starts 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.606222+0000 osd.5 (osd.5) 10 : cluster [DBG] 165.1 scrub ok 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.606222+0000 osd.5 (osd.5) 10 : cluster [DBG] 165.1 scrub ok 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.691645+0000 mgr.y (mgr.14520) 168 : cluster [DBG] pgmap v143: 460 pgs: 13 creating+peering, 19 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 48 KiB/s rd, 0 B/s wr, 109 op/s 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.691645+0000 mgr.y (mgr.14520) 168 : cluster [DBG] pgmap v143: 460 pgs: 13 creating+peering, 19 unknown, 428 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 48 KiB/s rd, 0 B/s wr, 109 op/s 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.974448+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:30.974448+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:30.974746+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:30.974746+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:30.994520+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:30.994520+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:31.001596+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 
192.168.123.101:0/2983801883' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:31.001596+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.101:0/2983801883' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:31.002522+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:31.002522+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:31.005758+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.101:0/1358056434' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:31.005758+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.101:0/1358056434' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:31.006099+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: audit 2026-03-09T15:57:31.006099+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:31.087899+0000 osd.1 (osd.1) 9 : cluster [DBG] 165.4 scrub starts 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:31.087899+0000 osd.1 (osd.1) 9 : cluster [DBG] 165.4 scrub starts 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:31.089856+0000 osd.1 (osd.1) 10 : cluster [DBG] 165.4 scrub ok 2026-03-09T15:57:31.936 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:31 vm01 bash[20728]: cluster 2026-03-09T15:57:31.089856+0000 osd.1 (osd.1) 10 : cluster [DBG] 165.4 scrub ok 2026-03-09T15:57:32.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: cluster 2026-03-09T15:57:31.074478+0000 osd.2 (osd.2) 9 : cluster [DBG] 165.2 scrub starts 2026-03-09T15:57:32.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: cluster 2026-03-09T15:57:31.074478+0000 osd.2 (osd.2) 9 : cluster [DBG] 165.2 scrub starts 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: cluster 2026-03-09T15:57:31.075518+0000 osd.2 (osd.2) 10 : cluster [DBG] 165.2 scrub ok 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: cluster 2026-03-09T15:57:31.075518+0000 osd.2 (osd.2) 10 : cluster [DBG] 165.2 scrub ok 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.431539+0000 mon.a (mon.0) 1420 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.431539+0000 mon.a (mon.0) 1420 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.433383+0000 mon.a (mon.0) 1421 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.433383+0000 mon.a (mon.0) 1421 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.458663+0000 mon.a (mon.0) 1422 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.458663+0000 mon.a (mon.0) 1422 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.492045+0000 mon.a (mon.0) 1423 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.492045+0000 mon.a (mon.0) 1423 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.931248+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.931248+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.931297+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.931297+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.931322+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.931322+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.947262+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:31.947262+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 
192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: cluster 2026-03-09T15:57:31.951172+0000 mon.a (mon.0) 1427 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: cluster 2026-03-09T15:57:31.951172+0000 mon.a (mon.0) 1427 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:32.027992+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:32 vm01 bash[20728]: audit 2026-03-09T15:57:32.027992+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:57:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:57:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: cluster 2026-03-09T15:57:31.074478+0000 osd.2 (osd.2) 9 : cluster [DBG] 165.2 scrub starts 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: cluster 2026-03-09T15:57:31.074478+0000 osd.2 (osd.2) 9 : cluster [DBG] 165.2 scrub starts 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: cluster 2026-03-09T15:57:31.075518+0000 osd.2 (osd.2) 10 : cluster [DBG] 165.2 scrub ok 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: cluster 2026-03-09T15:57:31.075518+0000 osd.2 (osd.2) 10 : cluster [DBG] 165.2 scrub ok 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.431539+0000 mon.a (mon.0) 1420 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.431539+0000 mon.a (mon.0) 1420 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.433383+0000 mon.a (mon.0) 1421 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.433383+0000 mon.a (mon.0) 1421 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.458663+0000 mon.a (mon.0) 1422 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:57:32.880 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.458663+0000 mon.a (mon.0) 1422 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.492045+0000 mon.a (mon.0) 1423 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.492045+0000 mon.a (mon.0) 1423 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.931248+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.931248+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.931297+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.931297+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.931322+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.931322+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.947262+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:31.947262+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 
192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: cluster 2026-03-09T15:57:31.951172+0000 mon.a (mon.0) 1427 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: cluster 2026-03-09T15:57:31.951172+0000 mon.a (mon.0) 1427 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:32.027992+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:32.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:32 vm01 bash[28152]: audit 2026-03-09T15:57:32.027992+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: cluster 2026-03-09T15:57:31.074478+0000 osd.2 (osd.2) 9 : cluster [DBG] 165.2 scrub starts 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: cluster 2026-03-09T15:57:31.074478+0000 osd.2 (osd.2) 9 : cluster [DBG] 165.2 scrub starts 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: cluster 2026-03-09T15:57:31.075518+0000 osd.2 (osd.2) 10 : cluster [DBG] 165.2 scrub ok 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: cluster 2026-03-09T15:57:31.075518+0000 osd.2 (osd.2) 10 : cluster [DBG] 165.2 scrub ok 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.431539+0000 mon.a (mon.0) 1420 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.431539+0000 mon.a (mon.0) 1420 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.433383+0000 mon.a (mon.0) 1421 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.433383+0000 mon.a (mon.0) 1421 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.458663+0000 mon.a (mon.0) 1422 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.458663+0000 mon.a (mon.0) 1422 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.492045+0000 mon.a (mon.0) 1423 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.492045+0000 mon.a (mon.0) 1423 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.931248+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.931248+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.931297+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.931297+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm01-59610-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.931322+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.931322+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm01-59602-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.947262+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:31.947262+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 
192.168.123.101:0/1698914398' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: cluster 2026-03-09T15:57:31.951172+0000 mon.a (mon.0) 1427 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: cluster 2026-03-09T15:57:31.951172+0000 mon.a (mon.0) 1427 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:32.027992+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:32 vm09 bash[22983]: audit 2026-03-09T15:57:32.027992+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]: dispatch 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: Running main() from gmock_main.cc 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [==========] Running 13 tests from 4 test suites. 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [----------] Global test environment set-up. 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapList 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapList (2254 ms) 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapRemove 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapRemove (2135 ms) 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.Rollback 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshots.Rollback (3012 ms) 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapGetName 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapGetName (2431 ms) 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots (9832 ms total) 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Snap 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Snap (4126 ms) 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Rollback 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Rollback (4339 ms) 
2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.FutureSnapRollback 2026-03-09T15:57:33.013 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.FutureSnapRollback (5085 ms) 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged (13550 ms total) 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapList 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapList (3128 ms) 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapRemove 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapRemove (2023 ms) 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.Rollback 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.Rollback (2710 ms) 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapGetName 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapGetName (2200 ms) 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC (10061 ms total) 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Snap 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Snap (4133 ms) 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Rollback 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Rollback (4353 ms) 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC (8486 ms total) 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [----------] Global test environment tear-down 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [==========] 13 tests from 4 test suites ran. (58755 ms total) 2026-03-09T15:57:33.014 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots: [ PASSED ] 13 tests. 2026-03-09T15:57:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:33 vm09 bash[22983]: audit 2026-03-09T15:57:32.492610+0000 mon.a (mon.0) 1429 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:33 vm09 bash[22983]: audit 2026-03-09T15:57:32.492610+0000 mon.a (mon.0) 1429 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:33 vm09 bash[22983]: cluster 2026-03-09T15:57:32.693313+0000 mgr.y (mgr.14520) 169 : cluster [DBG] pgmap v146: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 108 op/s 2026-03-09T15:57:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:33 vm09 bash[22983]: cluster 2026-03-09T15:57:32.693313+0000 mgr.y (mgr.14520) 169 : cluster [DBG] pgmap v146: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 108 op/s 2026-03-09T15:57:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:33 vm09 bash[22983]: audit 2026-03-09T15:57:32.935230+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:33 vm09 bash[22983]: audit 2026-03-09T15:57:32.935230+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:33 vm09 bash[22983]: cluster 2026-03-09T15:57:32.938895+0000 mon.a (mon.0) 1431 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T15:57:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:33 vm09 bash[22983]: cluster 2026-03-09T15:57:32.938895+0000 mon.a (mon.0) 1431 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T15:57:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:33 vm09 bash[22983]: audit 2026-03-09T15:57:33.493740+0000 mon.a (mon.0) 1432 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:33 vm09 bash[22983]: audit 2026-03-09T15:57:33.493740+0000 mon.a (mon.0) 1432 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:33 vm01 bash[20728]: audit 2026-03-09T15:57:32.492610+0000 mon.a (mon.0) 1429 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:33 vm01 bash[20728]: audit 2026-03-09T15:57:32.492610+0000 mon.a (mon.0) 1429 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:33 vm01 bash[20728]: cluster 2026-03-09T15:57:32.693313+0000 mgr.y (mgr.14520) 169 : cluster [DBG] pgmap v146: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 108 op/s 2026-03-09T15:57:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:33 vm01 bash[20728]: cluster 2026-03-09T15:57:32.693313+0000 mgr.y (mgr.14520) 169 : cluster [DBG] pgmap v146: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 108 op/s 2026-03-09T15:57:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:33 vm01 bash[20728]: audit 2026-03-09T15:57:32.935230+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:33 vm01 bash[20728]: audit 2026-03-09T15:57:32.935230+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:33 vm01 bash[20728]: cluster 2026-03-09T15:57:32.938895+0000 mon.a (mon.0) 1431 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T15:57:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:33 vm01 bash[20728]: cluster 2026-03-09T15:57:32.938895+0000 mon.a (mon.0) 1431 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T15:57:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:33 vm01 bash[20728]: audit 2026-03-09T15:57:33.493740+0000 mon.a (mon.0) 1432 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:33 vm01 bash[20728]: audit 2026-03-09T15:57:33.493740+0000 mon.a (mon.0) 1432 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:33 vm01 bash[28152]: audit 2026-03-09T15:57:32.492610+0000 mon.a (mon.0) 1429 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:34.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:33 vm01 bash[28152]: audit 2026-03-09T15:57:32.492610+0000 mon.a (mon.0) 1429 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:34.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:33 vm01 bash[28152]: cluster 2026-03-09T15:57:32.693313+0000 mgr.y (mgr.14520) 169 : cluster [DBG] pgmap v146: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 108 op/s 2026-03-09T15:57:34.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:33 vm01 bash[28152]: cluster 2026-03-09T15:57:32.693313+0000 mgr.y (mgr.14520) 169 : cluster [DBG] pgmap v146: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 108 op/s 2026-03-09T15:57:34.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:33 vm01 bash[28152]: audit 2026-03-09T15:57:32.935230+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:34.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:33 vm01 bash[28152]: audit 2026-03-09T15:57:32.935230+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm01-59878-15"}]': finished 2026-03-09T15:57:34.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:33 vm01 bash[28152]: cluster 2026-03-09T15:57:32.938895+0000 mon.a (mon.0) 1431 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T15:57:34.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:33 vm01 bash[28152]: cluster 2026-03-09T15:57:32.938895+0000 mon.a (mon.0) 1431 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T15:57:34.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:33 vm01 bash[28152]: audit 2026-03-09T15:57:33.493740+0000 mon.a (mon.0) 1432 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:34.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:33 vm01 bash[28152]: audit 2026-03-09T15:57:33.493740+0000 mon.a (mon.0) 1432 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: cluster 2026-03-09T15:57:33.778649+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: cluster 2026-03-09T15:57:33.778649+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: cluster 2026-03-09T15:57:33.983443+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: cluster 2026-03-09T15:57:33.983443+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: audit 2026-03-09T15:57:33.989485+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 
192.168.123.101:0/523027436' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: audit 2026-03-09T15:57:33.989485+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.101:0/523027436' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: audit 2026-03-09T15:57:34.000170+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: audit 2026-03-09T15:57:34.000170+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: audit 2026-03-09T15:57:34.021974+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: audit 2026-03-09T15:57:34.021974+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: audit 2026-03-09T15:57:34.031067+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? 192.168.123.101:0/4111510090' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: audit 2026-03-09T15:57:34.031067+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? 192.168.123.101:0/4111510090' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: audit 2026-03-09T15:57:34.494506+0000 mon.a (mon.0) 1438 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:34 vm09 bash[22983]: audit 2026-03-09T15:57:34.494506+0000 mon.a (mon.0) 1438 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: cluster 2026-03-09T15:57:33.778649+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: cluster 2026-03-09T15:57:33.778649+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: cluster 2026-03-09T15:57:33.983443+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: cluster 2026-03-09T15:57:33.983443+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: audit 2026-03-09T15:57:33.989485+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.101:0/523027436' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: audit 2026-03-09T15:57:33.989485+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.101:0/523027436' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: audit 2026-03-09T15:57:34.000170+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: audit 2026-03-09T15:57:34.000170+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: audit 2026-03-09T15:57:34.021974+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: audit 2026-03-09T15:57:34.021974+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: audit 2026-03-09T15:57:34.031067+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? 
192.168.123.101:0/4111510090' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: audit 2026-03-09T15:57:34.031067+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? 192.168.123.101:0/4111510090' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: audit 2026-03-09T15:57:34.494506+0000 mon.a (mon.0) 1438 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:34 vm01 bash[20728]: audit 2026-03-09T15:57:34.494506+0000 mon.a (mon.0) 1438 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: cluster 2026-03-09T15:57:33.778649+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: cluster 2026-03-09T15:57:33.778649+0000 mon.a (mon.0) 1433 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: cluster 2026-03-09T15:57:33.983443+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: cluster 2026-03-09T15:57:33.983443+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: audit 2026-03-09T15:57:33.989485+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.101:0/523027436' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: audit 2026-03-09T15:57:33.989485+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.101:0/523027436' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: audit 2026-03-09T15:57:34.000170+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: audit 2026-03-09T15:57:34.000170+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: audit 2026-03-09T15:57:34.021974+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: audit 2026-03-09T15:57:34.021974+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: audit 2026-03-09T15:57:34.031067+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? 192.168.123.101:0/4111510090' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: audit 2026-03-09T15:57:34.031067+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? 192.168.123.101:0/4111510090' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: audit 2026-03-09T15:57:34.494506+0000 mon.a (mon.0) 1438 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:35.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:34 vm01 bash[28152]: audit 2026-03-09T15:57:34.494506+0000 mon.a (mon.0) 1438 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:36.279 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: cluster 2026-03-09T15:57:34.694152+0000 mgr.y (mgr.14520) 170 : cluster [DBG] pgmap v149: 516 pgs: 73 creating+peering, 44 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 op/s 2026-03-09T15:57:36.279 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: cluster 2026-03-09T15:57:34.694152+0000 mgr.y (mgr.14520) 170 : cluster [DBG] pgmap v149: 516 pgs: 73 creating+peering, 44 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 op/s 2026-03-09T15:57:36.279 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: audit 2026-03-09T15:57:34.952518+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.279 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: audit 2026-03-09T15:57:34.952518+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.279 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: audit 2026-03-09T15:57:34.952590+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.279 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: audit 2026-03-09T15:57:34.952590+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.279 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: audit 2026-03-09T15:57:34.952650+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? 192.168.123.101:0/4111510090' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.279 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: audit 2026-03-09T15:57:34.952650+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? 192.168.123.101:0/4111510090' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.280 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: cluster 2026-03-09T15:57:34.983880+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T15:57:36.280 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: cluster 2026-03-09T15:57:34.983880+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T15:57:36.280 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: audit 2026-03-09T15:57:35.495540+0000 mon.a (mon.0) 1443 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:36.280 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:35 vm09 bash[22983]: audit 2026-03-09T15:57:35.495540+0000 mon.a (mon.0) 1443 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: cluster 2026-03-09T15:57:34.694152+0000 mgr.y (mgr.14520) 170 : cluster [DBG] pgmap v149: 516 pgs: 73 creating+peering, 44 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 op/s 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: cluster 2026-03-09T15:57:34.694152+0000 mgr.y (mgr.14520) 170 : cluster [DBG] pgmap v149: 516 pgs: 73 creating+peering, 44 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 op/s 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: audit 2026-03-09T15:57:34.952518+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: audit 2026-03-09T15:57:34.952518+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: audit 2026-03-09T15:57:34.952590+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: audit 2026-03-09T15:57:34.952590+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: audit 2026-03-09T15:57:34.952650+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? 192.168.123.101:0/4111510090' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: audit 2026-03-09T15:57:34.952650+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? 192.168.123.101:0/4111510090' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: cluster 2026-03-09T15:57:34.983880+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: cluster 2026-03-09T15:57:34.983880+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: audit 2026-03-09T15:57:35.495540+0000 mon.a (mon.0) 1443 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:35 vm01 bash[20728]: audit 2026-03-09T15:57:35.495540+0000 mon.a (mon.0) 1443 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:35 vm01 bash[28152]: cluster 2026-03-09T15:57:34.694152+0000 mgr.y (mgr.14520) 170 : cluster [DBG] pgmap v149: 516 pgs: 73 creating+peering, 44 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 op/s 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:35 vm01 bash[28152]: cluster 2026-03-09T15:57:34.694152+0000 mgr.y (mgr.14520) 170 : cluster [DBG] pgmap v149: 516 pgs: 73 creating+peering, 44 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 10 op/s 2026-03-09T15:57:36.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:35 vm01 bash[28152]: audit 2026-03-09T15:57:34.952518+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:35 vm01 bash[28152]: audit 2026-03-09T15:57:34.952518+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59602-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: audit 2026-03-09T15:57:34.952590+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: audit 2026-03-09T15:57:34.952590+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? 192.168.123.101:0/313206688' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59908-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: audit 2026-03-09T15:57:34.952650+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? 192.168.123.101:0/4111510090' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: audit 2026-03-09T15:57:34.952650+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? 
192.168.123.101:0/4111510090' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm01-59610-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: cluster 2026-03-09T15:57:34.983880+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T15:57:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: cluster 2026-03-09T15:57:34.983880+0000 mon.a (mon.0) 1442 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T15:57:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: audit 2026-03-09T15:57:35.495540+0000 mon.a (mon.0) 1443 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:36.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: audit 2026-03-09T15:57:35.495540+0000 mon.a (mon.0) 1443 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:36.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:57:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:57:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:36 vm09 bash[22983]: cluster 2026-03-09T15:57:35.963723+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T15:57:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:36 vm09 bash[22983]: cluster 2026-03-09T15:57:35.963723+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T15:57:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:36 vm09 bash[22983]: cluster 2026-03-09T15:57:36.013201+0000 osd.3 (osd.3) 11 : cluster [DBG] 165.9 scrub starts 2026-03-09T15:57:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:36 vm09 bash[22983]: cluster 2026-03-09T15:57:36.013201+0000 osd.3 (osd.3) 11 : cluster [DBG] 165.9 scrub starts 2026-03-09T15:57:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:36 vm09 bash[22983]: cluster 2026-03-09T15:57:36.014603+0000 osd.3 (osd.3) 12 : cluster [DBG] 165.9 scrub ok 2026-03-09T15:57:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:36 vm09 bash[22983]: cluster 2026-03-09T15:57:36.014603+0000 osd.3 (osd.3) 12 : cluster [DBG] 165.9 scrub ok 2026-03-09T15:57:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:36 vm09 bash[22983]: audit 2026-03-09T15:57:36.496978+0000 mon.a (mon.0) 1445 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:36 vm09 bash[22983]: audit 2026-03-09T15:57:36.496978+0000 mon.a (mon.0) 1445 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:36 vm01 bash[20728]: cluster 2026-03-09T15:57:35.963723+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T15:57:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:36 vm01 bash[20728]: cluster 2026-03-09T15:57:35.963723+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T15:57:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:36 vm01 bash[20728]: cluster 2026-03-09T15:57:36.013201+0000 osd.3 (osd.3) 11 : cluster [DBG] 165.9 scrub starts 2026-03-09T15:57:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:36 vm01 bash[20728]: cluster 2026-03-09T15:57:36.013201+0000 osd.3 (osd.3) 11 : cluster [DBG] 165.9 scrub starts 2026-03-09T15:57:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:36 vm01 bash[20728]: cluster 2026-03-09T15:57:36.014603+0000 osd.3 (osd.3) 12 : cluster [DBG] 165.9 scrub ok 2026-03-09T15:57:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:36 vm01 bash[20728]: cluster 2026-03-09T15:57:36.014603+0000 osd.3 (osd.3) 12 : cluster [DBG] 165.9 scrub ok 2026-03-09T15:57:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:36 vm01 bash[20728]: audit 2026-03-09T15:57:36.496978+0000 mon.a (mon.0) 1445 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:37.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:36 vm01 bash[20728]: audit 2026-03-09T15:57:36.496978+0000 mon.a (mon.0) 1445 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: cluster 2026-03-09T15:57:35.963723+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T15:57:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: cluster 2026-03-09T15:57:35.963723+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T15:57:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: cluster 2026-03-09T15:57:36.013201+0000 osd.3 (osd.3) 11 : cluster [DBG] 165.9 scrub starts 2026-03-09T15:57:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: cluster 2026-03-09T15:57:36.013201+0000 osd.3 (osd.3) 11 : cluster [DBG] 165.9 scrub starts 2026-03-09T15:57:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: cluster 2026-03-09T15:57:36.014603+0000 osd.3 (osd.3) 12 : cluster [DBG] 165.9 scrub ok 2026-03-09T15:57:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:36 vm01 bash[28152]: cluster 2026-03-09T15:57:36.014603+0000 osd.3 (osd.3) 12 : cluster [DBG] 165.9 scrub ok 2026-03-09T15:57:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:37 vm01 bash[28152]: audit 2026-03-09T15:57:36.496978+0000 mon.a (mon.0) 1445 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:37.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:37 vm01 bash[28152]: audit 2026-03-09T15:57:36.496978+0000 mon.a (mon.0) 1445 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: audit 2026-03-09T15:57:36.282450+0000 mgr.y (mgr.14520) 171 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: audit 2026-03-09T15:57:36.282450+0000 mgr.y (mgr.14520) 171 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: cluster 2026-03-09T15:57:36.694518+0000 mgr.y (mgr.14520) 172 : cluster [DBG] pgmap v152: 452 pgs: 38 creating+peering, 15 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 11 KiB/s rd, 10 op/s 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: cluster 2026-03-09T15:57:36.694518+0000 mgr.y (mgr.14520) 172 : cluster [DBG] pgmap v152: 452 pgs: 38 creating+peering, 15 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 11 KiB/s rd, 10 op/s 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: cluster 2026-03-09T15:57:37.012289+0000 mon.a (mon.0) 1446 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: cluster 2026-03-09T15:57:37.012289+0000 mon.a (mon.0) 1446 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: audit 2026-03-09T15:57:37.030261+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.101:0/3000106707' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: audit 2026-03-09T15:57:37.030261+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.101:0/3000106707' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: audit 2026-03-09T15:57:37.068681+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? 192.168.123.101:0/2954742031' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: audit 2026-03-09T15:57:37.068681+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? 192.168.123.101:0/2954742031' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: audit 2026-03-09T15:57:37.069271+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: audit 2026-03-09T15:57:37.069271+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: cluster 2026-03-09T15:57:37.085546+0000 osd.3 (osd.3) 13 : cluster [DBG] 165.6 deep-scrub starts 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: cluster 2026-03-09T15:57:37.085546+0000 osd.3 (osd.3) 13 : cluster [DBG] 165.6 deep-scrub starts 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: cluster 2026-03-09T15:57:37.113670+0000 osd.3 (osd.3) 14 : cluster [DBG] 165.6 deep-scrub ok 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: cluster 2026-03-09T15:57:37.113670+0000 osd.3 (osd.3) 14 : cluster [DBG] 165.6 deep-scrub ok 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: audit 2026-03-09T15:57:37.497828+0000 mon.a (mon.0) 1449 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:38 vm09 bash[22983]: audit 2026-03-09T15:57:37.497828+0000 mon.a (mon.0) 1449 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: audit 2026-03-09T15:57:36.282450+0000 mgr.y (mgr.14520) 171 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: audit 2026-03-09T15:57:36.282450+0000 mgr.y (mgr.14520) 171 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: cluster 2026-03-09T15:57:36.694518+0000 mgr.y (mgr.14520) 172 : cluster [DBG] pgmap v152: 452 pgs: 38 creating+peering, 15 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 11 KiB/s rd, 10 op/s 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: cluster 2026-03-09T15:57:36.694518+0000 mgr.y (mgr.14520) 172 : cluster [DBG] pgmap v152: 452 pgs: 38 creating+peering, 15 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 11 KiB/s rd, 10 op/s 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: cluster 2026-03-09T15:57:37.012289+0000 mon.a (mon.0) 1446 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: cluster 2026-03-09T15:57:37.012289+0000 mon.a (mon.0) 1446 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T15:57:38.427 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: audit 2026-03-09T15:57:37.030261+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.101:0/3000106707' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: audit 2026-03-09T15:57:37.030261+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.101:0/3000106707' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: audit 2026-03-09T15:57:37.068681+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? 192.168.123.101:0/2954742031' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: audit 2026-03-09T15:57:37.068681+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? 192.168.123.101:0/2954742031' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: audit 2026-03-09T15:57:37.069271+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: audit 2026-03-09T15:57:37.069271+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: cluster 2026-03-09T15:57:37.085546+0000 osd.3 (osd.3) 13 : cluster [DBG] 165.6 deep-scrub starts 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: cluster 2026-03-09T15:57:37.085546+0000 osd.3 (osd.3) 13 : cluster [DBG] 165.6 deep-scrub starts 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: cluster 2026-03-09T15:57:37.113670+0000 osd.3 (osd.3) 14 : cluster [DBG] 165.6 deep-scrub ok 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: cluster 2026-03-09T15:57:37.113670+0000 osd.3 (osd.3) 14 : cluster [DBG] 165.6 deep-scrub ok 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: audit 2026-03-09T15:57:37.497828+0000 mon.a (mon.0) 1449 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:38 vm01 bash[20728]: audit 2026-03-09T15:57:37.497828+0000 mon.a (mon.0) 1449 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: audit 2026-03-09T15:57:36.282450+0000 mgr.y (mgr.14520) 171 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: audit 2026-03-09T15:57:36.282450+0000 mgr.y (mgr.14520) 171 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: cluster 2026-03-09T15:57:36.694518+0000 mgr.y (mgr.14520) 172 : cluster [DBG] pgmap v152: 452 pgs: 38 creating+peering, 15 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 11 KiB/s rd, 10 op/s 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: cluster 2026-03-09T15:57:36.694518+0000 mgr.y (mgr.14520) 172 : cluster [DBG] pgmap v152: 452 pgs: 38 creating+peering, 15 unknown, 399 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 11 KiB/s rd, 10 op/s 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: cluster 2026-03-09T15:57:37.012289+0000 mon.a (mon.0) 1446 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: cluster 2026-03-09T15:57:37.012289+0000 mon.a (mon.0) 1446 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T15:57:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: audit 2026-03-09T15:57:37.030261+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.101:0/3000106707' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: audit 2026-03-09T15:57:37.030261+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.101:0/3000106707' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: audit 2026-03-09T15:57:37.068681+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? 192.168.123.101:0/2954742031' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: audit 2026-03-09T15:57:37.068681+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? 192.168.123.101:0/2954742031' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: audit 2026-03-09T15:57:37.069271+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: audit 2026-03-09T15:57:37.069271+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:38.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: cluster 2026-03-09T15:57:37.085546+0000 osd.3 (osd.3) 13 : cluster [DBG] 165.6 deep-scrub starts 2026-03-09T15:57:38.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: cluster 2026-03-09T15:57:37.085546+0000 osd.3 (osd.3) 13 : cluster [DBG] 165.6 deep-scrub starts 2026-03-09T15:57:38.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: cluster 2026-03-09T15:57:37.113670+0000 osd.3 (osd.3) 14 : cluster [DBG] 165.6 deep-scrub ok 2026-03-09T15:57:38.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: cluster 2026-03-09T15:57:37.113670+0000 osd.3 (osd.3) 14 : cluster [DBG] 165.6 deep-scrub ok 2026-03-09T15:57:38.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: audit 2026-03-09T15:57:37.497828+0000 mon.a (mon.0) 1449 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:38.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:38 vm01 bash[28152]: audit 2026-03-09T15:57:37.497828+0000 mon.a (mon.0) 1449 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [==========] Running 12 tests from 4 test suites. 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [----------] Global test environment set-up. 
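The audit records above show the librados API tests driving the monitors with JSON-formatted commands such as "osd pool application enable". As a rough illustration only (not part of the test suite or this run), the same JSON payload seen in the audit log could be submitted through the Python rados binding's mon_command(); the conffile path is an assumption and the pool name is copied from the audit entries above.

    import json
    import rados

    # Assumption: a reachable cluster and the default conf path; pool name taken
    # from the "osd pool application enable" audit entries in the log above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({
        "prefix": "osd pool application enable",
        "pool": "test-rados-api-vm01-59908-15",
        "app": "rados",
        "yes_i_really_mean_it": True,
    })
    # mon_command returns (return code, output buffer, status string)
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outs)
    cluster.shutdown()

This mirrors what the monitor records as "dispatch" and then "finished" for each such command in the audit log.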
2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMiscVersion.Version 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMiscVersion.Version (0 ms) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion (0 ms total) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectFailure 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: unable to get monitor info from DNS SRV with service name: ceph-mon 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: 2026-03-09T15:56:34.221+0000 7f591d442980 -1 failed for service _ceph-mon._tcp 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: 2026-03-09T15:56:34.221+0000 7f591d442980 -1 monclient: get_monmap_and_config cannot identify monitors to contact 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectFailure (55 ms) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectTimeout 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectTimeout (5008 ms) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure (5063 ms total) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [----------] 1 test from LibRadosMiscPool 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMiscPool.PoolCreationRace 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: started 0x7f58fc067d20 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: started 0x56091dc2d670 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: started 2 aios 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: waiting 0x7f58fc067d20 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: waiting 0x56091dc2d670 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: done. 
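The workunit output interleaved here carries gtest progress lines from the api_misc binary ("[ RUN ]", "[ OK ]", and a final "[ PASSED ] 12 tests." summary) alongside the monitor journals. As a minimal sketch for pulling a per-binary pass/fail count out of a teuthology log of this shape (the "stdout: api_misc:" prefix convention and the log path are assumptions based on this run), something like the following could be used:

    import re
    import sys
    from collections import defaultdict

    # Matches gtest result lines embedded in teuthology workunit output, e.g.
    #   INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMisc.Exec (274 ms)
    PATTERN = re.compile(
        r'stdout:\s+(?P<binary>\S+):\s+\[\s+(?P<status>OK|FAILED)\s+\]\s+(?P<test>\S+)\s+\(\d+ ms\)'
    )

    def summarize(path):
        results = defaultdict(lambda: {"OK": 0, "FAILED": 0})
        with open(path) as fh:
            for line in fh:
                m = PATTERN.search(line)
                if m:
                    results[m.group("binary")][m.group("status")] += 1
        return results

    if __name__ == "__main__":
        for binary, counts in sorted(summarize(sys.argv[1]).items()):
            print(f"{binary}: {counts['OK']} ok, {counts['FAILED']} failed")

For the api_misc run shown here, such a scan would report 12 passing tests and no failures, matching the "[ PASSED ] 12 tests." summary below.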
2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMiscPool.PoolCreationRace (6459 ms) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [----------] 1 test from LibRadosMiscPool (6459 ms total) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [----------] 8 tests from LibRadosMisc 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMisc.ClusterFSID 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMisc.ClusterFSID (0 ms) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMisc.Exec 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMisc.Exec (274 ms) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMisc.WriteSame 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMisc.WriteSame (241 ms) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMisc.CmpExt 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMisc.CmpExt (6 ms) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMisc.Applications 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMisc.Applications (5147 ms) 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatOSD 2026-03-09T15:57:39.098 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatOSD (0 ms) 2026-03-09T15:57:39.099 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatClient 2026-03-09T15:57:39.099 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatClient (0 ms) 2026-03-09T15:57:39.099 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ RUN ] LibRadosMisc.ShutdownRace 2026-03-09T15:57:39.099 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ OK ] LibRadosMisc.ShutdownRace (45103 ms) 2026-03-09T15:57:39.099 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [----------] 8 tests from LibRadosMisc (50771 ms total) 2026-03-09T15:57:39.099 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: 2026-03-09T15:57:39.099 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [----------] Global test environment tear-down 2026-03-09T15:57:39.099 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [==========] 12 tests from 4 test suites ran. (64906 ms total) 2026-03-09T15:57:39.099 INFO:tasks.workunit.client.0.vm01.stdout: api_misc: [ PASSED ] 12 tests. 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: audit 2026-03-09T15:57:38.013186+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 192.168.123.101:0/2954742031' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: audit 2026-03-09T15:57:38.013186+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 
192.168.123.101:0/2954742031' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: audit 2026-03-09T15:57:38.013217+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: audit 2026-03-09T15:57:38.013217+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:38.063318+0000 mon.a (mon.0) 1452 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:38.063318+0000 mon.a (mon.0) 1452 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:38.095308+0000 osd.3 (osd.3) 15 : cluster [DBG] 165.0 scrub starts 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:38.095308+0000 osd.3 (osd.3) 15 : cluster [DBG] 165.0 scrub starts 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:38.144450+0000 osd.3 (osd.3) 16 : cluster [DBG] 165.0 scrub ok 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:38.144450+0000 osd.3 (osd.3) 16 : cluster [DBG] 165.0 scrub ok 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: audit 2026-03-09T15:57:38.498625+0000 mon.a (mon.0) 1453 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: audit 2026-03-09T15:57:38.498625+0000 mon.a (mon.0) 1453 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:38.695159+0000 mgr.y (mgr.14520) 173 : cluster [DBG] pgmap v155: 516 pgs: 29 creating+peering, 70 unknown, 417 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:38.695159+0000 mgr.y (mgr.14520) 173 : cluster [DBG] pgmap v155: 516 pgs: 29 creating+peering, 70 unknown, 417 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:38.779553+0000 mon.a (mon.0) 1454 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:38.779553+0000 mon.a (mon.0) 1454 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:39.059364+0000 mon.a (mon.0) 1455 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T15:57:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:39 vm09 bash[22983]: cluster 2026-03-09T15:57:39.059364+0000 mon.a (mon.0) 1455 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: audit 2026-03-09T15:57:38.013186+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 192.168.123.101:0/2954742031' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: audit 2026-03-09T15:57:38.013186+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 192.168.123.101:0/2954742031' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: audit 2026-03-09T15:57:38.013217+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: audit 2026-03-09T15:57:38.013217+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:38.063318+0000 mon.a (mon.0) 1452 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:38.063318+0000 mon.a (mon.0) 1452 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:38.095308+0000 osd.3 (osd.3) 15 : cluster [DBG] 165.0 scrub starts 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:38.095308+0000 osd.3 (osd.3) 15 : cluster [DBG] 165.0 scrub starts 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:38.144450+0000 osd.3 (osd.3) 16 : cluster [DBG] 165.0 scrub ok 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:38.144450+0000 osd.3 (osd.3) 16 : cluster [DBG] 165.0 scrub ok 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: audit 2026-03-09T15:57:38.498625+0000 mon.a (mon.0) 1453 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: audit 2026-03-09T15:57:38.498625+0000 mon.a (mon.0) 1453 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:38.695159+0000 mgr.y (mgr.14520) 173 : cluster [DBG] pgmap v155: 516 pgs: 29 creating+peering, 70 unknown, 417 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:38.695159+0000 mgr.y (mgr.14520) 173 : cluster [DBG] pgmap v155: 516 pgs: 29 creating+peering, 70 unknown, 417 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:38.779553+0000 mon.a (mon.0) 1454 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:38.779553+0000 mon.a (mon.0) 1454 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:39.059364+0000 mon.a (mon.0) 1455 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:39 vm01 bash[28152]: cluster 2026-03-09T15:57:39.059364+0000 mon.a (mon.0) 1455 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: audit 2026-03-09T15:57:38.013186+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 192.168.123.101:0/2954742031' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: audit 2026-03-09T15:57:38.013186+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 192.168.123.101:0/2954742031' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm01-59602-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: audit 2026-03-09T15:57:38.013217+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: audit 2026-03-09T15:57:38.013217+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:38.063318+0000 mon.a (mon.0) 1452 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:38.063318+0000 mon.a (mon.0) 1452 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:38.095308+0000 osd.3 (osd.3) 15 : cluster [DBG] 165.0 scrub starts 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:38.095308+0000 osd.3 (osd.3) 15 : cluster [DBG] 165.0 scrub starts 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:38.144450+0000 osd.3 (osd.3) 16 : cluster [DBG] 165.0 scrub ok 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:38.144450+0000 osd.3 (osd.3) 16 : cluster [DBG] 165.0 scrub ok 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: audit 2026-03-09T15:57:38.498625+0000 mon.a (mon.0) 1453 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: audit 2026-03-09T15:57:38.498625+0000 mon.a (mon.0) 1453 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:38.695159+0000 mgr.y (mgr.14520) 173 : cluster [DBG] pgmap v155: 516 pgs: 29 creating+peering, 70 unknown, 417 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:38.695159+0000 mgr.y (mgr.14520) 173 : cluster [DBG] pgmap v155: 516 pgs: 29 creating+peering, 70 unknown, 417 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:38.779553+0000 mon.a (mon.0) 1454 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:38.779553+0000 mon.a (mon.0) 1454 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:39.059364+0000 mon.a (mon.0) 1455 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T15:57:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:39 vm01 bash[20728]: cluster 2026-03-09T15:57:39.059364+0000 mon.a (mon.0) 1455 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.130445+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.130445+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.130787+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.130787+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.131407+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.131407+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 
192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.131694+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.131694+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.132289+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.132289+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.132583+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.132583+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.499630+0000 mon.a (mon.0) 1459 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:39.499630+0000 mon.a (mon.0) 1459 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.047403+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.047403+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: cluster 2026-03-09T15:57:40.053982+0000 mon.a (mon.0) 1461 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: cluster 2026-03-09T15:57:40.053982+0000 mon.a (mon.0) 1461 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.062273+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.101:0/2199556236' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.062273+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.101:0/2199556236' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.062666+0000 mon.a (mon.0) 1462 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.062666+0000 mon.a (mon.0) 1462 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.064145+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.064145+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.064765+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.064765+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.066108+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:40 vm09 bash[22983]: audit 2026-03-09T15:57:40.066108+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.130445+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.130445+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.130787+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.130787+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.131407+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.131407+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 
192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.131694+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.131694+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.132289+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.132289+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.132583+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.132583+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.499630+0000 mon.a (mon.0) 1459 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:39.499630+0000 mon.a (mon.0) 1459 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.047403+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.047403+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: cluster 2026-03-09T15:57:40.053982+0000 mon.a (mon.0) 1461 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: cluster 2026-03-09T15:57:40.053982+0000 mon.a (mon.0) 1461 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.062273+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.101:0/2199556236' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.062273+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.101:0/2199556236' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.062666+0000 mon.a (mon.0) 1462 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.062666+0000 mon.a (mon.0) 1462 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.064145+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.064145+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.064765+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.064765+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.066108+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:40 vm01 bash[28152]: audit 2026-03-09T15:57:40.066108+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.130445+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.130445+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.130787+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.130787+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.131407+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.131407+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 
192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.131694+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.131694+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.132289+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.132289+0000 mon.c (mon.2) 133 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.132583+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.132583+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.499630+0000 mon.a (mon.0) 1459 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:39.499630+0000 mon.a (mon.0) 1459 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.047403+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.047403+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: cluster 2026-03-09T15:57:40.053982+0000 mon.a (mon.0) 1461 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: cluster 2026-03-09T15:57:40.053982+0000 mon.a (mon.0) 1461 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.062273+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.101:0/2199556236' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.062273+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.101:0/2199556236' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.062666+0000 mon.a (mon.0) 1462 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.062666+0000 mon.a (mon.0) 1462 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.064145+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.064145+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.064765+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.064765+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.066108+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:40.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:40 vm01 bash[20728]: audit 2026-03-09T15:57:40.066108+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:41 vm09 bash[22983]: audit 2026-03-09T15:57:40.500471+0000 mon.a (mon.0) 1465 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:41 vm09 bash[22983]: audit 2026-03-09T15:57:40.500471+0000 mon.a (mon.0) 1465 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:41 vm09 bash[22983]: cluster 2026-03-09T15:57:40.695508+0000 mgr.y (mgr.14520) 174 : cluster [DBG] pgmap v158: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:41 vm09 bash[22983]: cluster 2026-03-09T15:57:40.695508+0000 mgr.y (mgr.14520) 174 : cluster [DBG] pgmap v158: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:41 vm09 bash[22983]: audit 2026-03-09T15:57:41.051604+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:41 vm09 bash[22983]: audit 2026-03-09T15:57:41.051604+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:41 vm09 bash[22983]: audit 2026-03-09T15:57:41.051687+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:41 vm09 bash[22983]: audit 2026-03-09T15:57:41.051687+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:41 vm09 bash[22983]: cluster 2026-03-09T15:57:41.074808+0000 mon.a (mon.0) 1468 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T15:57:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:41 vm09 bash[22983]: cluster 2026-03-09T15:57:41.074808+0000 mon.a (mon.0) 1468 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:41 vm01 bash[28152]: audit 2026-03-09T15:57:40.500471+0000 mon.a (mon.0) 1465 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:41 vm01 bash[28152]: audit 2026-03-09T15:57:40.500471+0000 mon.a (mon.0) 1465 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:41 vm01 bash[28152]: cluster 2026-03-09T15:57:40.695508+0000 mgr.y (mgr.14520) 174 : cluster [DBG] pgmap v158: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:41 vm01 bash[28152]: cluster 2026-03-09T15:57:40.695508+0000 mgr.y (mgr.14520) 174 : cluster [DBG] pgmap v158: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:41 vm01 bash[28152]: audit 2026-03-09T15:57:41.051604+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:41 vm01 bash[28152]: audit 2026-03-09T15:57:41.051604+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:41 vm01 bash[28152]: audit 2026-03-09T15:57:41.051687+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:41 vm01 bash[28152]: audit 2026-03-09T15:57:41.051687+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:41 vm01 bash[28152]: cluster 2026-03-09T15:57:41.074808+0000 mon.a (mon.0) 1468 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:41 vm01 bash[28152]: cluster 2026-03-09T15:57:41.074808+0000 mon.a (mon.0) 1468 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:41 vm01 bash[20728]: audit 2026-03-09T15:57:40.500471+0000 mon.a (mon.0) 1465 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:41 vm01 bash[20728]: audit 2026-03-09T15:57:40.500471+0000 mon.a (mon.0) 1465 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:41 vm01 bash[20728]: cluster 2026-03-09T15:57:40.695508+0000 mgr.y (mgr.14520) 174 : cluster [DBG] pgmap v158: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:41 vm01 bash[20728]: cluster 2026-03-09T15:57:40.695508+0000 mgr.y (mgr.14520) 174 : cluster [DBG] pgmap v158: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:41 vm01 bash[20728]: audit 2026-03-09T15:57:41.051604+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:41 vm01 bash[20728]: audit 2026-03-09T15:57:41.051604+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? 192.168.123.101:0/558971853' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm01-59602-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:41 vm01 bash[20728]: audit 2026-03-09T15:57:41.051687+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:41 vm01 bash[20728]: audit 2026-03-09T15:57:41.051687+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm01-59610-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:41 vm01 bash[20728]: cluster 2026-03-09T15:57:41.074808+0000 mon.a (mon.0) 1468 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T15:57:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:41 vm01 bash[20728]: cluster 2026-03-09T15:57:41.074808+0000 mon.a (mon.0) 1468 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T15:57:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:42 vm09 bash[22983]: audit 2026-03-09T15:57:41.501185+0000 mon.a (mon.0) 1469 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:42 vm09 bash[22983]: audit 2026-03-09T15:57:41.501185+0000 mon.a (mon.0) 1469 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:42 vm09 bash[22983]: audit 2026-03-09T15:57:42.053965+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:42 vm09 bash[22983]: audit 2026-03-09T15:57:42.053965+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:42 vm09 bash[22983]: cluster 2026-03-09T15:57:42.058353+0000 mon.a (mon.0) 1471 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T15:57:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:42 vm09 bash[22983]: cluster 2026-03-09T15:57:42.058353+0000 mon.a (mon.0) 1471 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:42 vm01 bash[28152]: audit 2026-03-09T15:57:41.501185+0000 mon.a (mon.0) 1469 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:42 vm01 bash[28152]: audit 2026-03-09T15:57:41.501185+0000 mon.a (mon.0) 1469 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:42 vm01 bash[28152]: audit 2026-03-09T15:57:42.053965+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:42 vm01 bash[28152]: audit 2026-03-09T15:57:42.053965+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:42 vm01 bash[28152]: cluster 2026-03-09T15:57:42.058353+0000 mon.a (mon.0) 1471 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:42 vm01 bash[28152]: cluster 2026-03-09T15:57:42.058353+0000 mon.a (mon.0) 1471 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:42 vm01 bash[20728]: audit 2026-03-09T15:57:41.501185+0000 mon.a (mon.0) 1469 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:42 vm01 bash[20728]: audit 2026-03-09T15:57:41.501185+0000 mon.a (mon.0) 1469 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:42 vm01 bash[20728]: audit 2026-03-09T15:57:42.053965+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:42 vm01 bash[20728]: audit 2026-03-09T15:57:42.053965+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm01-59908-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:42 vm01 bash[20728]: cluster 2026-03-09T15:57:42.058353+0000 mon.a (mon.0) 1471 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T15:57:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:42 vm01 bash[20728]: cluster 2026-03-09T15:57:42.058353+0000 mon.a (mon.0) 1471 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T15:57:43.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:57:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:57:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: audit 2026-03-09T15:57:42.501967+0000 mon.a (mon.0) 1472 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: audit 2026-03-09T15:57:42.501967+0000 mon.a (mon.0) 1472 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: cluster 2026-03-09T15:57:42.695829+0000 mgr.y (mgr.14520) 175 : cluster [DBG] pgmap v161: 396 pgs: 8 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: cluster 2026-03-09T15:57:42.695829+0000 mgr.y (mgr.14520) 175 : cluster [DBG] pgmap v161: 396 pgs: 8 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: cluster 2026-03-09T15:57:43.089430+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: cluster 2026-03-09T15:57:43.089430+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: audit 2026-03-09T15:57:43.102973+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.101:0/3281022607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: audit 2026-03-09T15:57:43.102973+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.101:0/3281022607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: audit 2026-03-09T15:57:43.103083+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.101:0/1046104412' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: audit 2026-03-09T15:57:43.103083+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.101:0/1046104412' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: audit 2026-03-09T15:57:43.104677+0000 mon.a (mon.0) 1474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: audit 2026-03-09T15:57:43.104677+0000 mon.a (mon.0) 1474 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: audit 2026-03-09T15:57:43.108773+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:43 vm09 bash[22983]: audit 2026-03-09T15:57:43.108773+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: audit 2026-03-09T15:57:42.501967+0000 mon.a (mon.0) 1472 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: audit 2026-03-09T15:57:42.501967+0000 mon.a (mon.0) 1472 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: cluster 2026-03-09T15:57:42.695829+0000 mgr.y (mgr.14520) 175 : cluster [DBG] pgmap v161: 396 pgs: 8 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: cluster 2026-03-09T15:57:42.695829+0000 mgr.y (mgr.14520) 175 : cluster [DBG] pgmap v161: 396 pgs: 8 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: cluster 2026-03-09T15:57:43.089430+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: cluster 2026-03-09T15:57:43.089430+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: audit 2026-03-09T15:57:43.102973+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.101:0/3281022607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: audit 2026-03-09T15:57:43.102973+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.101:0/3281022607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: audit 2026-03-09T15:57:43.103083+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 
192.168.123.101:0/1046104412' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: audit 2026-03-09T15:57:43.103083+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.101:0/1046104412' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: audit 2026-03-09T15:57:43.104677+0000 mon.a (mon.0) 1474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: audit 2026-03-09T15:57:43.104677+0000 mon.a (mon.0) 1474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: audit 2026-03-09T15:57:43.108773+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:43 vm01 bash[28152]: audit 2026-03-09T15:57:43.108773+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: audit 2026-03-09T15:57:42.501967+0000 mon.a (mon.0) 1472 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: audit 2026-03-09T15:57:42.501967+0000 mon.a (mon.0) 1472 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: cluster 2026-03-09T15:57:42.695829+0000 mgr.y (mgr.14520) 175 : cluster [DBG] pgmap v161: 396 pgs: 8 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: cluster 2026-03-09T15:57:42.695829+0000 mgr.y (mgr.14520) 175 : cluster [DBG] pgmap v161: 396 pgs: 8 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: cluster 2026-03-09T15:57:43.089430+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: cluster 2026-03-09T15:57:43.089430+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: audit 2026-03-09T15:57:43.102973+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.101:0/3281022607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: audit 2026-03-09T15:57:43.102973+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.101:0/3281022607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: audit 2026-03-09T15:57:43.103083+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.101:0/1046104412' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: audit 2026-03-09T15:57:43.103083+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.101:0/1046104412' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: audit 2026-03-09T15:57:43.104677+0000 mon.a (mon.0) 1474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: audit 2026-03-09T15:57:43.104677+0000 mon.a (mon.0) 1474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: audit 2026-03-09T15:57:43.108773+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:43 vm01 bash[20728]: audit 2026-03-09T15:57:43.108773+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: audit 2026-03-09T15:57:43.502721+0000 mon.a (mon.0) 1476 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: audit 2026-03-09T15:57:43.502721+0000 mon.a (mon.0) 1476 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: cluster 2026-03-09T15:57:43.780140+0000 mon.a (mon.0) 1477 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: cluster 2026-03-09T15:57:43.780140+0000 mon.a (mon.0) 1477 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: audit 2026-03-09T15:57:43.784567+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: audit 2026-03-09T15:57:43.784567+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: audit 2026-03-09T15:57:43.784700+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: audit 2026-03-09T15:57:43.784700+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: cluster 2026-03-09T15:57:43.803759+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: cluster 2026-03-09T15:57:43.803759+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: audit 2026-03-09T15:57:44.074127+0000 mon.a (mon.0) 1481 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:44 vm09 bash[22983]: audit 2026-03-09T15:57:44.074127+0000 mon.a (mon.0) 1481 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: audit 2026-03-09T15:57:43.502721+0000 mon.a (mon.0) 1476 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: audit 2026-03-09T15:57:43.502721+0000 mon.a (mon.0) 1476 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: cluster 2026-03-09T15:57:43.780140+0000 mon.a (mon.0) 1477 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: cluster 2026-03-09T15:57:43.780140+0000 mon.a (mon.0) 1477 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: audit 2026-03-09T15:57:43.784567+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: audit 2026-03-09T15:57:43.784567+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: audit 2026-03-09T15:57:43.784700+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: audit 2026-03-09T15:57:43.784700+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: cluster 2026-03-09T15:57:43.803759+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: cluster 2026-03-09T15:57:43.803759+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: audit 2026-03-09T15:57:44.074127+0000 mon.a (mon.0) 1481 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:44 vm01 bash[28152]: audit 2026-03-09T15:57:44.074127+0000 mon.a (mon.0) 1481 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: audit 2026-03-09T15:57:43.502721+0000 mon.a (mon.0) 1476 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: audit 2026-03-09T15:57:43.502721+0000 mon.a (mon.0) 1476 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: cluster 2026-03-09T15:57:43.780140+0000 mon.a (mon.0) 1477 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: cluster 2026-03-09T15:57:43.780140+0000 mon.a (mon.0) 1477 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: audit 2026-03-09T15:57:43.784567+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: audit 2026-03-09T15:57:43.784567+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm01-59602-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: audit 2026-03-09T15:57:43.784700+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: audit 2026-03-09T15:57:43.784700+0000 mon.a (mon.0) 1479 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm01-59610-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: cluster 2026-03-09T15:57:43.803759+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: cluster 2026-03-09T15:57:43.803759+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: audit 2026-03-09T15:57:44.074127+0000 mon.a (mon.0) 1481 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:44 vm01 bash[20728]: audit 2026-03-09T15:57:44.074127+0000 mon.a (mon.0) 1481 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:45 vm09 bash[22983]: audit 2026-03-09T15:57:44.503421+0000 mon.a (mon.0) 1482 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:45 vm09 bash[22983]: audit 2026-03-09T15:57:44.503421+0000 mon.a (mon.0) 1482 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:45 vm09 bash[22983]: cluster 2026-03-09T15:57:44.696688+0000 mgr.y (mgr.14520) 176 : cluster [DBG] pgmap v164: 460 pgs: 34 creating+peering, 426 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:45 vm09 bash[22983]: cluster 2026-03-09T15:57:44.696688+0000 mgr.y (mgr.14520) 176 : cluster [DBG] pgmap v164: 460 pgs: 34 creating+peering, 426 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:45 vm09 bash[22983]: cluster 2026-03-09T15:57:44.790715+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T15:57:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:45 vm09 bash[22983]: cluster 2026-03-09T15:57:44.790715+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:45 vm01 bash[28152]: audit 2026-03-09T15:57:44.503421+0000 mon.a (mon.0) 1482 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:45 vm01 bash[28152]: audit 2026-03-09T15:57:44.503421+0000 mon.a (mon.0) 1482 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:45 vm01 bash[28152]: cluster 2026-03-09T15:57:44.696688+0000 mgr.y (mgr.14520) 176 : cluster [DBG] pgmap v164: 460 pgs: 34 creating+peering, 426 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:45 vm01 bash[28152]: cluster 2026-03-09T15:57:44.696688+0000 mgr.y (mgr.14520) 176 : cluster [DBG] pgmap v164: 460 pgs: 34 creating+peering, 426 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:45 vm01 bash[28152]: cluster 2026-03-09T15:57:44.790715+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:45 vm01 bash[28152]: cluster 2026-03-09T15:57:44.790715+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:45 vm01 bash[20728]: audit 2026-03-09T15:57:44.503421+0000 mon.a (mon.0) 1482 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:45 vm01 bash[20728]: audit 2026-03-09T15:57:44.503421+0000 mon.a (mon.0) 1482 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:45 vm01 bash[20728]: cluster 2026-03-09T15:57:44.696688+0000 mgr.y (mgr.14520) 176 : cluster [DBG] pgmap v164: 460 pgs: 34 creating+peering, 426 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:45 vm01 bash[20728]: cluster 2026-03-09T15:57:44.696688+0000 mgr.y (mgr.14520) 176 : cluster [DBG] pgmap v164: 460 pgs: 34 creating+peering, 426 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:45 vm01 bash[20728]: cluster 2026-03-09T15:57:44.790715+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T15:57:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:45 vm01 bash[20728]: cluster 2026-03-09T15:57:44.790715+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T15:57:46.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:57:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: audit 2026-03-09T15:57:45.504112+0000 mon.a (mon.0) 1484 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: audit 2026-03-09T15:57:45.504112+0000 mon.a (mon.0) 1484 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: cluster 2026-03-09T15:57:45.901626+0000 mon.a (mon.0) 1485 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: cluster 2026-03-09T15:57:45.901626+0000 mon.a (mon.0) 1485 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: audit 2026-03-09T15:57:45.925767+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.101:0/146892094' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: audit 2026-03-09T15:57:45.925767+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.101:0/146892094' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: audit 2026-03-09T15:57:45.926107+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.101:0/1922239934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: audit 2026-03-09T15:57:45.926107+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.101:0/1922239934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: audit 2026-03-09T15:57:45.929393+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: audit 2026-03-09T15:57:45.929393+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: audit 2026-03-09T15:57:45.929623+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:46 vm09 bash[22983]: audit 2026-03-09T15:57:45.929623+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: audit 2026-03-09T15:57:45.504112+0000 mon.a (mon.0) 1484 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: audit 2026-03-09T15:57:45.504112+0000 mon.a (mon.0) 1484 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: cluster 2026-03-09T15:57:45.901626+0000 mon.a (mon.0) 1485 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: cluster 2026-03-09T15:57:45.901626+0000 mon.a (mon.0) 1485 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: audit 2026-03-09T15:57:45.925767+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.101:0/146892094' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: audit 2026-03-09T15:57:45.925767+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.101:0/146892094' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: audit 2026-03-09T15:57:45.926107+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.101:0/1922239934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: audit 2026-03-09T15:57:45.926107+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.101:0/1922239934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: audit 2026-03-09T15:57:45.929393+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: audit 2026-03-09T15:57:45.929393+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: audit 2026-03-09T15:57:45.929623+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:46 vm01 bash[28152]: audit 2026-03-09T15:57:45.929623+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: audit 2026-03-09T15:57:45.504112+0000 mon.a (mon.0) 1484 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: audit 2026-03-09T15:57:45.504112+0000 mon.a (mon.0) 1484 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: cluster 2026-03-09T15:57:45.901626+0000 mon.a (mon.0) 1485 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: cluster 2026-03-09T15:57:45.901626+0000 mon.a (mon.0) 1485 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: audit 2026-03-09T15:57:45.925767+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.101:0/146892094' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: audit 2026-03-09T15:57:45.925767+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.101:0/146892094' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: audit 2026-03-09T15:57:45.926107+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.101:0/1922239934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: audit 2026-03-09T15:57:45.926107+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.101:0/1922239934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: audit 2026-03-09T15:57:45.929393+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: audit 2026-03-09T15:57:45.929393+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: audit 2026-03-09T15:57:45.929623+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:46 vm01 bash[20728]: audit 2026-03-09T15:57:45.929623+0000 mon.a (mon.0) 1487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: audit 2026-03-09T15:57:46.290463+0000 mgr.y (mgr.14520) 177 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: audit 2026-03-09T15:57:46.290463+0000 mgr.y (mgr.14520) 177 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: audit 2026-03-09T15:57:46.504859+0000 mon.a (mon.0) 1488 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: audit 2026-03-09T15:57:46.504859+0000 mon.a (mon.0) 1488 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: cluster 2026-03-09T15:57:46.697070+0000 mgr.y (mgr.14520) 178 : cluster [DBG] pgmap v167: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: cluster 2026-03-09T15:57:46.697070+0000 mgr.y (mgr.14520) 178 : cluster [DBG] pgmap v167: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: audit 2026-03-09T15:57:46.892521+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: audit 2026-03-09T15:57:46.892521+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: audit 2026-03-09T15:57:46.892585+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: audit 2026-03-09T15:57:46.892585+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: cluster 2026-03-09T15:57:46.895838+0000 mon.a (mon.0) 1491 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T15:57:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:47 vm09 bash[22983]: cluster 2026-03-09T15:57:46.895838+0000 mon.a (mon.0) 1491 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: audit 2026-03-09T15:57:46.290463+0000 mgr.y (mgr.14520) 177 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: audit 2026-03-09T15:57:46.290463+0000 mgr.y (mgr.14520) 177 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: audit 2026-03-09T15:57:46.504859+0000 mon.a (mon.0) 1488 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: audit 2026-03-09T15:57:46.504859+0000 mon.a (mon.0) 1488 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: cluster 2026-03-09T15:57:46.697070+0000 mgr.y (mgr.14520) 178 : cluster [DBG] pgmap v167: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: cluster 2026-03-09T15:57:46.697070+0000 mgr.y (mgr.14520) 178 : cluster [DBG] pgmap v167: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: audit 2026-03-09T15:57:46.892521+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: audit 2026-03-09T15:57:46.892521+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: audit 2026-03-09T15:57:46.892585+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: audit 2026-03-09T15:57:46.892585+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: cluster 2026-03-09T15:57:46.895838+0000 mon.a (mon.0) 1491 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:47 vm01 bash[28152]: cluster 2026-03-09T15:57:46.895838+0000 mon.a (mon.0) 1491 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: audit 2026-03-09T15:57:46.290463+0000 mgr.y (mgr.14520) 177 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: audit 2026-03-09T15:57:46.290463+0000 mgr.y (mgr.14520) 177 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: audit 2026-03-09T15:57:46.504859+0000 mon.a (mon.0) 1488 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: audit 2026-03-09T15:57:46.504859+0000 mon.a (mon.0) 1488 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: cluster 2026-03-09T15:57:46.697070+0000 mgr.y (mgr.14520) 178 : cluster [DBG] pgmap v167: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: cluster 2026-03-09T15:57:46.697070+0000 mgr.y (mgr.14520) 178 : cluster [DBG] pgmap v167: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: audit 2026-03-09T15:57:46.892521+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: audit 2026-03-09T15:57:46.892521+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm01-59610-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: audit 2026-03-09T15:57:46.892585+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: audit 2026-03-09T15:57:46.892585+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm01-59602-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:47.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: cluster 2026-03-09T15:57:46.895838+0000 mon.a (mon.0) 1491 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T15:57:47.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:47 vm01 bash[20728]: cluster 2026-03-09T15:57:46.895838+0000 mon.a (mon.0) 1491 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T15:57:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:48 vm09 bash[22983]: audit 2026-03-09T15:57:47.505500+0000 mon.a (mon.0) 1492 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:48 vm09 bash[22983]: audit 2026-03-09T15:57:47.505500+0000 mon.a (mon.0) 1492 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:48 vm09 bash[22983]: cluster 2026-03-09T15:57:47.902833+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T15:57:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:48 vm09 bash[22983]: cluster 2026-03-09T15:57:47.902833+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T15:57:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:48 vm01 bash[28152]: audit 2026-03-09T15:57:47.505500+0000 mon.a (mon.0) 1492 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:48 vm01 bash[28152]: audit 2026-03-09T15:57:47.505500+0000 mon.a (mon.0) 1492 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:48 vm01 bash[28152]: cluster 2026-03-09T15:57:47.902833+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T15:57:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:48 vm01 bash[28152]: cluster 2026-03-09T15:57:47.902833+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T15:57:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:48 vm01 bash[20728]: audit 2026-03-09T15:57:47.505500+0000 mon.a (mon.0) 1492 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:48 vm01 bash[20728]: audit 2026-03-09T15:57:47.505500+0000 mon.a (mon.0) 1492 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:48 vm01 bash[20728]: cluster 2026-03-09T15:57:47.902833+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T15:57:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:48 vm01 bash[20728]: cluster 2026-03-09T15:57:47.902833+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: Running main() from gmock_main.cc 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [==========] Running 21 tests from 5 test suites. 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] Global test environment set-up. 
2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: seed 59908 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapListPP 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapListPP (2259 ms) 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapRemovePP 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapRemovePP (2129 ms) 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.RollbackPP 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.RollbackPP (3011 ms) 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapGetNamePP 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapGetNamePP (2432 ms) 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapCreateRemovePP 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapCreateRemovePP (3711 ms) 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP (13542 ms total) 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapPP 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapPP (4386 ms) 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.RollbackPP 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.RollbackPP (3854 ms) 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP (6081 ms) 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.Bug11677 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.Bug11677 (4124 ms) 2026-03-09T15:57:48.965 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.OrderSnap 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.OrderSnap (3048 ms) 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.WriteRollback 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: ./src/test/librados/snapshots_cxx.cc:460: Skipped 
2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback (1 ms) 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: deleting snap 14 in pool LibRadosSnapshotsSelfManagedPP_vm01-59908-7 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: waiting for snaps to purge 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap (18221 ms) 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP (39715 ms total) 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected (49 ms) 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance (6083 ms) 2026-03-09T15:57:48.966 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode (6132 ms total) 2026-03-09T15:57:48.970 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: 2026-03-09T15:57:48.970 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP 2026-03-09T15:57:48.970 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapListPP 2026-03-09T15:57:48.970 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapListPP (2754 ms) 2026-03-09T15:57:48.970 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapRemovePP 2026-03-09T15:57:48.970 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapRemovePP (2089 ms) 2026-03-09T15:57:48.970 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.RollbackPP 2026-03-09T15:57:48.970 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.RollbackPP (2065 ms) 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: audit 2026-03-09T15:57:48.506285+0000 mon.a (mon.0) 1494 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: audit 2026-03-09T15:57:48.506285+0000 mon.a (mon.0) 1494 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: cluster 2026-03-09T15:57:48.697620+0000 mgr.y (mgr.14520) 179 : cluster [DBG] pgmap v170: 396 pgs: 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: cluster 2026-03-09T15:57:48.697620+0000 mgr.y (mgr.14520) 179 : cluster [DBG] pgmap v170: 396 pgs: 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: cluster 2026-03-09T15:57:48.781848+0000 mon.a (mon.0) 1495 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: cluster 2026-03-09T15:57:48.781848+0000 mon.a (mon.0) 1495 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: cluster 2026-03-09T15:57:48.965809+0000 mon.a (mon.0) 1496 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: cluster 2026-03-09T15:57:48.965809+0000 mon.a (mon.0) 1496 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: audit 2026-03-09T15:57:48.970677+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.101:0/1740693310' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: audit 2026-03-09T15:57:48.970677+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.101:0/1740693310' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: audit 2026-03-09T15:57:48.975907+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: audit 2026-03-09T15:57:48.975907+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: audit 2026-03-09T15:57:48.975994+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 192.168.123.101:0/879688089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:49 vm09 bash[22983]: audit 2026-03-09T15:57:48.975994+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 
192.168.123.101:0/879688089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: audit 2026-03-09T15:57:48.506285+0000 mon.a (mon.0) 1494 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: audit 2026-03-09T15:57:48.506285+0000 mon.a (mon.0) 1494 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: cluster 2026-03-09T15:57:48.697620+0000 mgr.y (mgr.14520) 179 : cluster [DBG] pgmap v170: 396 pgs: 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: cluster 2026-03-09T15:57:48.697620+0000 mgr.y (mgr.14520) 179 : cluster [DBG] pgmap v170: 396 pgs: 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: cluster 2026-03-09T15:57:48.781848+0000 mon.a (mon.0) 1495 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: cluster 2026-03-09T15:57:48.781848+0000 mon.a (mon.0) 1495 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: cluster 2026-03-09T15:57:48.965809+0000 mon.a (mon.0) 1496 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: cluster 2026-03-09T15:57:48.965809+0000 mon.a (mon.0) 1496 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: audit 2026-03-09T15:57:48.970677+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.101:0/1740693310' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: audit 2026-03-09T15:57:48.970677+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.101:0/1740693310' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: audit 2026-03-09T15:57:48.975907+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: audit 2026-03-09T15:57:48.975907+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: audit 2026-03-09T15:57:48.975994+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 192.168.123.101:0/879688089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:49 vm01 bash[28152]: audit 2026-03-09T15:57:48.975994+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 192.168.123.101:0/879688089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: audit 2026-03-09T15:57:48.506285+0000 mon.a (mon.0) 1494 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: audit 2026-03-09T15:57:48.506285+0000 mon.a (mon.0) 1494 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: cluster 2026-03-09T15:57:48.697620+0000 mgr.y (mgr.14520) 179 : cluster [DBG] pgmap v170: 396 pgs: 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: cluster 2026-03-09T15:57:48.697620+0000 mgr.y (mgr.14520) 179 : cluster [DBG] pgmap v170: 396 pgs: 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: cluster 2026-03-09T15:57:48.781848+0000 mon.a (mon.0) 1495 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: cluster 2026-03-09T15:57:48.781848+0000 mon.a (mon.0) 1495 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: cluster 2026-03-09T15:57:48.965809+0000 mon.a (mon.0) 1496 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: cluster 2026-03-09T15:57:48.965809+0000 mon.a (mon.0) 1496 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: audit 2026-03-09T15:57:48.970677+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.101:0/1740693310' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: audit 2026-03-09T15:57:48.970677+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 
192.168.123.101:0/1740693310' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: audit 2026-03-09T15:57:48.975907+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: audit 2026-03-09T15:57:48.975907+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: audit 2026-03-09T15:57:48.975994+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 192.168.123.101:0/879688089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:49 vm01 bash[20728]: audit 2026-03-09T15:57:48.975994+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 192.168.123.101:0/879688089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.386986+0000 mon.a (mon.0) 1499 : audit [DBG] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.386986+0000 mon.a (mon.0) 1499 : audit [DBG] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.388399+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.388399+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.507005+0000 mon.a (mon.0) 1501 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.507005+0000 mon.a (mon.0) 1501 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.507448+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.507448+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.952913+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.952913+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.953034+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 192.168.123.101:0/879688089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.953034+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 192.168.123.101:0/879688089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.953261+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.953261+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.953347+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.953347+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: cluster 2026-03-09T15:57:49.981380+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: cluster 2026-03-09T15:57:49.981380+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.999153+0000 mon.a (mon.0) 1508 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:49.999153+0000 mon.a (mon.0) 1508 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:50.000222+0000 mon.a (mon.0) 1509 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:50.000222+0000 mon.a (mon.0) 1509 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:50.003314+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2"}]: dispatch 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:50.003314+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2"}]: dispatch 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:50.003407+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:50.003407+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:50.003925+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:50 vm09 bash[22983]: audit 2026-03-09T15:57:50.003925+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.386986+0000 mon.a (mon.0) 1499 : audit [DBG] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.386986+0000 mon.a (mon.0) 1499 : audit [DBG] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.388399+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.388399+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.507005+0000 mon.a (mon.0) 1501 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.507005+0000 mon.a (mon.0) 1501 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.507448+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.507448+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.952913+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.952913+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.953034+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 192.168.123.101:0/879688089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.953034+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 192.168.123.101:0/879688089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.953261+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.953261+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.953347+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.953347+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: cluster 2026-03-09T15:57:49.981380+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: cluster 2026-03-09T15:57:49.981380+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T15:57:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.999153+0000 mon.a (mon.0) 1508 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:49.999153+0000 mon.a (mon.0) 1508 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:50.000222+0000 mon.a (mon.0) 1509 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:50.000222+0000 mon.a (mon.0) 1509 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:50.003314+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:50.003314+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:50.003407+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:50.003407+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:50.003925+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:50 vm01 bash[28152]: audit 2026-03-09T15:57:50.003925+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.386986+0000 mon.a (mon.0) 1499 : audit [DBG] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.386986+0000 mon.a (mon.0) 1499 : audit [DBG] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.388399+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.388399+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.507005+0000 mon.a (mon.0) 1501 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.507005+0000 mon.a (mon.0) 1501 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.507448+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.507448+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.952913+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.952913+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm01-59602-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.953034+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 192.168.123.101:0/879688089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.953034+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 
192.168.123.101:0/879688089' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm01-59610-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.953261+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.953261+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.953347+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.953347+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: cluster 2026-03-09T15:57:49.981380+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: cluster 2026-03-09T15:57:49.981380+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.999153+0000 mon.a (mon.0) 1508 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:49.999153+0000 mon.a (mon.0) 1508 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:50.000222+0000 mon.a (mon.0) 1509 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:50.000222+0000 mon.a (mon.0) 1509 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:50.003314+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:50.003314+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:50.003407+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:50.003407+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:50.003925+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:50.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:50 vm01 bash[20728]: audit 2026-03-09T15:57:50.003925+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: cluster 2026-03-09T15:57:50.698063+0000 mgr.y (mgr.14520) 180 : cluster [DBG] pgmap v173: 460 pgs: 35 creating+peering, 29 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: cluster 2026-03-09T15:57:50.698063+0000 mgr.y (mgr.14520) 180 : cluster [DBG] pgmap v173: 460 pgs: 35 creating+peering, 29 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: audit 2026-03-09T15:57:51.097577+0000 mon.a (mon.0) 1513 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: audit 2026-03-09T15:57:51.097577+0000 mon.a (mon.0) 1513 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: audit 2026-03-09T15:57:51.098116+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: audit 2026-03-09T15:57:51.098116+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: cluster 2026-03-09T15:57:51.135642+0000 mon.a (mon.0) 1515 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: cluster 2026-03-09T15:57:51.135642+0000 mon.a (mon.0) 1515 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: audit 2026-03-09T15:57:51.136436+0000 mon.a (mon.0) 1516 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: audit 2026-03-09T15:57:51.136436+0000 mon.a (mon.0) 1516 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: audit 2026-03-09T15:57:51.145983+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: audit 2026-03-09T15:57:51.145983+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: audit 2026-03-09T15:57:51.147147+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:51 vm09 bash[22983]: audit 2026-03-09T15:57:51.147147+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: cluster 2026-03-09T15:57:50.698063+0000 mgr.y (mgr.14520) 180 : cluster [DBG] pgmap v173: 460 pgs: 35 creating+peering, 29 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: cluster 2026-03-09T15:57:50.698063+0000 mgr.y (mgr.14520) 180 : cluster [DBG] pgmap v173: 460 pgs: 35 creating+peering, 29 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: audit 2026-03-09T15:57:51.097577+0000 mon.a (mon.0) 1513 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: audit 2026-03-09T15:57:51.097577+0000 mon.a (mon.0) 1513 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: audit 2026-03-09T15:57:51.098116+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: audit 2026-03-09T15:57:51.098116+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: cluster 2026-03-09T15:57:51.135642+0000 mon.a (mon.0) 1515 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: cluster 2026-03-09T15:57:51.135642+0000 mon.a (mon.0) 1515 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: audit 2026-03-09T15:57:51.136436+0000 mon.a (mon.0) 1516 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: audit 2026-03-09T15:57:51.136436+0000 mon.a (mon.0) 1516 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: audit 2026-03-09T15:57:51.145983+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: audit 2026-03-09T15:57:51.145983+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: audit 2026-03-09T15:57:51.147147+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:51 vm01 bash[28152]: audit 2026-03-09T15:57:51.147147+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: cluster 2026-03-09T15:57:50.698063+0000 mgr.y (mgr.14520) 180 : cluster [DBG] pgmap v173: 460 pgs: 35 creating+peering, 29 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T15:57:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: cluster 2026-03-09T15:57:50.698063+0000 mgr.y (mgr.14520) 180 : cluster [DBG] pgmap v173: 460 pgs: 35 creating+peering, 29 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: audit 2026-03-09T15:57:51.097577+0000 mon.a (mon.0) 1513 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: audit 2026-03-09T15:57:51.097577+0000 mon.a (mon.0) 1513 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: audit 2026-03-09T15:57:51.098116+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: audit 2026-03-09T15:57:51.098116+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm01-59801-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: cluster 2026-03-09T15:57:51.135642+0000 mon.a (mon.0) 1515 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: cluster 2026-03-09T15:57:51.135642+0000 mon.a (mon.0) 1515 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: audit 2026-03-09T15:57:51.136436+0000 mon.a (mon.0) 1516 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: audit 2026-03-09T15:57:51.136436+0000 mon.a (mon.0) 1516 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: audit 2026-03-09T15:57:51.145983+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: audit 2026-03-09T15:57:51.145983+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: audit 2026-03-09T15:57:51.147147+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:51.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:51 vm01 bash[20728]: audit 2026-03-09T15:57:51.147147+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ snapshots: Running main() from gmock_main.cc 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [==========] Running 11 tests from 2 test suites. 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [----------] Global test environment set-up. 
2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapList 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapList (4483 ms) 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapRemove 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapRemove (6525 ms) 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ RUN ] NeoRadosSnapshots.Rollback 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ OK ] NeoRadosSnapshots.Rollback (3717 ms) 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapGetName 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapGetName (5077 ms) 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapCreateRemove 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapCreateRemove (7157 ms) 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots (26959 ms total) 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Snap 2026-03-09T15:57:52.117 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Snap (5043 ms) 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Rollback 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Rollback (6876 ms) 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.SnapOverlap 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.SnapOverlap (8148 ms) 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Bug11677 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Bug11677 (6232 ms) 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.OrderSnap 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.OrderSnap (4004 ms) 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.ReusePurgedSnap 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: Deleting snap 3 in pool ReusePurgedSnapvm01-60592-11. 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: Waiting for snaps to purge. 
2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.ReusePurgedSnap (20119 ms) 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps (50422 ms total) 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [----------] Global test environment tear-down 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [==========] 11 tests from 2 test suites ran. (77381 ms total) 2026-03-09T15:57:52.118 INFO:tasks.workunit.client.0.vm01.stdout: snapshots: [ PASSED ] 11 tests. 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:51.338004+0000 mon.a (mon.0) 1519 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:51.338004+0000 mon.a (mon.0) 1519 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.101358+0000 mon.a (mon.0) 1520 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.101358+0000 mon.a (mon.0) 1520 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: cluster 2026-03-09T15:57:52.105355+0000 mon.a (mon.0) 1521 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: cluster 2026-03-09T15:57:52.105355+0000 mon.a (mon.0) 1521 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.116042+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.116042+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.117429+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.117429+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.132840+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.132840+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.137690+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.137690+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.137793+0000 mon.a (mon.0) 1525 : audit [INF] from='client.? 192.168.123.101:0/1881620040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:52 vm09 bash[22983]: audit 2026-03-09T15:57:52.137793+0000 mon.a (mon.0) 1525 : audit [INF] from='client.? 192.168.123.101:0/1881620040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:51.338004+0000 mon.a (mon.0) 1519 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:51.338004+0000 mon.a (mon.0) 1519 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.101358+0000 mon.a (mon.0) 1520 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.101358+0000 mon.a (mon.0) 1520 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: cluster 2026-03-09T15:57:52.105355+0000 mon.a (mon.0) 1521 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: cluster 2026-03-09T15:57:52.105355+0000 mon.a (mon.0) 1521 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.116042+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.116042+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.117429+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.117429+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.132840+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.132840+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.137690+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.137690+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.137793+0000 mon.a (mon.0) 1525 : audit [INF] from='client.? 192.168.123.101:0/1881620040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:52 vm01 bash[28152]: audit 2026-03-09T15:57:52.137793+0000 mon.a (mon.0) 1525 : audit [INF] from='client.? 192.168.123.101:0/1881620040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:51.338004+0000 mon.a (mon.0) 1519 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:51.338004+0000 mon.a (mon.0) 1519 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.101358+0000 mon.a (mon.0) 1520 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.101358+0000 mon.a (mon.0) 1520 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: cluster 2026-03-09T15:57:52.105355+0000 mon.a (mon.0) 1521 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: cluster 2026-03-09T15:57:52.105355+0000 mon.a (mon.0) 1521 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.116042+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 
192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.116042+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.117429+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.117429+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.132840+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.132840+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.137690+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:57:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.137690+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T15:57:52.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.137793+0000 mon.a (mon.0) 1525 : audit [INF] from='client.? 192.168.123.101:0/1881620040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:52.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:52 vm01 bash[20728]: audit 2026-03-09T15:57:52.137793+0000 mon.a (mon.0) 1525 : audit [INF] from='client.? 
192.168.123.101:0/1881620040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:53.180 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:57:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:57:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:52.338796+0000 mon.a (mon.0) 1526 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:52.338796+0000 mon.a (mon.0) 1526 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: cluster 2026-03-09T15:57:52.698528+0000 mgr.y (mgr.14520) 181 : cluster [DBG] pgmap v176: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: cluster 2026-03-09T15:57:52.698528+0000 mgr.y (mgr.14520) 181 : cluster [DBG] pgmap v176: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.176269+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.176269+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.176324+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.176324+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.176572+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.176572+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? 
192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.176691+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? 192.168.123.101:0/1881620040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.176691+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? 192.168.123.101:0/1881620040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.189428+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.189428+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: cluster 2026-03-09T15:57:53.199589+0000 mon.a (mon.0) 1531 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: cluster 2026-03-09T15:57:53.199589+0000 mon.a (mon.0) 1531 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.213773+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.213773+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.216079+0000 mon.a (mon.0) 1533 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:57:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:53 vm09 bash[22983]: audit 2026-03-09T15:57:53.216079+0000 mon.a (mon.0) 1533 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:52.338796+0000 mon.a (mon.0) 1526 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:52.338796+0000 mon.a (mon.0) 1526 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: cluster 2026-03-09T15:57:52.698528+0000 mgr.y (mgr.14520) 181 : cluster [DBG] pgmap v176: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: cluster 2026-03-09T15:57:52.698528+0000 mgr.y (mgr.14520) 181 : cluster [DBG] pgmap v176: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.176269+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.176269+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.176324+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.176324+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.176572+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.176572+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.176691+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? 
192.168.123.101:0/1881620040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.176691+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? 192.168.123.101:0/1881620040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.189428+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.189428+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: cluster 2026-03-09T15:57:53.199589+0000 mon.a (mon.0) 1531 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: cluster 2026-03-09T15:57:53.199589+0000 mon.a (mon.0) 1531 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.213773+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.213773+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.216079+0000 mon.a (mon.0) 1533 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:53 vm01 bash[28152]: audit 2026-03-09T15:57:53.216079+0000 mon.a (mon.0) 1533 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:52.338796+0000 mon.a (mon.0) 1526 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:52.338796+0000 mon.a (mon.0) 1526 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: cluster 2026-03-09T15:57:52.698528+0000 mgr.y (mgr.14520) 181 : cluster [DBG] pgmap v176: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:57:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: cluster 2026-03-09T15:57:52.698528+0000 mgr.y (mgr.14520) 181 : cluster [DBG] pgmap v176: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.176269+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.176269+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.176324+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.176324+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? 192.168.123.101:0/559564295' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm01-59602-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.176572+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.176572+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.176691+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? 192.168.123.101:0/1881620040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.176691+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? 
192.168.123.101:0/1881620040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm01-59610-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.189428+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.189428+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.101:0/3716951112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: cluster 2026-03-09T15:57:53.199589+0000 mon.a (mon.0) 1531 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: cluster 2026-03-09T15:57:53.199589+0000 mon.a (mon.0) 1531 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.213773+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.213773+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]: dispatch 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.216079+0000 mon.a (mon.0) 1533 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:57:53.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:53 vm01 bash[20728]: audit 2026-03-09T15:57:53.216079+0000 mon.a (mon.0) 1533 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T15:57:54.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:54 vm01 bash[28152]: audit 2026-03-09T15:57:53.339605+0000 mon.a (mon.0) 1534 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:54.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:54 vm01 bash[28152]: audit 2026-03-09T15:57:53.339605+0000 mon.a (mon.0) 1534 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:54.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:54 vm01 bash[28152]: cluster 2026-03-09T15:57:53.782481+0000 mon.a (mon.0) 1535 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:54.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:54 vm01 bash[28152]: cluster 2026-03-09T15:57:53.782481+0000 mon.a (mon.0) 1535 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:54 vm01 bash[20728]: audit 2026-03-09T15:57:53.339605+0000 mon.a (mon.0) 1534 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:54 vm01 bash[20728]: audit 2026-03-09T15:57:53.339605+0000 mon.a (mon.0) 1534 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:54 vm01 bash[20728]: cluster 2026-03-09T15:57:53.782481+0000 mon.a (mon.0) 1535 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:54 vm01 bash[20728]: cluster 2026-03-09T15:57:53.782481+0000 mon.a (mon.0) 1535 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:54.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:54 vm09 bash[22983]: audit 2026-03-09T15:57:53.339605+0000 mon.a (mon.0) 1534 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:54.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:54 vm09 bash[22983]: audit 2026-03-09T15:57:53.339605+0000 mon.a (mon.0) 1534 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:54.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:54 vm09 bash[22983]: cluster 2026-03-09T15:57:53.782481+0000 mon.a (mon.0) 1535 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:54.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:54 vm09 bash[22983]: cluster 2026-03-09T15:57:53.782481+0000 mon.a (mon.0) 1535 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:55.337 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [==========] Running 31 tests from 7 test suites. 2026-03-09T15:57:55.337 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] Global test environment set-up. 
2026-03-09T15:57:55.337 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscVersion.VersionPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscVersion.VersionPP (0 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion (0 ms total)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp:
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 22 tests from LibRadosMiscPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: seed 59801
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WaitOSDMapPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WaitOSDMapPP (23 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNamePP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNamePP (787 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongLocatorPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongLocatorPP (58 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNSpacePP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNSpacePP (14 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongAttrNamePP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongAttrNamePP (130 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.ExecPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.ExecPP (178 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BadFlagsPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BadFlagsPP (119 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate1PP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate1PP (26 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate2PP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate2PP (8 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigObjectPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigObjectPP (110 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AioOperatePP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AioOperatePP (4 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertExistsPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertExistsPP (8 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertVersionPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertVersionPP (28 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigAttrPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: osd_max_attr_size = 0
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: osd_max_attr_size == 0; skipping test
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigAttrPP (9900 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyPP (669 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyScrubPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: waiting for initial deep scrubs...
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: done waiting, doing copies
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: waiting for final deep scrubs...
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: done waiting
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyScrubPP (61346 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WriteSamePP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WriteSamePP (5 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CmpExtPP
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CmpExtPP (2 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Applications
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Applications (4982 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatOSD
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatOSD (0 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatClient
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatClient (0 ms)
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Conf
2026-03-09T15:57:55.338 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Conf (0 ms)
2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.314251+0000 mon.a (mon.0) 1536 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.314251+0000 mon.a (mon.0) 1536 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.314389+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.314389+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: cluster 2026-03-09T15:57:54.333607+0000 mon.a (mon.0) 1538 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: cluster 2026-03-09T15:57:54.333607+0000 mon.a (mon.0) 1538 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.344853+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.344853+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.366366+0000 mon.a (mon.0) 1539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.366366+0000 mon.a (mon.0) 1539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.366473+0000 mon.a (mon.0) 1540 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.366473+0000 mon.a (mon.0) 1540 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.393232+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.393232+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.403727+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.403727+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.404431+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.404431+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.406443+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.406443+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.411637+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.411637+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 
192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.414138+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.414138+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.414236+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.414236+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.414856+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.414856+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.415294+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.415294+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.415505+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.415505+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: cluster 2026-03-09T15:57:54.698953+0000 mgr.y (mgr.14520) 182 : cluster [DBG] pgmap v179: 356 pgs: 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 3.5 KiB/s wr, 8 op/s 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: cluster 2026-03-09T15:57:54.698953+0000 mgr.y (mgr.14520) 182 : cluster [DBG] pgmap v179: 356 pgs: 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 3.5 KiB/s wr, 8 op/s 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.699683+0000 mon.a (mon.0) 1546 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:54.699683+0000 mon.a (mon.0) 1546 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.319850+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.319850+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.319904+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.319904+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.319936+0000 mon.a (mon.0) 1549 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T15:57:55.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.319936+0000 mon.a (mon.0) 1549 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: cluster 2026-03-09T15:57:55.327220+0000 mon.a (mon.0) 1550 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: cluster 2026-03-09T15:57:55.327220+0000 mon.a (mon.0) 1550 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.327553+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.327553+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.331932+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.331932+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.335100+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.335100+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.344197+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.101:0/3511493710' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.344197+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.101:0/3511493710' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.348043+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.348043+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.348366+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.348366+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.358392+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.314251+0000 mon.a (mon.0) 1536 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.314251+0000 mon.a (mon.0) 1536 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.314389+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.314389+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: cluster 2026-03-09T15:57:54.333607+0000 mon.a (mon.0) 1538 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: cluster 2026-03-09T15:57:54.333607+0000 mon.a (mon.0) 1538 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.344853+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.344853+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.366366+0000 mon.a (mon.0) 1539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.366366+0000 mon.a (mon.0) 1539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.366473+0000 mon.a (mon.0) 1540 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.366473+0000 mon.a (mon.0) 1540 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.393232+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.393232+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.403727+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.403727+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.404431+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.404431+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.406443+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.406443+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.411637+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.411637+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 
192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.414138+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.414138+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.414236+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.414236+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.414856+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.414856+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.415294+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.415294+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.415505+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.415505+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: cluster 2026-03-09T15:57:54.698953+0000 mgr.y (mgr.14520) 182 : cluster [DBG] pgmap v179: 356 pgs: 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 3.5 KiB/s wr, 8 op/s 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: cluster 2026-03-09T15:57:54.698953+0000 mgr.y (mgr.14520) 182 : cluster [DBG] pgmap v179: 356 pgs: 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 3.5 KiB/s wr, 8 op/s 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.699683+0000 mon.a (mon.0) 1546 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:54.699683+0000 mon.a (mon.0) 1546 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.319850+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.319850+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.319904+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.319904+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.319936+0000 mon.a (mon.0) 1549 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.319936+0000 mon.a (mon.0) 1549 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: cluster 2026-03-09T15:57:55.327220+0000 mon.a (mon.0) 1550 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: cluster 2026-03-09T15:57:55.327220+0000 mon.a (mon.0) 1550 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.327553+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.327553+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.331932+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.331932+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.335100+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.335100+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.344197+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.101:0/3511493710' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.344197+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.101:0/3511493710' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.348043+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.348043+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.348366+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.348366+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.358392+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.358392+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.359806+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.359806+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.360731+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.360731+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.368085+0000 mon.a (mon.0) 1557 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:55 vm01 bash[20728]: audit 2026-03-09T15:57:55.368085+0000 mon.a (mon.0) 1557 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.358392+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.359806+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.359806+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.360731+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.360731+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.368085+0000 mon.a (mon.0) 1557 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:55.680 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:55 vm01 bash[28152]: audit 2026-03-09T15:57:55.368085+0000 mon.a (mon.0) 1557 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.314251+0000 mon.a (mon.0) 1536 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.314251+0000 mon.a (mon.0) 1536 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm01-59908-16"}]': finished 2026-03-09T15:57:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.314389+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:57:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.314389+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? 192.168.123.101:0/1804772182' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm01-59801-1","app":"app1","key":"key1"}]': finished 2026-03-09T15:57:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: cluster 2026-03-09T15:57:54.333607+0000 mon.a (mon.0) 1538 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T15:57:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: cluster 2026-03-09T15:57:54.333607+0000 mon.a (mon.0) 1538 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T15:57:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.344853+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.344853+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 
192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.366366+0000 mon.a (mon.0) 1539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.366366+0000 mon.a (mon.0) 1539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.366473+0000 mon.a (mon.0) 1540 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.366473+0000 mon.a (mon.0) 1540 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.393232+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.393232+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.403727+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.403727+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.404431+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.404431+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 
192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.406443+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.406443+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.411637+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.411637+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.414138+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.414138+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.414236+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.414236+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.414856+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.414856+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 
192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.415294+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.415294+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.415505+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.415505+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: cluster 2026-03-09T15:57:54.698953+0000 mgr.y (mgr.14520) 182 : cluster [DBG] pgmap v179: 356 pgs: 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 3.5 KiB/s wr, 8 op/s 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: cluster 2026-03-09T15:57:54.698953+0000 mgr.y (mgr.14520) 182 : cluster [DBG] pgmap v179: 356 pgs: 356 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 3.5 KiB/s wr, 8 op/s 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.699683+0000 mon.a (mon.0) 1546 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:54.699683+0000 mon.a (mon.0) 1546 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.319850+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.319850+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.319904+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.319904+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm01-59602-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.319936+0000 mon.a (mon.0) 1549 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.319936+0000 mon.a (mon.0) 1549 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: cluster 2026-03-09T15:57:55.327220+0000 mon.a (mon.0) 1550 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: cluster 2026-03-09T15:57:55.327220+0000 mon.a (mon.0) 1550 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.327553+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.327553+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 
192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.331932+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.331932+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.335100+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.335100+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.344197+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.101:0/3511493710' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.344197+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.101:0/3511493710' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.348043+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.348043+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:55.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.348366+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.348366+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:57:55.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.358392+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.358392+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.359806+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.359806+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:55.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.360731+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.360731+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:57:55.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.368085+0000 mon.a (mon.0) 1557 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:55.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:55 vm09 bash[22983]: audit 2026-03-09T15:57:55.368085+0000 mon.a (mon.0) 1557 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:56.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:57:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.294694+0000 mgr.y (mgr.14520) 183 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.294694+0000 mgr.y (mgr.14520) 183 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.323272+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.323272+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.323358+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.323358+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: cluster 2026-03-09T15:57:56.354216+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: cluster 2026-03-09T15:57:56.354216+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.365475+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.365475+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.369964+0000 mon.a (mon.0) 1562 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.369964+0000 mon.a (mon.0) 1562 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.371129+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: audit 2026-03-09T15:57:56.371129+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: cluster 2026-03-09T15:57:56.699473+0000 mgr.y (mgr.14520) 184 : cluster [DBG] pgmap v182: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:57 vm09 bash[22983]: cluster 2026-03-09T15:57:56.699473+0000 mgr.y (mgr.14520) 184 : cluster [DBG] pgmap v182: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.294694+0000 mgr.y (mgr.14520) 183 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.294694+0000 mgr.y (mgr.14520) 183 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.323272+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.323272+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.323358+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.323358+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: cluster 2026-03-09T15:57:56.354216+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: cluster 2026-03-09T15:57:56.354216+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.365475+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.365475+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.369964+0000 mon.a (mon.0) 1562 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.369964+0000 mon.a (mon.0) 1562 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.371129+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: audit 2026-03-09T15:57:56.371129+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: cluster 2026-03-09T15:57:56.699473+0000 mgr.y (mgr.14520) 184 : cluster [DBG] pgmap v182: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:57 vm01 bash[20728]: cluster 2026-03-09T15:57:56.699473+0000 mgr.y (mgr.14520) 184 : cluster [DBG] pgmap v182: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.294694+0000 mgr.y (mgr.14520) 183 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.294694+0000 mgr.y (mgr.14520) 183 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.323272+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.323272+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm01-59610-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.323358+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.323358+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59801-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: cluster 2026-03-09T15:57:56.354216+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: cluster 2026-03-09T15:57:56.354216+0000 mon.a (mon.0) 1560 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T15:57:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.365475+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:57.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.365475+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:57:57.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.369964+0000 mon.a (mon.0) 1562 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:57.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.369964+0000 mon.a (mon.0) 1562 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:57.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.371129+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:57.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: audit 2026-03-09T15:57:56.371129+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T15:57:57.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: cluster 2026-03-09T15:57:56.699473+0000 mgr.y (mgr.14520) 184 : cluster [DBG] pgmap v182: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:57.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:57 vm01 bash[28152]: cluster 2026-03-09T15:57:56.699473+0000 mgr.y (mgr.14520) 184 : cluster [DBG] pgmap v182: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:58 vm09 bash[22983]: audit 2026-03-09T15:57:57.333374+0000 mon.a (mon.0) 1564 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:57:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:58 vm09 bash[22983]: audit 2026-03-09T15:57:57.333374+0000 mon.a (mon.0) 1564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:57:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:58 vm09 bash[22983]: audit 2026-03-09T15:57:57.333468+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:57:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:58 vm09 bash[22983]: audit 2026-03-09T15:57:57.333468+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:57:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:58 vm09 bash[22983]: audit 2026-03-09T15:57:57.333512+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:58 vm09 bash[22983]: audit 2026-03-09T15:57:57.333512+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:58 vm09 bash[22983]: cluster 2026-03-09T15:57:57.383865+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T15:57:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:58 vm09 bash[22983]: cluster 2026-03-09T15:57:57.383865+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T15:57:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:58 vm09 bash[22983]: audit 2026-03-09T15:57:57.384678+0000 mon.a (mon.0) 1568 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:58 vm09 bash[22983]: audit 2026-03-09T15:57:57.384678+0000 mon.a (mon.0) 1568 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:58 vm01 bash[20728]: audit 2026-03-09T15:57:57.333374+0000 mon.a (mon.0) 1564 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:58 vm01 bash[20728]: audit 2026-03-09T15:57:57.333374+0000 mon.a (mon.0) 1564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:58 vm01 bash[20728]: audit 2026-03-09T15:57:57.333468+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:58 vm01 bash[20728]: audit 2026-03-09T15:57:57.333468+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:58 vm01 bash[20728]: audit 2026-03-09T15:57:57.333512+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:58 vm01 bash[20728]: audit 2026-03-09T15:57:57.333512+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:58 vm01 bash[20728]: cluster 2026-03-09T15:57:57.383865+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:58 vm01 bash[20728]: cluster 2026-03-09T15:57:57.383865+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:58 vm01 bash[20728]: audit 2026-03-09T15:57:57.384678+0000 mon.a (mon.0) 1568 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:58 vm01 bash[20728]: audit 2026-03-09T15:57:57.384678+0000 mon.a (mon.0) 1568 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:58 vm01 bash[28152]: audit 2026-03-09T15:57:57.333374+0000 mon.a (mon.0) 1564 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:58 vm01 bash[28152]: audit 2026-03-09T15:57:57.333374+0000 mon.a (mon.0) 1564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm01-59908-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:58 vm01 bash[28152]: audit 2026-03-09T15:57:57.333468+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:58 vm01 bash[28152]: audit 2026-03-09T15:57:57.333468+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm01-59602-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:58 vm01 bash[28152]: audit 2026-03-09T15:57:57.333512+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:58 vm01 bash[28152]: audit 2026-03-09T15:57:57.333512+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pg_num","val":"11"}]': finished 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:58 vm01 bash[28152]: cluster 2026-03-09T15:57:57.383865+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:58 vm01 bash[28152]: cluster 2026-03-09T15:57:57.383865+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:58 vm01 bash[28152]: audit 2026-03-09T15:57:57.384678+0000 mon.a (mon.0) 1568 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:58.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:58 vm01 bash[28152]: audit 2026-03-09T15:57:57.384678+0000 mon.a (mon.0) 1568 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: audit 2026-03-09T15:57:58.385629+0000 mon.a (mon.0) 1569 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: audit 2026-03-09T15:57:58.385629+0000 mon.a (mon.0) 1569 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: audit 2026-03-09T15:57:58.508765+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: audit 2026-03-09T15:57:58.508765+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: cluster 2026-03-09T15:57:58.526916+0000 mon.a (mon.0) 1571 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: cluster 2026-03-09T15:57:58.526916+0000 mon.a (mon.0) 1571 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: audit 2026-03-09T15:57:58.528853+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? 192.168.123.101:0/3007244973' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: audit 2026-03-09T15:57:58.528853+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? 
192.168.123.101:0/3007244973' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: cluster 2026-03-09T15:57:58.700178+0000 mgr.y (mgr.14520) 185 : cluster [DBG] pgmap v185: 380 pgs: 5 creating+peering, 48 unknown, 327 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: cluster 2026-03-09T15:57:58.700178+0000 mgr.y (mgr.14520) 185 : cluster [DBG] pgmap v185: 380 pgs: 5 creating+peering, 48 unknown, 327 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: cluster 2026-03-09T15:57:58.783171+0000 mon.a (mon.0) 1573 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: cluster 2026-03-09T15:57:58.783171+0000 mon.a (mon.0) 1573 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: audit 2026-03-09T15:57:59.081002+0000 mon.a (mon.0) 1574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: audit 2026-03-09T15:57:59.081002+0000 mon.a (mon.0) 1574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: audit 2026-03-09T15:57:59.386510+0000 mon.a (mon.0) 1575 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:57:59 vm09 bash[22983]: audit 2026-03-09T15:57:59.386510+0000 mon.a (mon.0) 1575 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: audit 2026-03-09T15:57:58.385629+0000 mon.a (mon.0) 1569 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: audit 2026-03-09T15:57:58.385629+0000 mon.a (mon.0) 1569 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: audit 2026-03-09T15:57:58.508765+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: audit 2026-03-09T15:57:58.508765+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: cluster 2026-03-09T15:57:58.526916+0000 mon.a (mon.0) 1571 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: cluster 2026-03-09T15:57:58.526916+0000 mon.a (mon.0) 1571 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: audit 2026-03-09T15:57:58.528853+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? 192.168.123.101:0/3007244973' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: audit 2026-03-09T15:57:58.528853+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? 192.168.123.101:0/3007244973' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: cluster 2026-03-09T15:57:58.700178+0000 mgr.y (mgr.14520) 185 : cluster [DBG] pgmap v185: 380 pgs: 5 creating+peering, 48 unknown, 327 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: cluster 2026-03-09T15:57:58.700178+0000 mgr.y (mgr.14520) 185 : cluster [DBG] pgmap v185: 380 pgs: 5 creating+peering, 48 unknown, 327 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: cluster 2026-03-09T15:57:58.783171+0000 mon.a (mon.0) 1573 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: cluster 2026-03-09T15:57:58.783171+0000 mon.a (mon.0) 1573 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: audit 2026-03-09T15:57:59.081002+0000 mon.a (mon.0) 1574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: audit 2026-03-09T15:57:59.081002+0000 mon.a (mon.0) 1574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: 
dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: audit 2026-03-09T15:57:59.386510+0000 mon.a (mon.0) 1575 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:57:59 vm01 bash[28152]: audit 2026-03-09T15:57:59.386510+0000 mon.a (mon.0) 1575 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: audit 2026-03-09T15:57:58.385629+0000 mon.a (mon.0) 1569 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: audit 2026-03-09T15:57:58.385629+0000 mon.a (mon.0) 1569 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: audit 2026-03-09T15:57:58.508765+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: audit 2026-03-09T15:57:58.508765+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59801-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: cluster 2026-03-09T15:57:58.526916+0000 mon.a (mon.0) 1571 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: cluster 2026-03-09T15:57:58.526916+0000 mon.a (mon.0) 1571 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: audit 2026-03-09T15:57:58.528853+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? 192.168.123.101:0/3007244973' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: audit 2026-03-09T15:57:58.528853+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? 
192.168.123.101:0/3007244973' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: cluster 2026-03-09T15:57:58.700178+0000 mgr.y (mgr.14520) 185 : cluster [DBG] pgmap v185: 380 pgs: 5 creating+peering, 48 unknown, 327 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: cluster 2026-03-09T15:57:58.700178+0000 mgr.y (mgr.14520) 185 : cluster [DBG] pgmap v185: 380 pgs: 5 creating+peering, 48 unknown, 327 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: cluster 2026-03-09T15:57:58.783171+0000 mon.a (mon.0) 1573 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: cluster 2026-03-09T15:57:58.783171+0000 mon.a (mon.0) 1573 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:57:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: audit 2026-03-09T15:57:59.081002+0000 mon.a (mon.0) 1574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:59.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: audit 2026-03-09T15:57:59.081002+0000 mon.a (mon.0) 1574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:57:59.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: audit 2026-03-09T15:57:59.386510+0000 mon.a (mon.0) 1575 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:57:59.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:57:59 vm01 bash[20728]: audit 2026-03-09T15:57:59.386510+0000 mon.a (mon.0) 1575 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: audit 2026-03-09T15:57:59.515908+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? 192.168.123.101:0/3007244973' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: audit 2026-03-09T15:57:59.515908+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? 192.168.123.101:0/3007244973' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: audit 2026-03-09T15:57:59.542776+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 
192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: audit 2026-03-09T15:57:59.542776+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: cluster 2026-03-09T15:57:59.545699+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: cluster 2026-03-09T15:57:59.545699+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: audit 2026-03-09T15:57:59.549695+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: audit 2026-03-09T15:57:59.549695+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: audit 2026-03-09T15:57:59.549762+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: audit 2026-03-09T15:57:59.549762+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: audit 2026-03-09T15:58:00.387431+0000 mon.a (mon.0) 1580 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:00 vm09 bash[22983]: audit 2026-03-09T15:58:00.387431+0000 mon.a (mon.0) 1580 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: audit 2026-03-09T15:57:59.515908+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? 192.168.123.101:0/3007244973' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: audit 2026-03-09T15:57:59.515908+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? 
192.168.123.101:0/3007244973' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: audit 2026-03-09T15:57:59.542776+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: audit 2026-03-09T15:57:59.542776+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: cluster 2026-03-09T15:57:59.545699+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: cluster 2026-03-09T15:57:59.545699+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: audit 2026-03-09T15:57:59.549695+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: audit 2026-03-09T15:57:59.549695+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: audit 2026-03-09T15:57:59.549762+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: audit 2026-03-09T15:57:59.549762+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: audit 2026-03-09T15:58:00.387431+0000 mon.a (mon.0) 1580 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:00 vm01 bash[28152]: audit 2026-03-09T15:58:00.387431+0000 mon.a (mon.0) 1580 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: audit 2026-03-09T15:57:59.515908+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? 
192.168.123.101:0/3007244973' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: audit 2026-03-09T15:57:59.515908+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? 192.168.123.101:0/3007244973' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm01-59610-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: audit 2026-03-09T15:57:59.542776+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: audit 2026-03-09T15:57:59.542776+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: cluster 2026-03-09T15:57:59.545699+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: cluster 2026-03-09T15:57:59.545699+0000 mon.a (mon.0) 1577 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: audit 2026-03-09T15:57:59.549695+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: audit 2026-03-09T15:57:59.549695+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: audit 2026-03-09T15:57:59.549762+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: audit 2026-03-09T15:57:59.549762+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: audit 2026-03-09T15:58:00.387431+0000 mon.a (mon.0) 1580 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:00 vm01 bash[20728]: audit 2026-03-09T15:58:00.387431+0000 mon.a (mon.0) 1580 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 22 tests from LibRa api_aio: Running main() from gmock_main.cc
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [==========] Running 42 tests from 2 test suites.
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [----------] Global test environment set-up.
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [----------] 26 tests from LibRadosAio
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.TooBig
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.TooBig (2959 ms)
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.SimpleWrite
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.SimpleWrite (3310 ms)
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.WaitForSafe
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.WaitForSafe (4146 ms)
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip (3445 ms)
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip2
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip2 (2555 ms)
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip3
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip3 (3082 ms)
2026-03-09T15:58:01.527 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripAppend
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.RoundTripAppend (3128 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.RemoveTest
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.RemoveTest (3039 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.XattrsRoundTrip
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.XattrsRoundTrip (3017 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.RmXattr
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.RmXattr (3057 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.XattrIter
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.XattrIter (3112 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.IsComplete
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.IsComplete (3015 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.IsSafe
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.IsSafe (2900 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.ReturnValue
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.ReturnValue (3231 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.Flush
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.Flush (2725 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.FlushAsync
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.FlushAsync (3004 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteFull
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteFull (3057 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteSame
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteSame (3236 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStat
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.SimpleStat (2921 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.OperateMtime
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.OperateMtime (3015 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.Operate2Mtime
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.Operate2Mtime (3096 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStatNS
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.SimpleStatNS (2996 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.StatRemove
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.StatRemove (2751 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.ExecuteClass
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.ExecuteClass (3092 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.MultiWrite
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.MultiWrite (3229 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAio.AioUnlock
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAio.AioUnlock (3231 ms)
2026-03-09T15:58:01.528 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [----------] 26 tests from LibRadosAio (80350 ms total)
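The api_aio output above is the gtest run of the librados C AIO suite driven by the rados/test.sh workunit. As a rough illustration of the kind of asynchronous write/read round trip these cases exercise, a minimal sketch using the public librados C API is below; the pool name, object name, and payload are illustrative placeholders, not values taken from this run.

```c
#include <rados/librados.h>
#include <stdio.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    rados_completion_t wc, rc;
    char buf[16] = {0};
    int r;

    /* connect as client.admin using the default ceph.conf search path */
    if (rados_create(&cluster, "admin") < 0) return 1;
    if (rados_conf_read_file(cluster, NULL) < 0) return 1;
    if (rados_connect(cluster) < 0) return 1;

    /* "example-pool" and "example-obj" are placeholders, not names from this run */
    if (rados_ioctx_create(cluster, "example-pool", &io) < 0) {
        rados_shutdown(cluster);
        return 1;
    }

    /* asynchronous write, then block until the OSDs acknowledge it */
    rados_aio_create_completion(NULL, NULL, NULL, &wc);
    rados_aio_write(io, "example-obj", wc, "hello", 5, 0);
    rados_aio_wait_for_complete(wc);
    r = rados_aio_get_return_value(wc);
    rados_aio_release(wc);
    printf("aio_write -> %d\n", r);

    /* asynchronous read of the same bytes back (the "round trip") */
    rados_aio_create_completion(NULL, NULL, NULL, &rc);
    rados_aio_read(io, "example-obj", rc, buf, sizeof(buf), 0);
    rados_aio_wait_for_complete(rc);
    printf("aio_read -> %d bytes: %.5s\n", rados_aio_get_return_value(rc), buf);
    rados_aio_release(rc);

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}
```

Each completion is waited on synchronously here for brevity; the suite above also exercises callback- and poll-based completion handling (IsComplete, WaitForSafe, Flush, FlushAsync).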
2026-03-09T15:58:01.529 INFO:tasks.workunit.client.0.vm01.stdout: api_aio:
2026-03-09T15:58:01.529 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [----------] 16 tests from LibRadosAioEC
2026-03-09T15:58:01.529 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleWrite
2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.519198+0000 mon.a (mon.0) 1581 : audit [INF] from='client.?
192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.519198+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.519240+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.519240+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.519931+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.519931+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: cluster 2026-03-09T15:58:00.531405+0000 mon.a (mon.0) 1583 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: cluster 2026-03-09T15:58:00.531405+0000 mon.a (mon.0) 1583 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.541096+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.541096+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.560631+0000 mon.a (mon.0) 1585 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.560631+0000 mon.a (mon.0) 1585 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: cluster 2026-03-09T15:58:00.700616+0000 mgr.y (mgr.14520) 186 : cluster [DBG] pgmap v188: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: cluster 2026-03-09T15:58:00.700616+0000 mgr.y (mgr.14520) 186 : cluster [DBG] pgmap v188: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.701516+0000 mon.a (mon.0) 1586 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:00.701516+0000 mon.a (mon.0) 1586 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.388281+0000 mon.a (mon.0) 1587 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.388281+0000 mon.a (mon.0) 1587 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.523698+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.523698+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.523748+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.523748+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.523793+0000 mon.a (mon.0) 1590 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T15:58:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.523793+0000 mon.a (mon.0) 1590 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T15:58:01.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.527259+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.101:0/3961260633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.527259+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.101:0/3961260633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: cluster 2026-03-09T15:58:01.527673+0000 mon.a (mon.0) 1591 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T15:58:01.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: cluster 2026-03-09T15:58:01.527673+0000 mon.a (mon.0) 1591 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T15:58:01.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.548385+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:01 vm09 bash[22983]: audit 2026-03-09T15:58:01.548385+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.519198+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.519198+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.519240+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.519240+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.519931+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.519931+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: cluster 2026-03-09T15:58:00.531405+0000 mon.a (mon.0) 1583 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: cluster 2026-03-09T15:58:00.531405+0000 mon.a (mon.0) 1583 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.541096+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.541096+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.560631+0000 mon.a (mon.0) 1585 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.560631+0000 mon.a (mon.0) 1585 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: cluster 2026-03-09T15:58:00.700616+0000 mgr.y (mgr.14520) 186 : cluster [DBG] pgmap v188: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: cluster 2026-03-09T15:58:00.700616+0000 mgr.y (mgr.14520) 186 : cluster [DBG] pgmap v188: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.701516+0000 mon.a (mon.0) 1586 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:00.701516+0000 mon.a (mon.0) 1586 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.388281+0000 mon.a (mon.0) 1587 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.388281+0000 mon.a (mon.0) 1587 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.523698+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.523698+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.523748+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.523748+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.523793+0000 mon.a (mon.0) 1590 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.523793+0000 mon.a (mon.0) 1590 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.527259+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.101:0/3961260633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.527259+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.101:0/3961260633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: cluster 2026-03-09T15:58:01.527673+0000 mon.a (mon.0) 1591 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: cluster 2026-03-09T15:58:01.527673+0000 mon.a (mon.0) 1591 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.548385+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:01 vm01 bash[20728]: audit 2026-03-09T15:58:01.548385+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.519198+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.519198+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.519240+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.519240+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.519931+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.519931+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.101:0/3577756957' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: cluster 2026-03-09T15:58:00.531405+0000 mon.a (mon.0) 1583 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: cluster 2026-03-09T15:58:00.531405+0000 mon.a (mon.0) 1583 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.541096+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.541096+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.560631+0000 mon.a (mon.0) 1585 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.560631+0000 mon.a (mon.0) 1585 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: cluster 2026-03-09T15:58:00.700616+0000 mgr.y (mgr.14520) 186 : cluster [DBG] pgmap v188: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: cluster 2026-03-09T15:58:00.700616+0000 mgr.y (mgr.14520) 186 : cluster [DBG] pgmap v188: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.701516+0000 mon.a (mon.0) 1586 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:00.701516+0000 mon.a (mon.0) 1586 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.388281+0000 mon.a (mon.0) 1587 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.388281+0000 mon.a (mon.0) 1587 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.523698+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.523698+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm01-59602-27"}]': finished 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.523748+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.523748+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59801-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.523793+0000 mon.a (mon.0) 1590 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.523793+0000 mon.a (mon.0) 1590 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.527259+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.101:0/3961260633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.527259+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.101:0/3961260633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: cluster 2026-03-09T15:58:01.527673+0000 mon.a (mon.0) 1591 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: cluster 2026-03-09T15:58:01.527673+0000 mon.a (mon.0) 1591 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.548385+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:01.929 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:01 vm01 bash[28152]: audit 2026-03-09T15:58:01.548385+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.571080+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.571080+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 
192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.616236+0000 mon.a (mon.0) 1593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.616236+0000 mon.a (mon.0) 1593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.621917+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.621917+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.656610+0000 mon.a (mon.0) 1594 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.656610+0000 mon.a (mon.0) 1594 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.660110+0000 mon.c (mon.2) 145 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.660110+0000 mon.c (mon.2) 145 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.660461+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:01.660461+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:02.389417+0000 mon.a (mon.0) 1596 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:02.389417+0000 mon.a (mon.0) 1596 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:02.527973+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:02.527973+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:02.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:02.528035+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:02.528035+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: cluster 2026-03-09T15:58:02.534978+0000 mon.a (mon.0) 1599 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: cluster 2026-03-09T15:58:02.534978+0000 mon.a (mon.0) 1599 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:02.547924+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:02.547924+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 
192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:02.551043+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:02 vm01 bash[20728]: audit 2026-03-09T15:58:02.551043+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.571080+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.571080+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.616236+0000 mon.a (mon.0) 1593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.616236+0000 mon.a (mon.0) 1593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.621917+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.621917+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.656610+0000 mon.a (mon.0) 1594 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.656610+0000 mon.a (mon.0) 1594 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.660110+0000 mon.c (mon.2) 145 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.660110+0000 mon.c (mon.2) 145 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.660461+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:01.660461+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:02.389417+0000 mon.a (mon.0) 1596 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:02.389417+0000 mon.a (mon.0) 1596 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:02.527973+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:02.527973+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:02.528035+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:02.528035+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: cluster 2026-03-09T15:58:02.534978+0000 mon.a (mon.0) 1599 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: cluster 2026-03-09T15:58:02.534978+0000 mon.a (mon.0) 1599 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:02.547924+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:02.547924+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:02.551043+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.880 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:02 vm01 bash[28152]: audit 2026-03-09T15:58:02.551043+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.571080+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.571080+0000 mon.c (mon.2) 143 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.616236+0000 mon.a (mon.0) 1593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.616236+0000 mon.a (mon.0) 1593 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.621917+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.621917+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.656610+0000 mon.a (mon.0) 1594 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.656610+0000 mon.a (mon.0) 1594 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.660110+0000 mon.c (mon.2) 145 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.660110+0000 mon.c (mon.2) 145 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.660461+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:01.660461+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:02.389417+0000 mon.a (mon.0) 1596 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:02.389417+0000 mon.a (mon.0) 1596 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:02.527973+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:02.527973+0000 mon.a (mon.0) 1597 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm01-59610-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:02.528035+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:02.528035+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm01-59602-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:02.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: cluster 2026-03-09T15:58:02.534978+0000 mon.a (mon.0) 1599 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T15:58:02.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: cluster 2026-03-09T15:58:02.534978+0000 mon.a (mon.0) 1599 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T15:58:02.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:02.547924+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:02.547924+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:02.551043+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:02.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:02 vm09 bash[22983]: audit 2026-03-09T15:58:02.551043+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:03.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:58:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:58:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: cluster 2026-03-09T15:58:02.701072+0000 mgr.y (mgr.14520) 187 : cluster [DBG] pgmap v191: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: cluster 2026-03-09T15:58:02.701072+0000 mgr.y (mgr.14520) 187 : cluster [DBG] pgmap v191: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:02.785946+0000 mon.c (mon.2) 147 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:02.785946+0000 mon.c (mon.2) 147 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:02.786374+0000 mon.a (mon.0) 1601 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:02.786374+0000 mon.a (mon.0) 1601 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:03.390356+0000 mon.a (mon.0) 1602 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:03.390356+0000 mon.a (mon.0) 1602 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:03.531711+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:03.531711+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:03.539036+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:03.539036+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: cluster 2026-03-09T15:58:03.547456+0000 mon.a (mon.0) 1604 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: cluster 2026-03-09T15:58:03.547456+0000 mon.a (mon.0) 1604 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:03.557311+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:03 vm09 bash[22983]: audit 2026-03-09T15:58:03.557311+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: cluster 2026-03-09T15:58:02.701072+0000 mgr.y (mgr.14520) 187 : cluster [DBG] pgmap v191: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: cluster 2026-03-09T15:58:02.701072+0000 mgr.y (mgr.14520) 187 : cluster [DBG] pgmap v191: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:02.785946+0000 mon.c (mon.2) 147 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:02.785946+0000 mon.c (mon.2) 147 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:02.786374+0000 mon.a (mon.0) 1601 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:02.786374+0000 mon.a (mon.0) 1601 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:03.390356+0000 mon.a (mon.0) 1602 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:03.390356+0000 mon.a (mon.0) 1602 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:03.531711+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:03.531711+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:03.539036+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:03.539036+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: cluster 2026-03-09T15:58:03.547456+0000 mon.a (mon.0) 1604 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: cluster 2026-03-09T15:58:03.547456+0000 mon.a (mon.0) 1604 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:03.557311+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:03 vm01 bash[28152]: audit 2026-03-09T15:58:03.557311+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: cluster 2026-03-09T15:58:02.701072+0000 mgr.y (mgr.14520) 187 : cluster [DBG] pgmap v191: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: cluster 2026-03-09T15:58:02.701072+0000 mgr.y (mgr.14520) 187 : cluster [DBG] pgmap v191: 372 pgs: 8 creating+peering, 32 unknown, 332 active+clean; 472 KiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:02.785946+0000 mon.c (mon.2) 147 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:02.785946+0000 mon.c (mon.2) 147 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:02.786374+0000 mon.a (mon.0) 1601 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:02.786374+0000 mon.a (mon.0) 1601 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:03.390356+0000 mon.a (mon.0) 1602 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:03.390356+0000 mon.a (mon.0) 1602 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:03.531711+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:03.531711+0000 mon.a (mon.0) 1603 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:03.539036+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:03.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:03.539036+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:03.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: cluster 2026-03-09T15:58:03.547456+0000 mon.a (mon.0) 1604 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T15:58:03.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: cluster 2026-03-09T15:58:03.547456+0000 mon.a (mon.0) 1604 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T15:58:03.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:03.557311+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:03.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:03 vm01 bash[20728]: audit 2026-03-09T15:58:03.557311+0000 mon.a (mon.0) 1605 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]: dispatch 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:03.559366+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:03.559366+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: cluster 2026-03-09T15:58:03.783883+0000 mon.a (mon.0) 1607 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: cluster 2026-03-09T15:58:03.783883+0000 mon.a (mon.0) 1607 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.391225+0000 mon.a (mon.0) 1608 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.391225+0000 mon.a (mon.0) 1608 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: cluster 2026-03-09T15:58:04.532071+0000 mon.a (mon.0) 1609 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: cluster 2026-03-09T15:58:04.532071+0000 mon.a (mon.0) 1609 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.534967+0000 mon.a (mon.0) 1610 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.534967+0000 mon.a (mon.0) 1610 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.535041+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.535041+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.535084+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.535084+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.546824+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.101:0/1569389062' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.546824+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 
192.168.123.101:0/1569389062' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: cluster 2026-03-09T15:58:04.549675+0000 mon.a (mon.0) 1613 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: cluster 2026-03-09T15:58:04.549675+0000 mon.a (mon.0) 1613 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.560639+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.560639+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.560905+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:04 vm09 bash[22983]: audit 2026-03-09T15:58:04.560905+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:03.559366+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:03.559366+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: cluster 2026-03-09T15:58:03.783883+0000 mon.a (mon.0) 1607 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: cluster 2026-03-09T15:58:03.783883+0000 mon.a (mon.0) 1607 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.391225+0000 mon.a (mon.0) 1608 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.391225+0000 mon.a (mon.0) 1608 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: cluster 2026-03-09T15:58:04.532071+0000 mon.a (mon.0) 1609 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: cluster 2026-03-09T15:58:04.532071+0000 mon.a (mon.0) 1609 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.534967+0000 mon.a (mon.0) 1610 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.534967+0000 mon.a (mon.0) 1610 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.535041+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.535041+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.535084+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.535084+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.546824+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 
192.168.123.101:0/1569389062' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.546824+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.101:0/1569389062' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: cluster 2026-03-09T15:58:04.549675+0000 mon.a (mon.0) 1613 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: cluster 2026-03-09T15:58:04.549675+0000 mon.a (mon.0) 1613 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.560639+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.560639+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.560905+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:04.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:04 vm01 bash[28152]: audit 2026-03-09T15:58:04.560905+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:03.559366+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:03.559366+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: cluster 2026-03-09T15:58:03.783883+0000 mon.a (mon.0) 1607 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: cluster 2026-03-09T15:58:03.783883+0000 mon.a (mon.0) 1607 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.391225+0000 mon.a (mon.0) 1608 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.391225+0000 mon.a (mon.0) 1608 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: cluster 2026-03-09T15:58:04.532071+0000 mon.a (mon.0) 1609 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: cluster 2026-03-09T15:58:04.532071+0000 mon.a (mon.0) 1609 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.534967+0000 mon.a (mon.0) 1610 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.534967+0000 mon.a (mon.0) 1610 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm01-59602-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.535041+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.535041+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-13"}]': finished 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.535084+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? 
192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.535084+0000 mon.a (mon.0) 1612 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.546824+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.101:0/1569389062' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.546824+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.101:0/1569389062' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: cluster 2026-03-09T15:58:04.549675+0000 mon.a (mon.0) 1613 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: cluster 2026-03-09T15:58:04.549675+0000 mon.a (mon.0) 1613 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.560639+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.560639+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]: dispatch 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.560905+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:04.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:04 vm01 bash[20728]: audit 2026-03-09T15:58:04.560905+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:05.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:05 vm01 bash[28152]: cluster 2026-03-09T15:58:04.701541+0000 mgr.y (mgr.14520) 188 : cluster [DBG] pgmap v194: 371 pgs: 40 unknown, 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 1 peering, 323 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 9.2 KiB/s rd, 10 KiB/s wr, 22 op/s 2026-03-09T15:58:05.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:05 vm01 bash[28152]: cluster 2026-03-09T15:58:04.701541+0000 mgr.y (mgr.14520) 188 : cluster [DBG] pgmap v194: 371 pgs: 40 unknown, 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 1 peering, 323 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 9.2 KiB/s rd, 10 KiB/s wr, 22 op/s 2026-03-09T15:58:05.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:05 vm01 bash[28152]: audit 2026-03-09T15:58:05.392121+0000 mon.a (mon.0) 1616 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:05.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:05 vm01 bash[28152]: audit 2026-03-09T15:58:05.392121+0000 mon.a (mon.0) 1616 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:05.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:05 vm01 bash[28152]: audit 2026-03-09T15:58:05.539130+0000 mon.a (mon.0) 1617 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:05.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:05 vm01 bash[28152]: audit 2026-03-09T15:58:05.539130+0000 mon.a (mon.0) 1617 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:05.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:05 vm01 bash[28152]: audit 2026-03-09T15:58:05.539245+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:05.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:05 vm01 bash[28152]: audit 2026-03-09T15:58:05.539245+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:05.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:05 vm01 bash[28152]: cluster 2026-03-09T15:58:05.549840+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T15:58:05.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:05 vm01 bash[28152]: cluster 2026-03-09T15:58:05.549840+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T15:58:05.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:05 vm01 bash[20728]: cluster 2026-03-09T15:58:04.701541+0000 mgr.y (mgr.14520) 188 : cluster [DBG] pgmap v194: 371 pgs: 40 unknown, 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 1 peering, 323 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 9.2 KiB/s rd, 10 KiB/s wr, 22 op/s 2026-03-09T15:58:05.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:05 vm01 bash[20728]: cluster 2026-03-09T15:58:04.701541+0000 mgr.y (mgr.14520) 188 : cluster [DBG] pgmap v194: 371 pgs: 40 unknown, 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 1 peering, 323 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 9.2 KiB/s rd, 10 KiB/s wr, 22 op/s 2026-03-09T15:58:05.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:05 vm01 bash[20728]: audit 2026-03-09T15:58:05.392121+0000 mon.a (mon.0) 1616 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:05.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:05 vm01 bash[20728]: audit 2026-03-09T15:58:05.392121+0000 mon.a (mon.0) 1616 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:05.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:05 vm01 bash[20728]: audit 2026-03-09T15:58:05.539130+0000 mon.a (mon.0) 1617 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:05.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:05 vm01 bash[20728]: audit 2026-03-09T15:58:05.539130+0000 mon.a (mon.0) 1617 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:05.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:05 vm01 bash[20728]: audit 2026-03-09T15:58:05.539245+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:05.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:05 vm01 bash[20728]: audit 2026-03-09T15:58:05.539245+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:05.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:05 vm01 bash[20728]: cluster 2026-03-09T15:58:05.549840+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T15:58:05.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:05 vm01 bash[20728]: cluster 2026-03-09T15:58:05.549840+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T15:58:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:05 vm09 bash[22983]: cluster 2026-03-09T15:58:04.701541+0000 mgr.y (mgr.14520) 188 : cluster [DBG] pgmap v194: 371 pgs: 40 unknown, 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 1 peering, 323 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 9.2 KiB/s rd, 10 KiB/s wr, 22 op/s 2026-03-09T15:58:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:05 vm09 bash[22983]: cluster 2026-03-09T15:58:04.701541+0000 mgr.y (mgr.14520) 188 : cluster [DBG] pgmap v194: 371 pgs: 40 unknown, 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 1 peering, 323 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 9.2 KiB/s rd, 10 KiB/s wr, 22 op/s 2026-03-09T15:58:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:05 vm09 bash[22983]: audit 2026-03-09T15:58:05.392121+0000 mon.a (mon.0) 1616 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:05 vm09 bash[22983]: audit 2026-03-09T15:58:05.392121+0000 mon.a (mon.0) 1616 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:05 vm09 bash[22983]: audit 2026-03-09T15:58:05.539130+0000 mon.a (mon.0) 1617 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:05 vm09 bash[22983]: audit 2026-03-09T15:58:05.539130+0000 mon.a (mon.0) 1617 : audit [INF] from='client.? 192.168.123.101:0/1806339044' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59801-24"}]': finished 2026-03-09T15:58:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:05 vm09 bash[22983]: audit 2026-03-09T15:58:05.539245+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:05 vm09 bash[22983]: audit 2026-03-09T15:58:05.539245+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm01-59610-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:05 vm09 bash[22983]: cluster 2026-03-09T15:58:05.549840+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T15:58:06.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:05 vm09 bash[22983]: cluster 2026-03-09T15:58:05.549840+0000 mon.a (mon.0) 1619 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-09T15:58:06.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:58:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.393035+0000 mon.a (mon.0) 1620 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.393035+0000 mon.a (mon.0) 1620 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.562245+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.101:0/4013608752' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.562245+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.101:0/4013608752' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.567681+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.567681+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.569574+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.569574+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: cluster 2026-03-09T15:58:06.570113+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: cluster 2026-03-09T15:58:06.570113+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.570661+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.570661+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.570838+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.570838+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.572532+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:06 vm01 bash[28152]: audit 2026-03-09T15:58:06.572532+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.393035+0000 mon.a (mon.0) 1620 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.393035+0000 mon.a (mon.0) 1620 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.562245+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 
192.168.123.101:0/4013608752' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.562245+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.101:0/4013608752' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.567681+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.567681+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.569574+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.569574+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: cluster 2026-03-09T15:58:06.570113+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T15:58:06.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: cluster 2026-03-09T15:58:06.570113+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T15:58:06.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.570661+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:06.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.570661+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:06.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.570838+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.570838+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.572532+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:06.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:06 vm01 bash[20728]: audit 2026-03-09T15:58:06.572532+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.393035+0000 mon.a (mon.0) 1620 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.393035+0000 mon.a (mon.0) 1620 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.562245+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.101:0/4013608752' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.562245+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.101:0/4013608752' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.567681+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.567681+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.569574+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.569574+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: cluster 2026-03-09T15:58:06.570113+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: cluster 2026-03-09T15:58:06.570113+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.570661+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.570661+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.570838+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.570838+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.572532+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:06 vm09 bash[22983]: audit 2026-03-09T15:58:06.572532+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:06.302691+0000 mgr.y (mgr.14520) 189 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:06.302691+0000 mgr.y (mgr.14520) 189 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: cluster 2026-03-09T15:58:06.702028+0000 mgr.y (mgr.14520) 190 : cluster [DBG] pgmap v197: 363 pgs: 64 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 295 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s wr, 1 op/s 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: cluster 2026-03-09T15:58:06.702028+0000 mgr.y (mgr.14520) 190 : cluster [DBG] pgmap v197: 363 pgs: 64 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 295 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s wr, 1 op/s 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.393907+0000 mon.a (mon.0) 1625 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.393907+0000 mon.a (mon.0) 1625 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.605042+0000 mon.a (mon.0) 1626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.605042+0000 mon.a (mon.0) 1626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.605155+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.605155+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.605272+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.605272+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.619420+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.101:0/4193883109' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.619420+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.101:0/4193883109' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: cluster 2026-03-09T15:58:07.623454+0000 mon.a (mon.0) 1629 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: cluster 2026-03-09T15:58:07.623454+0000 mon.a (mon.0) 1629 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.625198+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.625198+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.643707+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.643707+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.644504+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:07 vm01 bash[28152]: audit 2026-03-09T15:58:07.644504+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:07.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:06.302691+0000 mgr.y (mgr.14520) 189 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:06.302691+0000 mgr.y (mgr.14520) 189 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: cluster 2026-03-09T15:58:06.702028+0000 mgr.y (mgr.14520) 190 : cluster [DBG] pgmap v197: 363 pgs: 64 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 295 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s wr, 1 op/s 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: cluster 2026-03-09T15:58:06.702028+0000 mgr.y (mgr.14520) 190 : cluster [DBG] pgmap v197: 363 pgs: 64 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 295 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s wr, 1 op/s 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.393907+0000 mon.a (mon.0) 1625 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.393907+0000 mon.a (mon.0) 1625 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.605042+0000 mon.a (mon.0) 1626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.605042+0000 mon.a (mon.0) 1626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.605155+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.605155+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.605272+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.605272+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.619420+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.101:0/4193883109' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.619420+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.101:0/4193883109' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: cluster 2026-03-09T15:58:07.623454+0000 mon.a (mon.0) 1629 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: cluster 2026-03-09T15:58:07.623454+0000 mon.a (mon.0) 1629 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.625198+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.625198+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.643707+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.643707+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.644504+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:07.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:07 vm01 bash[20728]: audit 2026-03-09T15:58:07.644504+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:06.302691+0000 mgr.y (mgr.14520) 189 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:06.302691+0000 mgr.y (mgr.14520) 189 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: cluster 2026-03-09T15:58:06.702028+0000 mgr.y (mgr.14520) 190 : cluster [DBG] pgmap v197: 363 pgs: 64 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 295 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s wr, 1 op/s 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: cluster 2026-03-09T15:58:06.702028+0000 mgr.y (mgr.14520) 190 : cluster [DBG] pgmap v197: 363 pgs: 64 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 295 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s wr, 1 op/s 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.393907+0000 mon.a (mon.0) 1625 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.393907+0000 mon.a (mon.0) 1625 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.605042+0000 mon.a (mon.0) 1626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.605042+0000 mon.a (mon.0) 1626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.605155+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.605155+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm01-59801-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.605272+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.605272+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.619420+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.101:0/4193883109' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.619420+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.101:0/4193883109' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: cluster 2026-03-09T15:58:07.623454+0000 mon.a (mon.0) 1629 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: cluster 2026-03-09T15:58:07.623454+0000 mon.a (mon.0) 1629 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.625198+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.625198+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.643707+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.643707+0000 mon.c (mon.2) 151 : audit [INF] from='client.? 192.168.123.101:0/3928072372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.644504+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:07 vm09 bash[22983]: audit 2026-03-09T15:58:07.644504+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.394619+0000 mon.a (mon.0) 1632 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.394619+0000 mon.a (mon.0) 1632 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.608672+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.608672+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.608811+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.608811+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: cluster 2026-03-09T15:58:08.624318+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: cluster 2026-03-09T15:58:08.624318+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.637452+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.637452+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.647536+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 
192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.647536+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.650286+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.650286+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.650533+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.650533+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.652919+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.652919+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.654856+0000 mon.a (mon.0) 1638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:08 vm01 bash[28152]: audit 2026-03-09T15:58:08.654856+0000 mon.a (mon.0) 1638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.394619+0000 mon.a (mon.0) 1632 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.394619+0000 mon.a (mon.0) 1632 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:08.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.608672+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.608672+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.608811+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.608811+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: cluster 2026-03-09T15:58:08.624318+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: cluster 2026-03-09T15:58:08.624318+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.637452+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.637452+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.647536+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.647536+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 
192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.650286+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.650286+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.650533+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.650533+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.652919+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.652919+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.654856+0000 mon.a (mon.0) 1638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:08.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:08 vm01 bash[20728]: audit 2026-03-09T15:58:08.654856+0000 mon.a (mon.0) 1638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.394619+0000 mon.a (mon.0) 1632 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.394619+0000 mon.a (mon.0) 1632 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.608672+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.608672+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm01-59610-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.608811+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.608811+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm01-59602-28"}]': finished 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: cluster 2026-03-09T15:58:08.624318+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: cluster 2026-03-09T15:58:08.624318+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.637452+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.637452+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.647536+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.647536+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.650286+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.650286+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.650533+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.650533+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.652919+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.652919+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.654856+0000 mon.a (mon.0) 1638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:08 vm09 bash[22983]: audit 2026-03-09T15:58:08.654856+0000 mon.a (mon.0) 1638 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:08.686477+0000 mon.c (mon.2) 152 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:08.686477+0000 mon.c (mon.2) 152 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:08.686718+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:08.686718+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: cluster 2026-03-09T15:58:08.702537+0000 mgr.y (mgr.14520) 191 : cluster [DBG] pgmap v200: 363 pgs: 2 creating+activating, 54 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 303 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: cluster 2026-03-09T15:58:08.702537+0000 mgr.y (mgr.14520) 191 : cluster [DBG] pgmap v200: 363 pgs: 2 creating+activating, 54 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 303 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: cluster 2026-03-09T15:58:08.784519+0000 mon.a (mon.0) 1640 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: cluster 2026-03-09T15:58:08.784519+0000 mon.a (mon.0) 1640 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.395366+0000 mon.a (mon.0) 1641 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.395366+0000 mon.a (mon.0) 1641 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.611291+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.611291+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.611379+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.611379+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.612194+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.612194+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.612752+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.101:0/4123908643' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.612752+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.101:0/4123908643' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: cluster 2026-03-09T15:58:09.614964+0000 mon.a (mon.0) 1644 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: cluster 2026-03-09T15:58:09.614964+0000 mon.a (mon.0) 1644 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.616371+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.616371+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.616765+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.616765+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.634902+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.634902+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.654943+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:09.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:09 vm01 bash[28152]: audit 2026-03-09T15:58:09.654943+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:08.686477+0000 mon.c (mon.2) 152 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:08.686477+0000 mon.c (mon.2) 152 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:08.686718+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:08.686718+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: cluster 2026-03-09T15:58:08.702537+0000 mgr.y (mgr.14520) 191 : cluster [DBG] pgmap v200: 363 pgs: 2 creating+activating, 54 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 303 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: cluster 2026-03-09T15:58:08.702537+0000 mgr.y (mgr.14520) 191 : cluster [DBG] pgmap v200: 363 pgs: 2 creating+activating, 54 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 303 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: cluster 2026-03-09T15:58:08.784519+0000 mon.a (mon.0) 1640 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: cluster 2026-03-09T15:58:08.784519+0000 mon.a (mon.0) 1640 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.395366+0000 mon.a (mon.0) 1641 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.395366+0000 mon.a (mon.0) 1641 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.611291+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.611291+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.611379+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.611379+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.612194+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.612194+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.612752+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.101:0/4123908643' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.612752+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.101:0/4123908643' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: cluster 2026-03-09T15:58:09.614964+0000 mon.a (mon.0) 1644 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: cluster 2026-03-09T15:58:09.614964+0000 mon.a (mon.0) 1644 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.616371+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.616371+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.616765+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.616765+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.634902+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.634902+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.654943+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:09.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:09 vm01 bash[20728]: audit 2026-03-09T15:58:09.654943+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:08.686477+0000 mon.c (mon.2) 152 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:08.686477+0000 mon.c (mon.2) 152 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:08.686718+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:08.686718+0000 mon.a (mon.0) 1639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: cluster 2026-03-09T15:58:08.702537+0000 mgr.y (mgr.14520) 191 : cluster [DBG] pgmap v200: 363 pgs: 2 creating+activating, 54 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 303 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: cluster 2026-03-09T15:58:08.702537+0000 mgr.y (mgr.14520) 191 : cluster [DBG] pgmap v200: 363 pgs: 2 creating+activating, 54 unknown, 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 1 peering, 303 active+clean; 482 KiB data, 729 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: cluster 2026-03-09T15:58:08.784519+0000 mon.a (mon.0) 1640 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: cluster 2026-03-09T15:58:08.784519+0000 mon.a (mon.0) 1640 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.395366+0000 mon.a (mon.0) 1641 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.395366+0000 mon.a (mon.0) 1641 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.611291+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.611291+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm01-59602-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.611379+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.611379+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.612194+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.612194+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.612752+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.101:0/4123908643' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.612752+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.101:0/4123908643' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: cluster 2026-03-09T15:58:09.614964+0000 mon.a (mon.0) 1644 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: cluster 2026-03-09T15:58:09.614964+0000 mon.a (mon.0) 1644 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.616371+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.616371+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.616765+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.616765+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.634902+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.634902+0000 mon.c (mon.2) 153 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.654943+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:10.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:09 vm09 bash[22983]: audit 2026-03-09T15:58:09.654943+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.396205+0000 mon.a (mon.0) 1648 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.396205+0000 mon.a (mon.0) 1648 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.615455+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.615455+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.615618+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.615618+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.633180+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.633180+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: cluster 2026-03-09T15:58:10.639309+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: cluster 2026-03-09T15:58:10.639309+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.644557+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.101:0/4164384167' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.644557+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.101:0/4164384167' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.647494+0000 mon.c (mon.2) 155 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.647494+0000 mon.c (mon.2) 155 : audit [INF] from='client.? 
192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.661810+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.661810+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.662135+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.662135+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.662380+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:11.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:10 vm09 bash[22983]: audit 2026-03-09T15:58:10.662380+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.396205+0000 mon.a (mon.0) 1648 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.396205+0000 mon.a (mon.0) 1648 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.615455+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.615455+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.615618+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.615618+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.633180+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.633180+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: cluster 2026-03-09T15:58:10.639309+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: cluster 2026-03-09T15:58:10.639309+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.644557+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.101:0/4164384167' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.644557+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.101:0/4164384167' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.647494+0000 mon.c (mon.2) 155 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.647494+0000 mon.c (mon.2) 155 : audit [INF] from='client.? 
192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.661810+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.661810+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.662135+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.662135+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.662380+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:10 vm01 bash[28152]: audit 2026-03-09T15:58:10.662380+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.396205+0000 mon.a (mon.0) 1648 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.396205+0000 mon.a (mon.0) 1648 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.615455+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.615455+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm01-59801-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.615618+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.615618+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.633180+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.633180+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: cluster 2026-03-09T15:58:10.639309+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: cluster 2026-03-09T15:58:10.639309+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.644557+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.101:0/4164384167' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.644557+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.101:0/4164384167' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.647494+0000 mon.c (mon.2) 155 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.647494+0000 mon.c (mon.2) 155 : audit [INF] from='client.? 
192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.661810+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.661810+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.662135+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.662135+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.662380+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:11.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:10 vm01 bash[20728]: audit 2026-03-09T15:58:10.662380+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: cluster 2026-03-09T15:58:10.702907+0000 mgr.y (mgr.14520) 192 : cluster [DBG] pgmap v203: 387 pgs: 64 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: cluster 2026-03-09T15:58:10.702907+0000 mgr.y (mgr.14520) 192 : cluster [DBG] pgmap v203: 387 pgs: 64 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.397054+0000 mon.a (mon.0) 1655 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.397054+0000 mon.a (mon.0) 1655 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: cluster 2026-03-09T15:58:11.615839+0000 mon.a (mon.0) 1656 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: cluster 2026-03-09T15:58:11.615839+0000 mon.a (mon.0) 1656 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.637526+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.637526+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.637752+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.637752+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.637880+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]': finished 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.637880+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]': finished 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.637993+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.637993+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.651578+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.651578+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: cluster 2026-03-09T15:58:11.653444+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: cluster 2026-03-09T15:58:11.653444+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.661099+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:11 vm09 bash[22983]: audit 2026-03-09T15:58:11.661099+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:12.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: cluster 2026-03-09T15:58:10.702907+0000 mgr.y (mgr.14520) 192 : cluster [DBG] pgmap v203: 387 pgs: 64 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:12.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: cluster 2026-03-09T15:58:10.702907+0000 mgr.y (mgr.14520) 192 : cluster [DBG] pgmap v203: 387 pgs: 64 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:12.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.397054+0000 mon.a (mon.0) 1655 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:12.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.397054+0000 mon.a (mon.0) 1655 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:12.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: cluster 2026-03-09T15:58:11.615839+0000 mon.a (mon.0) 1656 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:12.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: cluster 2026-03-09T15:58:11.615839+0000 mon.a (mon.0) 1656 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:12.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.637526+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:12.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.637526+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:12.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.637752+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:12.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.637752+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.637880+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.637880+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.637993+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.637993+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.651578+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.651578+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: cluster 2026-03-09T15:58:11.653444+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: cluster 2026-03-09T15:58:11.653444+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.661099+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:11 vm01 bash[28152]: audit 2026-03-09T15:58:11.661099+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: cluster 2026-03-09T15:58:10.702907+0000 mgr.y (mgr.14520) 192 : cluster [DBG] pgmap v203: 387 pgs: 64 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: cluster 2026-03-09T15:58:10.702907+0000 mgr.y (mgr.14520) 192 : cluster [DBG] pgmap v203: 387 pgs: 64 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.397054+0000 mon.a (mon.0) 1655 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.397054+0000 mon.a (mon.0) 1655 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: cluster 2026-03-09T15:58:11.615839+0000 mon.a (mon.0) 1656 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: cluster 2026-03-09T15:58:11.615839+0000 mon.a (mon.0) 1656 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.637526+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.637526+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm01-59602-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.637752+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.637752+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm01-59610-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.637880+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.637880+0000 mon.a (mon.0) 1659 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-15", "mode": "writeback"}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.637993+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.637993+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished
2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.651578+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch
2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.651578+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.101:0/1228567487' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch
2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: cluster 2026-03-09T15:58:11.653444+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in
2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: cluster 2026-03-09T15:58:11.653444+0000 mon.a (mon.0) 1661 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in
2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.661099+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch
2026-03-09T15:58:12.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:11 vm01 bash[20728]: audit 2026-03-09T15:58:11.661099+0000 mon.a (mon.0) 1662 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]: dispatch
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: RUN ] LibRadosSnapshotsECPP.SnapGetNamePP
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapGetNamePP (2171 ms)
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP (9080 ms total)
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp:
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.SnapPP
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.SnapPP (4253 ms)
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.RollbackPP
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.RollbackPP (3935 ms)
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.Bug11677
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.Bug11677 (4095 ms)
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP (12283 ms total)
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp:
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [----------] Global test environment tear-down
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [==========] 21 tests from 5 test suites ran. (98426 ms total)
2026-03-09T15:58:12.685 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ PASSED ] 20 tests.
2026-03-09T15:58:12.686 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ SKIPPED ] 1 test, listed below:
2026-03-09T15:58:12.686 INFO:tasks.workunit.client.0.vm01.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback
2026-03-09T15:58:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:12 vm09 bash[22983]: audit 2026-03-09T15:58:12.397895+0000 mon.a (mon.0) 1663 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-09T15:58:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:12 vm09 bash[22983]: audit 2026-03-09T15:58:12.397895+0000 mon.a (mon.0) 1663 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-09T15:58:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:12 vm09 bash[22983]: audit 2026-03-09T15:58:12.641361+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished
2026-03-09T15:58:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:12 vm09 bash[22983]: audit 2026-03-09T15:58:12.641361+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished
2026-03-09T15:58:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:12 vm09 bash[22983]: cluster 2026-03-09T15:58:12.667397+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in
2026-03-09T15:58:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:12 vm09 bash[22983]: cluster 2026-03-09T15:58:12.667397+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in
2026-03-09T15:58:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:12 vm09 bash[22983]: audit 2026-03-09T15:58:12.670013+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.101:0/2804144770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-09T15:58:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:12 vm09 bash[22983]: audit 2026-03-09T15:58:12.670013+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.101:0/2804144770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-09T15:58:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:12 vm09 bash[22983]: audit 2026-03-09T15:58:12.674267+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:12 vm09 bash[22983]: audit 2026-03-09T15:58:12.674267+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:12 vm01 bash[20728]: audit 2026-03-09T15:58:12.397895+0000 mon.a (mon.0) 1663 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:12 vm01 bash[20728]: audit 2026-03-09T15:58:12.397895+0000 mon.a (mon.0) 1663 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:12 vm01 bash[20728]: audit 2026-03-09T15:58:12.641361+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:58:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:12 vm01 bash[20728]: audit 2026-03-09T15:58:12.641361+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:58:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:12 vm01 bash[20728]: cluster 2026-03-09T15:58:12.667397+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T15:58:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:12 vm01 bash[20728]: cluster 2026-03-09T15:58:12.667397+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T15:58:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:12 vm01 bash[20728]: audit 2026-03-09T15:58:12.670013+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.101:0/2804144770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:12 vm01 bash[20728]: audit 2026-03-09T15:58:12.670013+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.101:0/2804144770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:12 vm01 bash[20728]: audit 2026-03-09T15:58:12.674267+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:12 vm01 bash[20728]: audit 2026-03-09T15:58:12.674267+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:13.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:12 vm01 bash[28152]: audit 2026-03-09T15:58:12.397895+0000 mon.a (mon.0) 1663 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:13.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:12 vm01 bash[28152]: audit 2026-03-09T15:58:12.397895+0000 mon.a (mon.0) 1663 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:13.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:12 vm01 bash[28152]: audit 2026-03-09T15:58:12.641361+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:58:13.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:12 vm01 bash[28152]: audit 2026-03-09T15:58:12.641361+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm01-59908-21"}]': finished 2026-03-09T15:58:13.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:12 vm01 bash[28152]: cluster 2026-03-09T15:58:12.667397+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T15:58:13.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:12 vm01 bash[28152]: cluster 2026-03-09T15:58:12.667397+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T15:58:13.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:12 vm01 bash[28152]: audit 2026-03-09T15:58:12.670013+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.101:0/2804144770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:13.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:12 vm01 bash[28152]: audit 2026-03-09T15:58:12.670013+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.101:0/2804144770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:13.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:12 vm01 bash[28152]: audit 2026-03-09T15:58:12.674267+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:13.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:12 vm01 bash[28152]: audit 2026-03-09T15:58:12.674267+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:13.178 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:58:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:58:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: cluster 2026-03-09T15:58:12.703327+0000 mgr.y (mgr.14520) 193 : cluster [DBG] pgmap v206: 363 pgs: 40 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: cluster 2026-03-09T15:58:12.703327+0000 mgr.y (mgr.14520) 193 : cluster [DBG] pgmap v206: 363 pgs: 40 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:12.739128+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:12.739128+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:12.739590+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:12.739590+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.398845+0000 mon.a (mon.0) 1668 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.398845+0000 mon.a (mon.0) 1668 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.656628+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.656628+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.656666+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.656666+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: cluster 2026-03-09T15:58:13.662679+0000 mon.a (mon.0) 1671 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: cluster 2026-03-09T15:58:13.662679+0000 mon.a (mon.0) 1671 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.669164+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.669164+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.669280+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.101:0/912189034' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.669280+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.101:0/912189034' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.672848+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.672848+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.672945+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.672945+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.689293+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:14.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.689293+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:14.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.690118+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:14.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:13 vm09 bash[22983]: audit 2026-03-09T15:58:13.690118+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: cluster 2026-03-09T15:58:12.703327+0000 mgr.y (mgr.14520) 193 : cluster [DBG] pgmap v206: 363 pgs: 40 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: cluster 2026-03-09T15:58:12.703327+0000 mgr.y (mgr.14520) 193 : cluster [DBG] pgmap v206: 363 pgs: 40 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:12.739128+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:12.739128+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:12.739590+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:12.739590+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.398845+0000 mon.a (mon.0) 1668 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.398845+0000 mon.a (mon.0) 1668 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.656628+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.656628+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.656666+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.656666+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: cluster 2026-03-09T15:58:13.662679+0000 mon.a (mon.0) 1671 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: cluster 2026-03-09T15:58:13.662679+0000 mon.a (mon.0) 1671 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.669164+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.669164+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 
192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.669280+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.101:0/912189034' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.669280+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.101:0/912189034' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.672848+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.672848+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.672945+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.672945+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.689293+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.689293+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.690118+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:13 vm01 bash[20728]: audit 2026-03-09T15:58:13.690118+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: cluster 2026-03-09T15:58:12.703327+0000 mgr.y (mgr.14520) 193 : cluster [DBG] pgmap v206: 363 pgs: 40 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: cluster 2026-03-09T15:58:12.703327+0000 mgr.y (mgr.14520) 193 : cluster [DBG] pgmap v206: 363 pgs: 40 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:12.739128+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:12.739128+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:12.739590+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:12.739590+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.398845+0000 mon.a (mon.0) 1668 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.398845+0000 mon.a (mon.0) 1668 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.656628+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.656628+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm01-59801-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.656666+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.656666+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: cluster 2026-03-09T15:58:13.662679+0000 mon.a (mon.0) 1671 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: cluster 2026-03-09T15:58:13.662679+0000 mon.a (mon.0) 1671 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.669164+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.669164+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.669280+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.101:0/912189034' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.669280+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.101:0/912189034' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.672848+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.672848+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.672945+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.672945+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.689293+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.689293+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.690118+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:14.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:13 vm01 bash[28152]: audit 2026-03-09T15:58:13.690118+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: cluster 2026-03-09T15:58:13.785158+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: cluster 2026-03-09T15:58:13.785158+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.089039+0000 mon.a (mon.0) 1676 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.089039+0000 mon.a (mon.0) 1676 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.399741+0000 mon.a (mon.0) 1677 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.399741+0000 mon.a (mon.0) 1677 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: cluster 2026-03-09T15:58:14.657073+0000 mon.a (mon.0) 1678 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: cluster 2026-03-09T15:58:14.657073+0000 mon.a (mon.0) 1678 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.660838+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.660838+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.660896+0000 mon.a (mon.0) 1680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.660896+0000 mon.a (mon.0) 1680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.660937+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.660937+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: cluster 2026-03-09T15:58:14.666164+0000 mon.a (mon.0) 1682 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: cluster 2026-03-09T15:58:14.666164+0000 mon.a (mon.0) 1682 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.669018+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.669018+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 
192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.685498+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.685498+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.691127+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.691127+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.694953+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.694953+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.695476+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:15.134 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:14 vm09 bash[22983]: audit 2026-03-09T15:58:14.695476+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? 
192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: cluster 2026-03-09T15:58:13.785158+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: cluster 2026-03-09T15:58:13.785158+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.089039+0000 mon.a (mon.0) 1676 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.089039+0000 mon.a (mon.0) 1676 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.399741+0000 mon.a (mon.0) 1677 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.399741+0000 mon.a (mon.0) 1677 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: cluster 2026-03-09T15:58:14.657073+0000 mon.a (mon.0) 1678 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: cluster 2026-03-09T15:58:14.657073+0000 mon.a (mon.0) 1678 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.660838+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.660838+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.660896+0000 mon.a (mon.0) 1680 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.660896+0000 mon.a (mon.0) 1680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.660937+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.660937+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: cluster 2026-03-09T15:58:14.666164+0000 mon.a (mon.0) 1682 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: cluster 2026-03-09T15:58:14.666164+0000 mon.a (mon.0) 1682 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.669018+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.669018+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.685498+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.685498+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.691127+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.691127+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? 
192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.694953+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.694953+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.695476+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:14 vm01 bash[20728]: audit 2026-03-09T15:58:14.695476+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: cluster 2026-03-09T15:58:13.785158+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: cluster 2026-03-09T15:58:13.785158+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.089039+0000 mon.a (mon.0) 1676 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.089039+0000 mon.a (mon.0) 1676 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.399741+0000 mon.a (mon.0) 1677 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.399741+0000 mon.a (mon.0) 1677 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: cluster 2026-03-09T15:58:14.657073+0000 mon.a (mon.0) 1678 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: cluster 2026-03-09T15:58:14.657073+0000 mon.a (mon.0) 1678 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.660838+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.660838+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.660896+0000 mon.a (mon.0) 1680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.660896+0000 mon.a (mon.0) 1680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm01-59610-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.660937+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.660937+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-15"}]': finished 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: cluster 2026-03-09T15:58:14.666164+0000 mon.a (mon.0) 1682 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: cluster 2026-03-09T15:58:14.666164+0000 mon.a (mon.0) 1682 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.669018+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.669018+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 
192.168.123.101:0/1220230449' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.685498+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.685498+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.691127+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.691127+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.694953+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.694953+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.695476+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:15.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:14 vm01 bash[28152]: audit 2026-03-09T15:58:14.695476+0000 mon.a (mon.0) 1686 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: Running main() from gmock_main.cc 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [==========] Running 57 tests from 4 test suites. 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [----------] Global test environment set-up. 
2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.TooBigPP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.TooBigPP (2930 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolQuotaPP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolQuotaPP (18629 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleWritePP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleWritePP (6157 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.WaitForSafePP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.WaitForSafePP (3024 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP (3098 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP2 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP2 (3008 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP3 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP3 (4010 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripSparseReadPP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripSparseReadPP (3227 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsCompletePP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.IsCompletePP (2650 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsSafePP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.IsSafePP (2969 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.ReturnValuePP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.ReturnValuePP (3086 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushPP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushPP (3166 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushAsyncPP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushAsyncPP (2984 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP (3016 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP2 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: 
api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP2 (3083 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP (3009 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP2 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP2 (2729 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPPNS 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPPNS (3110 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPP (3245 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime (3218 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime2 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime2 (3018 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.StatRemovePP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.StatRemovePP (3156 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.ExecuteClassPP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.ExecuteClassPP (3008 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.OmapPP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.OmapPP (3021 ms) 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiWritePP 2026-03-09T15:58:15.688 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiWritePP (3045 ms) 2026-03-09T15:58:15.689 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.AioUnlockPP 2026-03-09T15:58:15.689 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.AioUnlockPP (3053 ms) 2026-03-09T15:58:15.689 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripAppendPP 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: cluster 2026-03-09T15:58:14.703714+0000 mgr.y (mgr.14520) 194 : cluster [DBG] pgmap v209: 355 pgs: 32 creating+peering, 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: cluster 2026-03-09T15:58:14.703714+0000 mgr.y (mgr.14520) 194 : cluster [DBG] pgmap v209: 355 pgs: 32 creating+peering, 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:16.133 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.400549+0000 mon.a (mon.0) 1687 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.400549+0000 mon.a (mon.0) 1687 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.671929+0000 mon.a (mon.0) 1688 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.671929+0000 mon.a (mon.0) 1688 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.671961+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.671961+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: cluster 2026-03-09T15:58:15.679169+0000 mon.a (mon.0) 1690 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: cluster 2026-03-09T15:58:15.679169+0000 mon.a (mon.0) 1690 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.686711+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.686711+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.697696+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 
192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.697696+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.702444+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.702444+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.704572+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.704572+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.705504+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.705504+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.707511+0000 mon.a (mon.0) 1693 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.707511+0000 mon.a (mon.0) 1693 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.715745+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:15 vm09 bash[22983]: audit 2026-03-09T15:58:15.715745+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: cluster 2026-03-09T15:58:14.703714+0000 mgr.y (mgr.14520) 194 : cluster [DBG] pgmap v209: 355 pgs: 32 creating+peering, 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: cluster 2026-03-09T15:58:14.703714+0000 mgr.y (mgr.14520) 194 : cluster [DBG] pgmap v209: 355 pgs: 32 creating+peering, 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.400549+0000 mon.a (mon.0) 1687 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.400549+0000 mon.a (mon.0) 1687 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.671929+0000 mon.a (mon.0) 1688 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.671929+0000 mon.a (mon.0) 1688 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.671961+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.671961+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 
192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: cluster 2026-03-09T15:58:15.679169+0000 mon.a (mon.0) 1690 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: cluster 2026-03-09T15:58:15.679169+0000 mon.a (mon.0) 1690 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.686711+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.686711+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.697696+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.697696+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.702444+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.702444+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.704572+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.704572+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.705504+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.705504+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.707511+0000 mon.a (mon.0) 1693 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.707511+0000 mon.a (mon.0) 1693 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.715745+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:15 vm01 bash[20728]: audit 2026-03-09T15:58:15.715745+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: cluster 2026-03-09T15:58:14.703714+0000 mgr.y (mgr.14520) 194 : cluster [DBG] pgmap v209: 355 pgs: 32 creating+peering, 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: cluster 2026-03-09T15:58:14.703714+0000 mgr.y (mgr.14520) 194 : cluster [DBG] pgmap v209: 355 pgs: 32 creating+peering, 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.400549+0000 mon.a (mon.0) 1687 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.400549+0000 mon.a (mon.0) 1687 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.671929+0000 mon.a (mon.0) 1688 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.671929+0000 mon.a (mon.0) 1688 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm01-59602-29"}]': finished 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.671961+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.671961+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: cluster 2026-03-09T15:58:15.679169+0000 mon.a (mon.0) 1690 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: cluster 2026-03-09T15:58:15.679169+0000 mon.a (mon.0) 1690 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.686711+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.686711+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.697696+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.697696+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 
192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.702444+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.702444+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.704572+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.704572+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.705504+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.705504+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.707511+0000 mon.a (mon.0) 1693 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.707511+0000 mon.a (mon.0) 1693 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.715745+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:15 vm01 bash[28152]: audit 2026-03-09T15:58:15.715745+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:16.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:58:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.401515+0000 mon.a (mon.0) 1695 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.401515+0000 mon.a (mon.0) 1695 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.676689+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.676689+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.689214+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.689214+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: cluster 2026-03-09T15:58:16.689965+0000 mon.a (mon.0) 1697 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: cluster 2026-03-09T15:58:16.689965+0000 mon.a (mon.0) 1697 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.693021+0000 mon.a (mon.0) 1698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.693021+0000 mon.a (mon.0) 1698 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.694225+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.101:0/3428711347' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.694225+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.101:0/3428711347' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.696600+0000 mon.c (mon.2) 159 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.696600+0000 mon.c (mon.2) 159 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.697577+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.697577+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.698689+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:16 vm09 bash[22983]: audit 2026-03-09T15:58:16.698689+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.401515+0000 mon.a (mon.0) 1695 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.401515+0000 mon.a (mon.0) 1695 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.676689+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.676689+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.689214+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.689214+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: cluster 2026-03-09T15:58:16.689965+0000 mon.a (mon.0) 1697 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: cluster 2026-03-09T15:58:16.689965+0000 mon.a (mon.0) 1697 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.693021+0000 mon.a (mon.0) 1698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.693021+0000 mon.a (mon.0) 1698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.694225+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 
192.168.123.101:0/3428711347' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.694225+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.101:0/3428711347' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.696600+0000 mon.c (mon.2) 159 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.696600+0000 mon.c (mon.2) 159 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.697577+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.697577+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.698689+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:16 vm01 bash[20728]: audit 2026-03-09T15:58:16.698689+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.401515+0000 mon.a (mon.0) 1695 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.401515+0000 mon.a (mon.0) 1695 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.676689+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.676689+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm01-59602-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.689214+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.689214+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: cluster 2026-03-09T15:58:16.689965+0000 mon.a (mon.0) 1697 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: cluster 2026-03-09T15:58:16.689965+0000 mon.a (mon.0) 1697 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.693021+0000 mon.a (mon.0) 1698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.693021+0000 mon.a (mon.0) 1698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.694225+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.101:0/3428711347' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.694225+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.101:0/3428711347' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.696600+0000 mon.c (mon.2) 159 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.696600+0000 mon.c (mon.2) 159 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.697577+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.697577+0000 mon.a (mon.0) 1699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.698689+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:17.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:16 vm01 bash[28152]: audit 2026-03-09T15:58:16.698689+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:16.313003+0000 mgr.y (mgr.14520) 195 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:16.313003+0000 mgr.y (mgr.14520) 195 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: cluster 2026-03-09T15:58:16.704121+0000 mgr.y (mgr.14520) 196 : cluster [DBG] pgmap v212: 355 pgs: 64 unknown, 2 active+clean+snaptrim, 289 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: cluster 2026-03-09T15:58:16.704121+0000 mgr.y (mgr.14520) 196 : cluster [DBG] pgmap v212: 355 pgs: 64 unknown, 2 active+clean+snaptrim, 289 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.402361+0000 mon.a (mon.0) 1701 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.402361+0000 mon.a (mon.0) 1701 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.687288+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.687288+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.687362+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.687362+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.687535+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.687535+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: cluster 2026-03-09T15:58:17.740036+0000 mon.a (mon.0) 1705 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: cluster 2026-03-09T15:58:17.740036+0000 mon.a (mon.0) 1705 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.750552+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.750552+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.751779+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:17 vm09 bash[22983]: audit 2026-03-09T15:58:17.751779+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:16.313003+0000 mgr.y (mgr.14520) 195 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:16.313003+0000 mgr.y (mgr.14520) 195 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: cluster 2026-03-09T15:58:16.704121+0000 mgr.y (mgr.14520) 196 : cluster [DBG] pgmap v212: 355 pgs: 64 unknown, 2 active+clean+snaptrim, 289 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: cluster 2026-03-09T15:58:16.704121+0000 mgr.y (mgr.14520) 196 : cluster [DBG] pgmap v212: 355 pgs: 64 unknown, 2 active+clean+snaptrim, 289 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.402361+0000 mon.a (mon.0) 1701 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.402361+0000 mon.a (mon.0) 1701 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.687288+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? 
192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.687288+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.687362+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.687362+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.687535+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.687535+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: cluster 2026-03-09T15:58:17.740036+0000 mon.a (mon.0) 1705 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: cluster 2026-03-09T15:58:17.740036+0000 mon.a (mon.0) 1705 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.750552+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.750552+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.751779+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:17 vm01 bash[28152]: audit 2026-03-09T15:58:17.751779+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:16.313003+0000 mgr.y (mgr.14520) 195 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:16.313003+0000 mgr.y (mgr.14520) 195 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: cluster 2026-03-09T15:58:16.704121+0000 mgr.y (mgr.14520) 196 : cluster [DBG] pgmap v212: 355 pgs: 64 unknown, 2 active+clean+snaptrim, 289 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: cluster 2026-03-09T15:58:16.704121+0000 mgr.y (mgr.14520) 196 : cluster [DBG] pgmap v212: 355 pgs: 64 unknown, 2 active+clean+snaptrim, 289 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.402361+0000 mon.a (mon.0) 1701 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.402361+0000 mon.a (mon.0) 1701 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.687288+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.687288+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm01-59801-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.687362+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.687362+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.687535+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.687535+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm01-59610-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: cluster 2026-03-09T15:58:17.740036+0000 mon.a (mon.0) 1705 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: cluster 2026-03-09T15:58:17.740036+0000 mon.a (mon.0) 1705 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.750552+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.750552+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.751779+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:17 vm01 bash[20728]: audit 2026-03-09T15:58:17.751779+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: audit 2026-03-09T15:58:18.403171+0000 mon.a (mon.0) 1707 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: audit 2026-03-09T15:58:18.403171+0000 mon.a (mon.0) 1707 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: audit 2026-03-09T15:58:18.729078+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: audit 2026-03-09T15:58:18.729078+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: audit 2026-03-09T15:58:18.729148+0000 mon.a (mon.0) 1709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: audit 2026-03-09T15:58:18.729148+0000 mon.a (mon.0) 1709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: cluster 2026-03-09T15:58:18.734415+0000 mon.a (mon.0) 1710 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: cluster 2026-03-09T15:58:18.734415+0000 mon.a (mon.0) 1710 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: audit 2026-03-09T15:58:18.749342+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: audit 2026-03-09T15:58:18.749342+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: audit 2026-03-09T15:58:18.749750+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:18 vm09 bash[22983]: audit 2026-03-09T15:58:18.749750+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: audit 2026-03-09T15:58:18.403171+0000 mon.a (mon.0) 1707 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: audit 2026-03-09T15:58:18.403171+0000 mon.a (mon.0) 1707 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: audit 2026-03-09T15:58:18.729078+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: audit 2026-03-09T15:58:18.729078+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: audit 2026-03-09T15:58:18.729148+0000 mon.a (mon.0) 1709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: audit 2026-03-09T15:58:18.729148+0000 mon.a (mon.0) 1709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: cluster 2026-03-09T15:58:18.734415+0000 mon.a (mon.0) 1710 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: cluster 2026-03-09T15:58:18.734415+0000 mon.a (mon.0) 1710 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: audit 2026-03-09T15:58:18.749342+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: audit 2026-03-09T15:58:18.749342+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: audit 2026-03-09T15:58:18.749750+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:18 vm01 bash[28152]: audit 2026-03-09T15:58:18.749750+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: audit 2026-03-09T15:58:18.403171+0000 mon.a (mon.0) 1707 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: audit 2026-03-09T15:58:18.403171+0000 mon.a (mon.0) 1707 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: audit 2026-03-09T15:58:18.729078+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: audit 2026-03-09T15:58:18.729078+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm01-59602-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: audit 2026-03-09T15:58:18.729148+0000 mon.a (mon.0) 1709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: audit 2026-03-09T15:58:18.729148+0000 mon.a (mon.0) 1709 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: cluster 2026-03-09T15:58:18.734415+0000 mon.a (mon.0) 1710 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: cluster 2026-03-09T15:58:18.734415+0000 mon.a (mon.0) 1710 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: audit 2026-03-09T15:58:18.749342+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: audit 2026-03-09T15:58:18.749342+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: audit 2026-03-09T15:58:18.749750+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:18 vm01 bash[20728]: audit 2026-03-09T15:58:18.749750+0000 mon.a (mon.0) 1711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: cluster 2026-03-09T15:58:18.704795+0000 mgr.y (mgr.14520) 197 : cluster [DBG] pgmap v214: 363 pgs: 1 creating+peering, 65 unknown, 2 active+clean+snaptrim, 295 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: cluster 2026-03-09T15:58:18.704795+0000 mgr.y (mgr.14520) 197 : cluster [DBG] pgmap v214: 363 pgs: 1 creating+peering, 65 unknown, 2 active+clean+snaptrim, 295 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: cluster 2026-03-09T15:58:18.795361+0000 mon.a (mon.0) 1712 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: cluster 2026-03-09T15:58:18.795361+0000 mon.a (mon.0) 1712 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: audit 2026-03-09T15:58:18.819475+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: audit 2026-03-09T15:58:18.819475+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: audit 2026-03-09T15:58:18.838644+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: audit 2026-03-09T15:58:18.838644+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: cluster 2026-03-09T15:58:18.850500+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: cluster 2026-03-09T15:58:18.850500+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: audit 2026-03-09T15:58:18.856927+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: audit 2026-03-09T15:58:18.856927+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: audit 2026-03-09T15:58:18.857512+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: audit 2026-03-09T15:58:18.857512+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: audit 2026-03-09T15:58:19.404003+0000 mon.a (mon.0) 1717 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:19 vm09 bash[22983]: audit 2026-03-09T15:58:19.404003+0000 mon.a (mon.0) 1717 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: cluster 2026-03-09T15:58:18.704795+0000 mgr.y (mgr.14520) 197 : cluster [DBG] pgmap v214: 363 pgs: 1 creating+peering, 65 unknown, 2 active+clean+snaptrim, 295 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: cluster 2026-03-09T15:58:18.704795+0000 mgr.y (mgr.14520) 197 : cluster [DBG] pgmap v214: 363 pgs: 1 creating+peering, 65 unknown, 2 active+clean+snaptrim, 295 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: cluster 2026-03-09T15:58:18.795361+0000 mon.a (mon.0) 1712 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: cluster 2026-03-09T15:58:18.795361+0000 mon.a (mon.0) 1712 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: audit 2026-03-09T15:58:18.819475+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: audit 2026-03-09T15:58:18.819475+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: audit 2026-03-09T15:58:18.838644+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: audit 2026-03-09T15:58:18.838644+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: cluster 2026-03-09T15:58:18.850500+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: cluster 2026-03-09T15:58:18.850500+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: audit 2026-03-09T15:58:18.856927+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: audit 2026-03-09T15:58:18.856927+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: audit 2026-03-09T15:58:18.857512+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: audit 2026-03-09T15:58:18.857512+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: audit 2026-03-09T15:58:19.404003+0000 mon.a (mon.0) 1717 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:19 vm01 bash[20728]: audit 2026-03-09T15:58:19.404003+0000 mon.a (mon.0) 1717 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: cluster 2026-03-09T15:58:18.704795+0000 mgr.y (mgr.14520) 197 : cluster [DBG] pgmap v214: 363 pgs: 1 creating+peering, 65 unknown, 2 active+clean+snaptrim, 295 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: cluster 2026-03-09T15:58:18.704795+0000 mgr.y (mgr.14520) 197 : cluster [DBG] pgmap v214: 363 pgs: 1 creating+peering, 65 unknown, 2 active+clean+snaptrim, 295 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: cluster 2026-03-09T15:58:18.795361+0000 mon.a (mon.0) 1712 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: cluster 2026-03-09T15:58:18.795361+0000 mon.a (mon.0) 1712 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: audit 2026-03-09T15:58:18.819475+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: audit 2026-03-09T15:58:18.819475+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: audit 2026-03-09T15:58:18.838644+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: audit 2026-03-09T15:58:18.838644+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: cluster 2026-03-09T15:58:18.850500+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: cluster 2026-03-09T15:58:18.850500+0000 mon.a (mon.0) 1714 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: audit 2026-03-09T15:58:18.856927+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: audit 2026-03-09T15:58:18.856927+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]: dispatch 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: audit 2026-03-09T15:58:18.857512+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: audit 2026-03-09T15:58:18.857512+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: audit 2026-03-09T15:58:19.404003+0000 mon.a (mon.0) 1717 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:19 vm01 bash[28152]: audit 2026-03-09T15:58:19.404003+0000 mon.a (mon.0) 1717 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: cluster 2026-03-09T15:58:19.842608+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: cluster 2026-03-09T15:58:19.842608+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.859295+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]': finished 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.859295+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]': finished 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.859436+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.859436+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.877882+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.877882+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: cluster 2026-03-09T15:58:19.879681+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: cluster 2026-03-09T15:58:19.879681+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.901288+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.901288+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.903856+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.903856+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.953671+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.953671+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.954000+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:19.954000+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:20.404876+0000 mon.a (mon.0) 1725 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:21.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:20 vm09 bash[22983]: audit 2026-03-09T15:58:20.404876+0000 mon.a (mon.0) 1725 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: cluster 2026-03-09T15:58:19.842608+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: cluster 2026-03-09T15:58:19.842608+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.859295+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]': finished 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.859295+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]': finished 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.859436+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.859436+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.877882+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.877882+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: cluster 2026-03-09T15:58:19.879681+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: cluster 2026-03-09T15:58:19.879681+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.901288+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.901288+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.903856+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.903856+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.953671+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.953671+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.954000+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:19.954000+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:20.404876+0000 mon.a (mon.0) 1725 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:20 vm01 bash[28152]: audit 2026-03-09T15:58:20.404876+0000 mon.a (mon.0) 1725 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: cluster 2026-03-09T15:58:19.842608+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: cluster 2026-03-09T15:58:19.842608+0000 mon.a (mon.0) 1718 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:21.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.859295+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]': finished 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.859295+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-17", "mode": "writeback"}]': finished 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.859436+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.859436+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? 192.168.123.101:0/3481880041' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm01-59610-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.877882+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.877882+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: cluster 2026-03-09T15:58:19.879681+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: cluster 2026-03-09T15:58:19.879681+0000 mon.a (mon.0) 1721 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.901288+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.901288+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.903856+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.903856+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.953671+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.953671+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.954000+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:19.954000+0000 mon.a (mon.0) 1724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:20.404876+0000 mon.a (mon.0) 1725 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:21.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:20 vm01 bash[20728]: audit 2026-03-09T15:58:20.404876+0000 mon.a (mon.0) 1725 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:21.928 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAiodosMiscPP (78398 ms total) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosTwoPoolsECPP.CopyFrom 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosTwoPoolsECPP.CopyFrom (207 ms) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP (207 ms total) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)0, Checksummer::xxhash32, ceph_le > 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Subset 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Subset (52 ms) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Chunked 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Chunked (19 ms) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0 (72 ms total) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)1, Checksummer::xxhash64, ceph_le > 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Subset 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Subset (50 ms) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Chunked 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Chunked (7 ms) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1 (57 ms total) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)2, Checksummer::crc32c, ceph_le > 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Subset 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Subset (44 ms) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Chunked 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Chunked (14 ms) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2 (58 ms total) 
2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ RUN ] LibRadosMiscECPP.CompareExtentRange 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ OK ] LibRadosMiscECPP.CompareExtentRange (1047 ms) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP (1047 ms total) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [----------] Global test environment tear-down 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [==========] 31 tests from 7 test suites ran. (107738 ms total) 2026-03-09T15:58:21.929 INFO:tasks.workunit.client.0.vm01.stdout: api_misc_pp: [ PASSED ] 31 tests. 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: cluster 2026-03-09T15:58:20.705285+0000 mgr.y (mgr.14520) 198 : cluster [DBG] pgmap v218: 355 pgs: 26 creating+peering, 6 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: cluster 2026-03-09T15:58:20.705285+0000 mgr.y (mgr.14520) 198 : cluster [DBG] pgmap v218: 355 pgs: 26 creating+peering, 6 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.868589+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.868589+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.868675+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.868675+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.868863+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.868863+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: cluster 2026-03-09T15:58:20.875385+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: cluster 2026-03-09T15:58:20.875385+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.875909+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.875909+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.878095+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.878095+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.887581+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.887581+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.890422+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.890422+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.890837+0000 mon.a (mon.0) 1732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:20.890837+0000 mon.a (mon.0) 1732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:21.405742+0000 mon.a (mon.0) 1733 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:21 vm01 bash[20728]: audit 2026-03-09T15:58:21.405742+0000 mon.a (mon.0) 1733 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: cluster 2026-03-09T15:58:20.705285+0000 mgr.y (mgr.14520) 198 : cluster [DBG] pgmap v218: 355 pgs: 26 creating+peering, 6 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: cluster 2026-03-09T15:58:20.705285+0000 mgr.y (mgr.14520) 198 : cluster [DBG] pgmap v218: 355 pgs: 26 creating+peering, 6 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.868589+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.868589+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.868675+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.868675+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? 
192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.868863+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.868863+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: cluster 2026-03-09T15:58:20.875385+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: cluster 2026-03-09T15:58:20.875385+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.875909+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.875909+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.878095+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.878095+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.887581+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.887581+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.890422+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.890422+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.890837+0000 mon.a (mon.0) 1732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:20.890837+0000 mon.a (mon.0) 1732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:21.405742+0000 mon.a (mon.0) 1733 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:22.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:21 vm01 bash[28152]: audit 2026-03-09T15:58:21.405742+0000 mon.a (mon.0) 1733 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: cluster 2026-03-09T15:58:20.705285+0000 mgr.y (mgr.14520) 198 : cluster [DBG] pgmap v218: 355 pgs: 26 creating+peering, 6 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:22.390 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: cluster 2026-03-09T15:58:20.705285+0000 mgr.y (mgr.14520) 198 : cluster [DBG] pgmap v218: 355 pgs: 26 creating+peering, 6 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:22.390 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.868589+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:22.390 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.868589+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:22.390 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.868675+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? 
192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:22.390 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.868675+0000 mon.a (mon.0) 1727 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:22.390 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.868863+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:22.390 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.868863+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:22.390 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: cluster 2026-03-09T15:58:20.875385+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T15:58:22.390 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: cluster 2026-03-09T15:58:20.875385+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T15:58:22.390 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.875909+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:22.390 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.875909+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]: dispatch 2026-03-09T15:58:22.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.878095+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.878095+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.101:0/120706813' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.887581+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.887581+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]: dispatch 2026-03-09T15:58:22.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.890422+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.890422+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.890837+0000 mon.a (mon.0) 1732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:20.890837+0000 mon.a (mon.0) 1732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]: dispatch 2026-03-09T15:58:22.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:21.405742+0000 mon.a (mon.0) 1733 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:22.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:21 vm09 bash[22983]: audit 2026-03-09T15:58:21.405742+0000 mon.a (mon.0) 1733 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: cluster 2026-03-09T15:58:21.869565+0000 mon.a (mon.0) 1734 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: cluster 2026-03-09T15:58:21.869565+0000 mon.a (mon.0) 1734 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.873327+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.873327+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.873785+0000 mon.a (mon.0) 1736 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.873785+0000 mon.a (mon.0) 1736 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.874124+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.874124+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: cluster 2026-03-09T15:58:21.893250+0000 mon.a (mon.0) 1738 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: cluster 2026-03-09T15:58:21.893250+0000 mon.a (mon.0) 1738 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.894497+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.101:0/2440966413' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.894497+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.101:0/2440966413' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.902563+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.902563+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.929444+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.929444+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 
192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.930102+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.930102+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.932029+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.932029+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.932155+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.932155+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.933064+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.933064+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.933697+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:21.933697+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:22.406653+0000 mon.a (mon.0) 1743 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:22.406653+0000 mon.a (mon.0) 1743 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:22.877491+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:22.877491+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:22.877654+0000 mon.a (mon.0) 1745 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:22.877654+0000 mon.a (mon.0) 1745 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:22.883366+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:22.883366+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 
192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: cluster 2026-03-09T15:58:22.887092+0000 mon.a (mon.0) 1746 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: cluster 2026-03-09T15:58:22.887092+0000 mon.a (mon.0) 1746 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:22.901941+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:22 vm01 bash[28152]: audit 2026-03-09T15:58:22.901941+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:58:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:58:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: cluster 2026-03-09T15:58:21.869565+0000 mon.a (mon.0) 1734 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: cluster 2026-03-09T15:58:21.869565+0000 mon.a (mon.0) 1734 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.873327+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.873327+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.873785+0000 mon.a (mon.0) 1736 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.873785+0000 mon.a (mon.0) 1736 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.874124+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.874124+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: cluster 2026-03-09T15:58:21.893250+0000 mon.a (mon.0) 1738 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: cluster 2026-03-09T15:58:21.893250+0000 mon.a (mon.0) 1738 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.894497+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.101:0/2440966413' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.894497+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.101:0/2440966413' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.902563+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.902563+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.929444+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.929444+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.930102+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 
192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.930102+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.932029+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.932029+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.932155+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.932155+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.933064+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.933064+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.933697+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:21.933697+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:22.406653+0000 mon.a (mon.0) 1743 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:22.406653+0000 mon.a (mon.0) 1743 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:22.877491+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:22.877491+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:22.877654+0000 mon.a (mon.0) 1745 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:22.877654+0000 mon.a (mon.0) 1745 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:22.883366+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:22.883366+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: cluster 2026-03-09T15:58:22.887092+0000 mon.a (mon.0) 1746 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: cluster 2026-03-09T15:58:22.887092+0000 mon.a (mon.0) 1746 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:22.901941+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:22 vm01 bash[20728]: audit 2026-03-09T15:58:22.901941+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: cluster 2026-03-09T15:58:21.869565+0000 mon.a (mon.0) 1734 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: cluster 2026-03-09T15:58:21.869565+0000 mon.a (mon.0) 1734 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.873327+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.873327+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? 192.168.123.101:0/3907833177' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm01-59801-36"}]': finished 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.873785+0000 mon.a (mon.0) 1736 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.873785+0000 mon.a (mon.0) 1736 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm01-59602-30"}]': finished 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.874124+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.874124+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-17"}]': finished 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: cluster 2026-03-09T15:58:21.893250+0000 mon.a (mon.0) 1738 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: cluster 2026-03-09T15:58:21.893250+0000 mon.a (mon.0) 1738 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.894497+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.101:0/2440966413' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.894497+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.101:0/2440966413' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.902563+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.902563+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.929444+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.929444+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.930102+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.930102+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 
192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.932029+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.932029+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.932155+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.932155+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.933064+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.933064+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.933697+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:21.933697+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:22.406653+0000 mon.a (mon.0) 1743 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:23.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:22.406653+0000 mon.a (mon.0) 1743 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:23.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:22.877491+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:23.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:22.877491+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm01-59610-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:23.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:22.877654+0000 mon.a (mon.0) 1745 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:23.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:22.877654+0000 mon.a (mon.0) 1745 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm01-59602-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:23.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:22.883366+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:22.883366+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: cluster 2026-03-09T15:58:22.887092+0000 mon.a (mon.0) 1746 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T15:58:23.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: cluster 2026-03-09T15:58:22.887092+0000 mon.a (mon.0) 1746 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T15:58:23.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:22.901941+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:23.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:22 vm09 bash[22983]: audit 2026-03-09T15:58:22.901941+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:24.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:24 vm09 bash[22983]: cluster 2026-03-09T15:58:22.705736+0000 mgr.y (mgr.14520) 199 : cluster [DBG] pgmap v221: 355 pgs: 32 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 257 B/s wr, 1 op/s 2026-03-09T15:58:24.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:24 vm09 bash[22983]: cluster 2026-03-09T15:58:22.705736+0000 mgr.y (mgr.14520) 199 : cluster [DBG] pgmap v221: 355 pgs: 32 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 257 B/s wr, 1 op/s 2026-03-09T15:58:24.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:24 vm09 bash[22983]: audit 2026-03-09T15:58:23.407604+0000 mon.a (mon.0) 1748 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:24.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:24 vm09 bash[22983]: audit 2026-03-09T15:58:23.407604+0000 mon.a (mon.0) 1748 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:24.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:24 vm01 bash[28152]: cluster 2026-03-09T15:58:22.705736+0000 mgr.y (mgr.14520) 199 : cluster [DBG] pgmap v221: 355 pgs: 32 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 257 B/s wr, 1 op/s 2026-03-09T15:58:24.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:24 vm01 bash[28152]: cluster 2026-03-09T15:58:22.705736+0000 mgr.y (mgr.14520) 199 : cluster [DBG] pgmap v221: 355 pgs: 32 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 257 B/s wr, 1 op/s 2026-03-09T15:58:24.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:24 vm01 bash[28152]: audit 2026-03-09T15:58:23.407604+0000 mon.a (mon.0) 1748 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:24.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:24 vm01 bash[28152]: audit 2026-03-09T15:58:23.407604+0000 mon.a (mon.0) 1748 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:24 vm01 bash[20728]: cluster 2026-03-09T15:58:22.705736+0000 mgr.y (mgr.14520) 199 : cluster [DBG] pgmap v221: 355 pgs: 32 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 257 B/s wr, 1 op/s 2026-03-09T15:58:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:24 vm01 bash[20728]: cluster 2026-03-09T15:58:22.705736+0000 mgr.y (mgr.14520) 199 : cluster [DBG] pgmap v221: 355 pgs: 32 unknown, 2 active+clean+snaptrim, 321 active+clean; 458 KiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 257 B/s wr, 1 op/s 2026-03-09T15:58:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:24 vm01 bash[20728]: audit 2026-03-09T15:58:23.407604+0000 mon.a (mon.0) 1748 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:24 vm01 bash[20728]: audit 2026-03-09T15:58:23.407604+0000 mon.a (mon.0) 1748 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: cluster 2026-03-09T15:58:24.089796+0000 mon.a (mon.0) 1749 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: cluster 2026-03-09T15:58:24.089796+0000 mon.a (mon.0) 1749 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:24.091741+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:24.091741+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:24.101213+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:24.101213+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:24.408456+0000 mon.a (mon.0) 1751 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:24.408456+0000 mon.a (mon.0) 1751 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: cluster 2026-03-09T15:58:24.706488+0000 mgr.y (mgr.14520) 200 : cluster [DBG] pgmap v224: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: cluster 2026-03-09T15:58:24.706488+0000 mgr.y (mgr.14520) 200 : cluster [DBG] pgmap v224: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.067293+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.067293+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.067360+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.067360+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: cluster 2026-03-09T15:58:25.072388+0000 mon.a (mon.0) 1754 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: cluster 2026-03-09T15:58:25.072388+0000 mon.a (mon.0) 1754 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.092274+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.092274+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.092523+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.101:0/507077350' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.092523+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.101:0/507077350' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.096808+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.096808+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.097723+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:25 vm09 bash[22983]: audit 2026-03-09T15:58:25.097723+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: cluster 2026-03-09T15:58:24.089796+0000 mon.a (mon.0) 1749 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: cluster 2026-03-09T15:58:24.089796+0000 mon.a (mon.0) 1749 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:24.091741+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:24.091741+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:24.101213+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:24.101213+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:24.408456+0000 mon.a (mon.0) 1751 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:24.408456+0000 mon.a (mon.0) 1751 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: cluster 2026-03-09T15:58:24.706488+0000 mgr.y (mgr.14520) 200 : cluster [DBG] pgmap v224: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: cluster 2026-03-09T15:58:24.706488+0000 mgr.y (mgr.14520) 200 : cluster [DBG] pgmap v224: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.067293+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.067293+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.067360+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.067360+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: cluster 2026-03-09T15:58:25.072388+0000 mon.a (mon.0) 1754 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: cluster 2026-03-09T15:58:25.072388+0000 mon.a (mon.0) 1754 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.092274+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.092274+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.092523+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.101:0/507077350' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.092523+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.101:0/507077350' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.096808+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.096808+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.097723+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:25 vm01 bash[28152]: audit 2026-03-09T15:58:25.097723+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: cluster 2026-03-09T15:58:24.089796+0000 mon.a (mon.0) 1749 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: cluster 2026-03-09T15:58:24.089796+0000 mon.a (mon.0) 1749 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:24.091741+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:24.091741+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:24.101213+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:24.101213+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:24.408456+0000 mon.a (mon.0) 1751 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:24.408456+0000 mon.a (mon.0) 1751 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: cluster 2026-03-09T15:58:24.706488+0000 mgr.y (mgr.14520) 200 : cluster [DBG] pgmap v224: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: cluster 2026-03-09T15:58:24.706488+0000 mgr.y (mgr.14520) 200 : cluster [DBG] pgmap v224: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.067293+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.067293+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm01-59602-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.067360+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.067360+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: cluster 2026-03-09T15:58:25.072388+0000 mon.a (mon.0) 1754 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: cluster 2026-03-09T15:58:25.072388+0000 mon.a (mon.0) 1754 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.092274+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.092274+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.092523+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.101:0/507077350' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.092523+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.101:0/507077350' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.096808+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.096808+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.097723+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:25.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:25 vm01 bash[20728]: audit 2026-03-09T15:58:25.097723+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:26.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:58:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: audit 2026-03-09T15:58:25.409265+0000 mon.a (mon.0) 1757 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: audit 2026-03-09T15:58:25.409265+0000 mon.a (mon.0) 1757 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: audit 2026-03-09T15:58:26.070459+0000 mon.a (mon.0) 1758 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: audit 2026-03-09T15:58:26.070459+0000 mon.a (mon.0) 1758 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: audit 2026-03-09T15:58:26.070646+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: audit 2026-03-09T15:58:26.070646+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: audit 2026-03-09T15:58:26.079495+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: audit 2026-03-09T15:58:26.079495+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: cluster 2026-03-09T15:58:26.081374+0000 mon.a (mon.0) 1760 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: cluster 2026-03-09T15:58:26.081374+0000 mon.a (mon.0) 1760 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: audit 2026-03-09T15:58:26.096113+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:26 vm09 bash[22983]: audit 2026-03-09T15:58:26.096113+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: audit 2026-03-09T15:58:25.409265+0000 mon.a (mon.0) 1757 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: audit 2026-03-09T15:58:25.409265+0000 mon.a (mon.0) 1757 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: audit 2026-03-09T15:58:26.070459+0000 mon.a (mon.0) 1758 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: audit 2026-03-09T15:58:26.070459+0000 mon.a (mon.0) 1758 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: audit 2026-03-09T15:58:26.070646+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: audit 2026-03-09T15:58:26.070646+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: audit 2026-03-09T15:58:26.079495+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: audit 2026-03-09T15:58:26.079495+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: cluster 2026-03-09T15:58:26.081374+0000 mon.a (mon.0) 1760 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: cluster 2026-03-09T15:58:26.081374+0000 mon.a (mon.0) 1760 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: audit 2026-03-09T15:58:26.096113+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:26 vm01 bash[28152]: audit 2026-03-09T15:58:26.096113+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: audit 2026-03-09T15:58:25.409265+0000 mon.a (mon.0) 1757 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: audit 2026-03-09T15:58:25.409265+0000 mon.a (mon.0) 1757 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: audit 2026-03-09T15:58:26.070459+0000 mon.a (mon.0) 1758 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: audit 2026-03-09T15:58:26.070459+0000 mon.a (mon.0) 1758 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: audit 2026-03-09T15:58:26.070646+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: audit 2026-03-09T15:58:26.070646+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm01-59610-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: audit 2026-03-09T15:58:26.079495+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: audit 2026-03-09T15:58:26.079495+0000 mon.c (mon.2) 167 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: cluster 2026-03-09T15:58:26.081374+0000 mon.a (mon.0) 1760 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: cluster 2026-03-09T15:58:26.081374+0000 mon.a (mon.0) 1760 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: audit 2026-03-09T15:58:26.096113+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:26 vm01 bash[20728]: audit 2026-03-09T15:58:26.096113+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.322593+0000 mgr.y (mgr.14520) 201 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.322593+0000 mgr.y (mgr.14520) 201 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.409892+0000 mon.a (mon.0) 1762 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.409892+0000 mon.a (mon.0) 1762 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575817+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 192.168.123.101:0/507077350' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575817+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 
192.168.123.101:0/507077350' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575851+0000 mon.b (mon.1) 158 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575851+0000 mon.b (mon.1) 158 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575863+0000 mon.b (mon.1) 159 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575863+0000 mon.b (mon.1) 159 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575876+0000 mon.b (mon.1) 160 : audit [INF] "var": "eio", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575876+0000 mon.b (mon.1) 160 : audit [INF] "var": "eio", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575892+0000 mon.b (mon.1) 161 : audit [INF] "val": "true" 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575892+0000 mon.b (mon.1) 161 : audit [INF] "val": "true" 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575902+0000 mon.b (mon.1) 162 : audit [INF] }]: dispatch 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.575902+0000 mon.b (mon.1) 162 : audit [INF] }]: dispatch 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.579946+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? ' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.579946+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.580006+0000 mon.a (mon.0) 1764 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.580006+0000 mon.a (mon.0) 1764 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.580019+0000 mon.a (mon.0) 1765 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.580019+0000 mon.a (mon.0) 1765 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.580043+0000 mon.a (mon.0) 1766 : audit [INF] "var": "eio", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.580043+0000 mon.a (mon.0) 1766 : audit [INF] "var": "eio", 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.580052+0000 mon.a (mon.0) 1767 : audit [INF] "val": "true" 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.580052+0000 mon.a (mon.0) 1767 : audit [INF] "val": "true" 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.580062+0000 mon.a (mon.0) 1768 : audit [INF] }]: dispatch 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:26.580062+0000 mon.a (mon.0) 1768 : audit [INF] }]: dispatch 2026-03-09T15:58:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: cluster 2026-03-09T15:58:26.706861+0000 mgr.y (mgr.14520) 202 : cluster [DBG] pgmap v227: 363 pgs: 11 creating+peering, 1 active+clean+snaptrim, 61 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: cluster 2026-03-09T15:58:26.706861+0000 mgr.y (mgr.14520) 202 : cluster [DBG] pgmap v227: 363 pgs: 11 creating+peering, 1 active+clean+snaptrim, 61 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073604+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073604+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073705+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{ 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073705+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd='[{ 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073744+0000 mon.a (mon.0) 1771 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073744+0000 mon.a (mon.0) 1771 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073782+0000 mon.a (mon.0) 1772 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073782+0000 mon.a (mon.0) 1772 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073821+0000 mon.a (mon.0) 1773 : audit [INF] "var": "eio", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073821+0000 mon.a (mon.0) 1773 : audit [INF] "var": "eio", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073860+0000 mon.a (mon.0) 1774 : audit [INF] "val": "true" 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073860+0000 mon.a (mon.0) 1774 : audit [INF] "val": "true" 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073900+0000 mon.a (mon.0) 1775 : audit [INF] }]': finished 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.073900+0000 mon.a (mon.0) 1775 : audit [INF] }]': finished 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: cluster 2026-03-09T15:58:27.078333+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: cluster 2026-03-09T15:58:27.078333+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.079637+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.079637+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.079888+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.079888+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.081150+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.081150+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.084814+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:27 vm01 bash[28152]: audit 2026-03-09T15:58:27.084814+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.322593+0000 mgr.y (mgr.14520) 201 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.322593+0000 mgr.y (mgr.14520) 201 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.409892+0000 mon.a (mon.0) 1762 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.409892+0000 mon.a (mon.0) 1762 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575817+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 192.168.123.101:0/507077350' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575817+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 
192.168.123.101:0/507077350' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575851+0000 mon.b (mon.1) 158 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575851+0000 mon.b (mon.1) 158 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575863+0000 mon.b (mon.1) 159 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575863+0000 mon.b (mon.1) 159 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575876+0000 mon.b (mon.1) 160 : audit [INF] "var": "eio", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575876+0000 mon.b (mon.1) 160 : audit [INF] "var": "eio", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575892+0000 mon.b (mon.1) 161 : audit [INF] "val": "true" 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575892+0000 mon.b (mon.1) 161 : audit [INF] "val": "true" 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575902+0000 mon.b (mon.1) 162 : audit [INF] }]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.575902+0000 mon.b (mon.1) 162 : audit [INF] }]: dispatch 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.579946+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? ' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.579946+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.580006+0000 mon.a (mon.0) 1764 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.580006+0000 mon.a (mon.0) 1764 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.580019+0000 mon.a (mon.0) 1765 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.580019+0000 mon.a (mon.0) 1765 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.580043+0000 mon.a (mon.0) 1766 : audit [INF] "var": "eio", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.580043+0000 mon.a (mon.0) 1766 : audit [INF] "var": "eio", 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.580052+0000 mon.a (mon.0) 1767 : audit [INF] "val": "true" 2026-03-09T15:58:27.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.580052+0000 mon.a (mon.0) 1767 : audit [INF] "val": "true" 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.580062+0000 mon.a (mon.0) 1768 : audit [INF] }]: dispatch 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:26.580062+0000 mon.a (mon.0) 1768 : audit [INF] }]: dispatch 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: cluster 2026-03-09T15:58:26.706861+0000 mgr.y (mgr.14520) 202 : cluster [DBG] pgmap v227: 363 pgs: 11 creating+peering, 1 active+clean+snaptrim, 61 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: cluster 2026-03-09T15:58:26.706861+0000 mgr.y (mgr.14520) 202 : cluster [DBG] pgmap v227: 363 pgs: 11 creating+peering, 1 active+clean+snaptrim, 61 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073604+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073604+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073705+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{ 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073705+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd='[{ 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073744+0000 mon.a (mon.0) 1771 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073744+0000 mon.a (mon.0) 1771 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073782+0000 mon.a (mon.0) 1772 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073782+0000 mon.a (mon.0) 1772 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073821+0000 mon.a (mon.0) 1773 : audit [INF] "var": "eio", 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073821+0000 mon.a (mon.0) 1773 : audit [INF] "var": "eio", 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073860+0000 mon.a (mon.0) 1774 : audit [INF] "val": "true" 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073860+0000 mon.a (mon.0) 1774 : audit [INF] "val": "true" 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073900+0000 mon.a (mon.0) 1775 : audit [INF] }]': finished 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.073900+0000 mon.a (mon.0) 1775 : audit [INF] }]': finished 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: cluster 2026-03-09T15:58:27.078333+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: cluster 2026-03-09T15:58:27.078333+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.079637+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.079637+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.079888+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.079888+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.081150+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.081150+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.084814+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:27.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:27 vm01 bash[20728]: audit 2026-03-09T15:58:27.084814+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.322593+0000 mgr.y (mgr.14520) 201 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.322593+0000 mgr.y (mgr.14520) 201 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.409892+0000 mon.a (mon.0) 1762 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.409892+0000 mon.a (mon.0) 1762 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575817+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 192.168.123.101:0/507077350' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575817+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 
192.168.123.101:0/507077350' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575851+0000 mon.b (mon.1) 158 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575851+0000 mon.b (mon.1) 158 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575863+0000 mon.b (mon.1) 159 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575863+0000 mon.b (mon.1) 159 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575876+0000 mon.b (mon.1) 160 : audit [INF] "var": "eio", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575876+0000 mon.b (mon.1) 160 : audit [INF] "var": "eio", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575892+0000 mon.b (mon.1) 161 : audit [INF] "val": "true" 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575892+0000 mon.b (mon.1) 161 : audit [INF] "val": "true" 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575902+0000 mon.b (mon.1) 162 : audit [INF] }]: dispatch 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.575902+0000 mon.b (mon.1) 162 : audit [INF] }]: dispatch 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.579946+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? ' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.579946+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{ 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.580006+0000 mon.a (mon.0) 1764 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.580006+0000 mon.a (mon.0) 1764 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.580019+0000 mon.a (mon.0) 1765 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.580019+0000 mon.a (mon.0) 1765 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.580043+0000 mon.a (mon.0) 1766 : audit [INF] "var": "eio", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.580043+0000 mon.a (mon.0) 1766 : audit [INF] "var": "eio", 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.580052+0000 mon.a (mon.0) 1767 : audit [INF] "val": "true" 2026-03-09T15:58:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.580052+0000 mon.a (mon.0) 1767 : audit [INF] "val": "true" 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.580062+0000 mon.a (mon.0) 1768 : audit [INF] }]: dispatch 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:26.580062+0000 mon.a (mon.0) 1768 : audit [INF] }]: dispatch 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: cluster 2026-03-09T15:58:26.706861+0000 mgr.y (mgr.14520) 202 : cluster [DBG] pgmap v227: 363 pgs: 11 creating+peering, 1 active+clean+snaptrim, 61 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: cluster 2026-03-09T15:58:26.706861+0000 mgr.y (mgr.14520) 202 : cluster [DBG] pgmap v227: 363 pgs: 11 creating+peering, 1 active+clean+snaptrim, 61 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073604+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073604+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073705+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{ 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073705+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd='[{ 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073744+0000 mon.a (mon.0) 1771 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073744+0000 mon.a (mon.0) 1771 : audit [INF] "prefix": "osd pool set", 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073782+0000 mon.a (mon.0) 1772 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073782+0000 mon.a (mon.0) 1772 : audit [INF] "pool": "PoolEIOFlag_vm01-59610-33", 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073821+0000 mon.a (mon.0) 1773 : audit [INF] "var": "eio", 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073821+0000 mon.a (mon.0) 1773 : audit [INF] "var": "eio", 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073860+0000 mon.a (mon.0) 1774 : audit [INF] "val": "true" 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073860+0000 mon.a (mon.0) 1774 : audit [INF] "val": "true" 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073900+0000 mon.a (mon.0) 1775 : audit [INF] }]': finished 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.073900+0000 mon.a (mon.0) 1775 : audit [INF] }]': finished 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: cluster 2026-03-09T15:58:27.078333+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: cluster 2026-03-09T15:58:27.078333+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.079637+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.079637+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.079888+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.079888+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]: dispatch 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.081150+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.081150+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.084814+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:27.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:27 vm09 bash[22983]: audit 2026-03-09T15:58:27.084814+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: audit 2026-03-09T15:58:27.410569+0000 mon.a (mon.0) 1779 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: audit 2026-03-09T15:58:27.410569+0000 mon.a (mon.0) 1779 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: cluster 2026-03-09T15:58:28.073711+0000 mon.a (mon.0) 1780 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: cluster 2026-03-09T15:58:28.073711+0000 mon.a (mon.0) 1780 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: audit 2026-03-09T15:58:28.076400+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]': finished 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: audit 2026-03-09T15:58:28.076400+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]': finished 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: audit 2026-03-09T15:58:28.076498+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: audit 2026-03-09T15:58:28.076498+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: audit 2026-03-09T15:58:28.076833+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: audit 2026-03-09T15:58:28.076833+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: cluster 2026-03-09T15:58:28.081268+0000 mon.a (mon.0) 1783 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: cluster 2026-03-09T15:58:28.081268+0000 mon.a (mon.0) 1783 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: audit 2026-03-09T15:58:28.089701+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:28 vm01 bash[20728]: audit 2026-03-09T15:58:28.089701+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: audit 2026-03-09T15:58:27.410569+0000 mon.a (mon.0) 1779 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: audit 2026-03-09T15:58:27.410569+0000 mon.a (mon.0) 1779 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: cluster 2026-03-09T15:58:28.073711+0000 mon.a (mon.0) 1780 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: cluster 2026-03-09T15:58:28.073711+0000 mon.a (mon.0) 1780 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: audit 2026-03-09T15:58:28.076400+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]': finished 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: audit 2026-03-09T15:58:28.076400+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]': finished 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: audit 2026-03-09T15:58:28.076498+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: audit 2026-03-09T15:58:28.076498+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:28.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: audit 2026-03-09T15:58:28.076833+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: audit 2026-03-09T15:58:28.076833+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: cluster 2026-03-09T15:58:28.081268+0000 mon.a (mon.0) 1783 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T15:58:28.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: cluster 2026-03-09T15:58:28.081268+0000 mon.a (mon.0) 1783 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T15:58:28.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: audit 2026-03-09T15:58:28.089701+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:28 vm01 bash[28152]: audit 2026-03-09T15:58:28.089701+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: audit 2026-03-09T15:58:27.410569+0000 mon.a (mon.0) 1779 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: audit 2026-03-09T15:58:27.410569+0000 mon.a (mon.0) 1779 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: cluster 2026-03-09T15:58:28.073711+0000 mon.a (mon.0) 1780 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: cluster 2026-03-09T15:58:28.073711+0000 mon.a (mon.0) 1780 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: audit 2026-03-09T15:58:28.076400+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]': finished 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: audit 2026-03-09T15:58:28.076400+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-19", "mode": "writeback"}]': finished 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: audit 2026-03-09T15:58:28.076498+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: audit 2026-03-09T15:58:28.076498+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: audit 2026-03-09T15:58:28.076833+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: audit 2026-03-09T15:58:28.076833+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 
192.168.123.101:0/3449032261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: cluster 2026-03-09T15:58:28.081268+0000 mon.a (mon.0) 1783 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: cluster 2026-03-09T15:58:28.081268+0000 mon.a (mon.0) 1783 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: audit 2026-03-09T15:58:28.089701+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:28 vm09 bash[22983]: audit 2026-03-09T15:58:28.089701+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]: dispatch 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.142641+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.142641+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.143676+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.143676+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.411321+0000 mon.a (mon.0) 1786 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.411321+0000 mon.a (mon.0) 1786 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: cluster 2026-03-09T15:58:28.707374+0000 mgr.y (mgr.14520) 203 : cluster [DBG] pgmap v230: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: cluster 2026-03-09T15:58:28.707374+0000 mgr.y (mgr.14520) 203 : cluster [DBG] pgmap v230: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.802634+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.802634+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.802729+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.802729+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: cluster 2026-03-09T15:58:28.806438+0000 mon.a (mon.0) 1789 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: cluster 2026-03-09T15:58:28.806438+0000 mon.a (mon.0) 1789 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.809766+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.809766+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.814071+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.814071+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.818160+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.101:0/1861027271' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.818160+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.101:0/1861027271' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.822717+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.822717+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.826321+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.826321+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.831881+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.831881+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.831900+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.831900+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.833392+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.833392+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.835771+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.835771+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.839210+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:28.839210+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:29.099041+0000 mon.a (mon.0) 1795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:29 vm01 bash[20728]: audit 2026-03-09T15:58:29.099041+0000 mon.a (mon.0) 1795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.142641+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.142641+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.143676+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.143676+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.411321+0000 mon.a (mon.0) 1786 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.411321+0000 mon.a (mon.0) 1786 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: cluster 2026-03-09T15:58:28.707374+0000 mgr.y (mgr.14520) 203 : cluster [DBG] pgmap v230: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: cluster 2026-03-09T15:58:28.707374+0000 mgr.y (mgr.14520) 203 : cluster [DBG] pgmap v230: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.802634+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.802634+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.802729+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.802729+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: cluster 2026-03-09T15:58:28.806438+0000 mon.a (mon.0) 1789 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: cluster 2026-03-09T15:58:28.806438+0000 mon.a (mon.0) 1789 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.809766+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.809766+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.814071+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.814071+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.818160+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.101:0/1861027271' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.818160+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.101:0/1861027271' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.822717+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.822717+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.826321+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 
192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.826321+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.831881+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.831881+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.831900+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.831900+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.833392+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.833392+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.835771+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.835771+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.839210+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:28.839210+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:29.099041+0000 mon.a (mon.0) 1795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:29.429 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:29 vm01 bash[28152]: audit 2026-03-09T15:58:29.099041+0000 mon.a (mon.0) 1795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.142641+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.142641+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.143676+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.143676+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.411321+0000 mon.a (mon.0) 1786 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.411321+0000 mon.a (mon.0) 1786 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: cluster 2026-03-09T15:58:28.707374+0000 mgr.y (mgr.14520) 203 : cluster [DBG] pgmap v230: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: cluster 2026-03-09T15:58:28.707374+0000 mgr.y (mgr.14520) 203 : cluster [DBG] pgmap v230: 323 pgs: 11 creating+peering, 1 active+clean+snaptrim, 21 unknown, 290 active+clean; 458 KiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.802634+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.802634+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm01-59602-31"}]': finished 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.802729+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.802729+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: cluster 2026-03-09T15:58:28.806438+0000 mon.a (mon.0) 1789 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: cluster 2026-03-09T15:58:28.806438+0000 mon.a (mon.0) 1789 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.809766+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.809766+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.814071+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.814071+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.818160+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.101:0/1861027271' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.818160+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.101:0/1861027271' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.822717+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.822717+0000 mon.a (mon.0) 1791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.826321+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.826321+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.831881+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.831881+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.831900+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.831900+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.833392+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.833392+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.835771+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.835771+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.839210+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:28.839210+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:29.099041+0000 mon.a (mon.0) 1795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:29.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:29 vm09 bash[22983]: audit 2026-03-09T15:58:29.099041+0000 mon.a (mon.0) 1795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: cluster 2026-03-09T15:58:29.145271+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: cluster 2026-03-09T15:58:29.145271+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.412119+0000 mon.a (mon.0) 1797 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.412119+0000 mon.a (mon.0) 1797 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: cluster 2026-03-09T15:58:29.802882+0000 mon.a (mon.0) 1798 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: cluster 2026-03-09T15:58:29.802882+0000 mon.a (mon.0) 1798 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.807057+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.807057+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.807262+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.807262+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.807313+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.807313+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.808288+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.808288+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: cluster 2026-03-09T15:58:29.817620+0000 mon.a (mon.0) 1802 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: cluster 2026-03-09T15:58:29.817620+0000 mon.a (mon.0) 1802 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.824688+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:30 vm01 bash[20728]: audit 2026-03-09T15:58:29.824688+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: cluster 2026-03-09T15:58:29.145271+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: cluster 2026-03-09T15:58:29.145271+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.412119+0000 mon.a (mon.0) 1797 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.412119+0000 mon.a (mon.0) 1797 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: cluster 2026-03-09T15:58:29.802882+0000 mon.a (mon.0) 1798 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: cluster 2026-03-09T15:58:29.802882+0000 mon.a (mon.0) 1798 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.807057+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.807057+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.807262+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:30.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.807262+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:30.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.807313+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:30.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.807313+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:30.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.808288+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:30.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.808288+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:30.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: cluster 2026-03-09T15:58:29.817620+0000 mon.a (mon.0) 1802 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T15:58:30.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: cluster 2026-03-09T15:58:29.817620+0000 mon.a (mon.0) 1802 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T15:58:30.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.824688+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:30.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:30 vm01 bash[28152]: audit 2026-03-09T15:58:29.824688+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: cluster 2026-03-09T15:58:29.145271+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: cluster 2026-03-09T15:58:29.145271+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.412119+0000 mon.a (mon.0) 1797 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.412119+0000 mon.a (mon.0) 1797 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: cluster 2026-03-09T15:58:29.802882+0000 mon.a (mon.0) 1798 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: cluster 2026-03-09T15:58:29.802882+0000 mon.a (mon.0) 1798 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.807057+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.807057+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-19"}]': finished 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.807262+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.807262+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm01-59610-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.807313+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.807313+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm01-59602-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.808288+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 
192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.808288+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: cluster 2026-03-09T15:58:29.817620+0000 mon.a (mon.0) 1802 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: cluster 2026-03-09T15:58:29.817620+0000 mon.a (mon.0) 1802 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.824688+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:30 vm09 bash[22983]: audit 2026-03-09T15:58:29.824688+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:31 vm01 bash[28152]: audit 2026-03-09T15:58:30.412990+0000 mon.a (mon.0) 1804 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:31 vm01 bash[28152]: audit 2026-03-09T15:58:30.412990+0000 mon.a (mon.0) 1804 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:31 vm01 bash[28152]: cluster 2026-03-09T15:58:30.707772+0000 mgr.y (mgr.14520) 204 : cluster [DBG] pgmap v233: 355 pgs: 14 creating+peering, 1 active+clean+snaptrim, 18 unknown, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:31 vm01 bash[28152]: cluster 2026-03-09T15:58:30.707772+0000 mgr.y (mgr.14520) 204 : cluster [DBG] pgmap v233: 355 pgs: 14 creating+peering, 1 active+clean+snaptrim, 18 unknown, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:31 vm01 bash[28152]: cluster 2026-03-09T15:58:30.828276+0000 mon.a (mon.0) 1805 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:31 vm01 bash[28152]: cluster 2026-03-09T15:58:30.828276+0000 mon.a (mon.0) 1805 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:31 vm01 bash[20728]: audit 2026-03-09T15:58:30.412990+0000 mon.a (mon.0) 1804 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:31 vm01 bash[20728]: audit 2026-03-09T15:58:30.412990+0000 mon.a (mon.0) 1804 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:31 vm01 bash[20728]: cluster 2026-03-09T15:58:30.707772+0000 mgr.y (mgr.14520) 204 : cluster [DBG] pgmap v233: 355 pgs: 14 creating+peering, 1 active+clean+snaptrim, 18 unknown, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:31 vm01 bash[20728]: cluster 2026-03-09T15:58:30.707772+0000 mgr.y (mgr.14520) 204 : cluster [DBG] pgmap v233: 355 pgs: 14 creating+peering, 1 active+clean+snaptrim, 18 unknown, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:31 vm01 bash[20728]: cluster 2026-03-09T15:58:30.828276+0000 mon.a (mon.0) 1805 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T15:58:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:31 vm01 bash[20728]: cluster 2026-03-09T15:58:30.828276+0000 mon.a (mon.0) 1805 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T15:58:31.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:31 vm09 bash[22983]: audit 2026-03-09T15:58:30.412990+0000 mon.a (mon.0) 1804 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:31.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:31 vm09 bash[22983]: audit 2026-03-09T15:58:30.412990+0000 mon.a (mon.0) 1804 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:31.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:31 vm09 bash[22983]: cluster 2026-03-09T15:58:30.707772+0000 mgr.y (mgr.14520) 204 : cluster [DBG] pgmap v233: 355 pgs: 14 creating+peering, 1 active+clean+snaptrim, 18 unknown, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:31.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:31 vm09 bash[22983]: cluster 2026-03-09T15:58:30.707772+0000 mgr.y (mgr.14520) 204 : cluster [DBG] pgmap v233: 355 pgs: 14 creating+peering, 1 active+clean+snaptrim, 18 unknown, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:31.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:31 vm09 bash[22983]: cluster 2026-03-09T15:58:30.828276+0000 mon.a (mon.0) 1805 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T15:58:31.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:31 vm09 bash[22983]: cluster 2026-03-09T15:58:30.828276+0000 mon.a (mon.0) 1805 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.413889+0000 mon.a (mon.0) 1806 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.413889+0000 mon.a (mon.0) 1806 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.814045+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.814045+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: cluster 2026-03-09T15:58:31.826014+0000 mon.a (mon.0) 1808 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: cluster 2026-03-09T15:58:31.826014+0000 mon.a (mon.0) 1808 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.840599+0000 mon.c (mon.2) 171 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.840599+0000 mon.c (mon.2) 171 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.840861+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.840861+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.841404+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 192.168.123.101:0/2184660242' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.841404+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 192.168.123.101:0/2184660242' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.841704+0000 mon.a (mon.0) 1810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.841704+0000 mon.a (mon.0) 1810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.892495+0000 mon.a (mon.0) 1811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:32 vm01 bash[20728]: audit 2026-03-09T15:58:31.892495+0000 mon.a (mon.0) 1811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.413889+0000 mon.a (mon.0) 1806 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.413889+0000 mon.a (mon.0) 1806 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.814045+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.814045+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: cluster 2026-03-09T15:58:31.826014+0000 mon.a (mon.0) 1808 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T15:58:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: cluster 2026-03-09T15:58:31.826014+0000 mon.a (mon.0) 1808 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T15:58:32.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.840599+0000 mon.c (mon.2) 171 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.840599+0000 mon.c (mon.2) 171 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.840861+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.840861+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.841404+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 
192.168.123.101:0/2184660242' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.841404+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 192.168.123.101:0/2184660242' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.841704+0000 mon.a (mon.0) 1810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.841704+0000 mon.a (mon.0) 1810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.892495+0000 mon.a (mon.0) 1811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:58:32.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:32 vm01 bash[28152]: audit 2026-03-09T15:58:31.892495+0000 mon.a (mon.0) 1811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.413889+0000 mon.a (mon.0) 1806 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.413889+0000 mon.a (mon.0) 1806 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.814045+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.814045+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm01-59602-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: cluster 2026-03-09T15:58:31.826014+0000 mon.a (mon.0) 1808 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: cluster 2026-03-09T15:58:31.826014+0000 mon.a (mon.0) 1808 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.840599+0000 mon.c (mon.2) 171 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.840599+0000 mon.c (mon.2) 171 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.840861+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.840861+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.841404+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 192.168.123.101:0/2184660242' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.841404+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 192.168.123.101:0/2184660242' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.841704+0000 mon.a (mon.0) 1810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.841704+0000 mon.a (mon.0) 1810 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.892495+0000 mon.a (mon.0) 1811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:58:32.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:32 vm09 bash[22983]: audit 2026-03-09T15:58:31.892495+0000 mon.a (mon.0) 1811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:58:33.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:58:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:58:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.188972+0000 mon.a (mon.0) 1812 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.188972+0000 mon.a (mon.0) 1812 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.196413+0000 mon.a (mon.0) 1813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.196413+0000 mon.a (mon.0) 1813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.414879+0000 mon.a (mon.0) 1814 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.414879+0000 mon.a (mon.0) 1814 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.536900+0000 mon.a (mon.0) 1815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.536900+0000 mon.a (mon.0) 1815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.537610+0000 mon.a (mon.0) 1816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.537610+0000 mon.a (mon.0) 1816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.542860+0000 mon.a (mon.0) 1817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.542860+0000 mon.a (mon.0) 1817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: cluster 2026-03-09T15:58:32.708143+0000 mgr.y (mgr.14520) 205 : cluster [DBG] pgmap v236: 363 pgs: 72 unknown, 1 active+clean+snaptrim, 290 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: cluster 2026-03-09T15:58:32.708143+0000 mgr.y (mgr.14520) 205 : cluster [DBG] pgmap v236: 363 pgs: 72 unknown, 1 active+clean+snaptrim, 290 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.817595+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.817595+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.817628+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.817628+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: cluster 2026-03-09T15:58:32.821103+0000 mon.a (mon.0) 1820 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: cluster 2026-03-09T15:58:32.821103+0000 mon.a (mon.0) 1820 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.895294+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:33.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.895294+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:33.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.895546+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:33.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:33 vm09 bash[22983]: audit 2026-03-09T15:58:32.895546+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.188972+0000 mon.a (mon.0) 1812 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.188972+0000 mon.a (mon.0) 1812 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.196413+0000 mon.a (mon.0) 1813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.196413+0000 mon.a (mon.0) 1813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.414879+0000 mon.a (mon.0) 1814 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.414879+0000 mon.a (mon.0) 1814 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.536900+0000 mon.a (mon.0) 1815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.536900+0000 mon.a (mon.0) 1815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.537610+0000 mon.a (mon.0) 1816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.537610+0000 mon.a (mon.0) 1816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.542860+0000 mon.a (mon.0) 1817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.542860+0000 mon.a (mon.0) 1817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: cluster 2026-03-09T15:58:32.708143+0000 mgr.y (mgr.14520) 205 : cluster [DBG] pgmap v236: 363 pgs: 72 unknown, 1 active+clean+snaptrim, 290 active+clean; 
458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: cluster 2026-03-09T15:58:32.708143+0000 mgr.y (mgr.14520) 205 : cluster [DBG] pgmap v236: 363 pgs: 72 unknown, 1 active+clean+snaptrim, 290 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.817595+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.817595+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.817628+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.817628+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: cluster 2026-03-09T15:58:32.821103+0000 mon.a (mon.0) 1820 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: cluster 2026-03-09T15:58:32.821103+0000 mon.a (mon.0) 1820 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.895294+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.895294+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.895546+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:33 vm01 bash[28152]: audit 2026-03-09T15:58:32.895546+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.188972+0000 mon.a (mon.0) 1812 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.188972+0000 mon.a (mon.0) 1812 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.196413+0000 mon.a (mon.0) 1813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.196413+0000 mon.a (mon.0) 1813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.414879+0000 mon.a (mon.0) 1814 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.414879+0000 mon.a (mon.0) 1814 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.536900+0000 mon.a (mon.0) 1815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.536900+0000 mon.a (mon.0) 1815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.537610+0000 mon.a (mon.0) 1816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.537610+0000 mon.a (mon.0) 1816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.542860+0000 mon.a (mon.0) 1817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.542860+0000 mon.a (mon.0) 1817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: cluster 2026-03-09T15:58:32.708143+0000 mgr.y (mgr.14520) 205 : cluster [DBG] pgmap v236: 363 pgs: 72 unknown, 1 active+clean+snaptrim, 290 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: cluster 2026-03-09T15:58:32.708143+0000 mgr.y (mgr.14520) 205 : cluster [DBG] pgmap v236: 363 pgs: 72 unknown, 1 active+clean+snaptrim, 290 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.817595+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.817595+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.817628+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.817628+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm01-59610-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: cluster 2026-03-09T15:58:32.821103+0000 mon.a (mon.0) 1820 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: cluster 2026-03-09T15:58:32.821103+0000 mon.a (mon.0) 1820 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.895294+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.895294+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.895546+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:33.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:33 vm01 bash[20728]: audit 2026-03-09T15:58:32.895546+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.415691+0000 mon.a (mon.0) 1822 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.415691+0000 mon.a (mon.0) 1822 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.821034+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.821034+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.831471+0000 mon.c (mon.2) 174 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.831471+0000 mon.c (mon.2) 174 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.834004+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.834004+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: cluster 2026-03-09T15:58:33.836543+0000 mon.a (mon.0) 1824 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: cluster 2026-03-09T15:58:33.836543+0000 mon.a (mon.0) 1824 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.845214+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.845214+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.845305+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:34 vm09 bash[22983]: audit 2026-03-09T15:58:33.845305+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.415691+0000 mon.a (mon.0) 1822 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.415691+0000 mon.a (mon.0) 1822 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.821034+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.821034+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.831471+0000 mon.c (mon.2) 174 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.831471+0000 mon.c (mon.2) 174 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.834004+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.834004+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: cluster 2026-03-09T15:58:33.836543+0000 mon.a (mon.0) 1824 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: cluster 2026-03-09T15:58:33.836543+0000 mon.a (mon.0) 1824 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.845214+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.845214+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.845305+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:34 vm01 bash[28152]: audit 2026-03-09T15:58:33.845305+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.415691+0000 mon.a (mon.0) 1822 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.415691+0000 mon.a (mon.0) 1822 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.821034+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.821034+0000 mon.a (mon.0) 1823 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.831471+0000 mon.c (mon.2) 174 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.831471+0000 mon.c (mon.2) 174 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.834004+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 
192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.834004+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: cluster 2026-03-09T15:58:33.836543+0000 mon.a (mon.0) 1824 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: cluster 2026-03-09T15:58:33.836543+0000 mon.a (mon.0) 1824 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.845214+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.845214+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.845305+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:34.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:34 vm01 bash[20728]: audit 2026-03-09T15:58:33.845305+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.416646+0000 mon.a (mon.0) 1827 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.416646+0000 mon.a (mon.0) 1827 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: cluster 2026-03-09T15:58:34.708849+0000 mgr.y (mgr.14520) 206 : cluster [DBG] pgmap v239: 323 pgs: 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: cluster 2026-03-09T15:58:34.708849+0000 mgr.y (mgr.14520) 206 : cluster [DBG] pgmap v239: 323 pgs: 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.709577+0000 mon.a (mon.0) 1828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.709577+0000 mon.a (mon.0) 1828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.824197+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.824197+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.824256+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.824256+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.824281+0000 mon.a (mon.0) 1831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.824281+0000 mon.a (mon.0) 1831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T15:58:35.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: cluster 2026-03-09T15:58:34.828558+0000 mon.a (mon.0) 1832 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: cluster 2026-03-09T15:58:34.828558+0000 mon.a (mon.0) 1832 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.834550+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.834550+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.834638+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.834638+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.838482+0000 mon.a (mon.0) 1833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.838482+0000 mon.a (mon.0) 1833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.839334+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.839334+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.842235+0000 mon.c (mon.2) 176 : audit [INF] from='client.? 192.168.123.101:0/3087417066' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.842235+0000 mon.c (mon.2) 176 : audit [INF] from='client.? 192.168.123.101:0/3087417066' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.844258+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:35.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:35 vm09 bash[22983]: audit 2026-03-09T15:58:34.844258+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.416646+0000 mon.a (mon.0) 1827 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.416646+0000 mon.a (mon.0) 1827 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: cluster 2026-03-09T15:58:34.708849+0000 mgr.y (mgr.14520) 206 : cluster [DBG] pgmap v239: 323 pgs: 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: cluster 2026-03-09T15:58:34.708849+0000 mgr.y (mgr.14520) 206 : cluster [DBG] pgmap v239: 323 pgs: 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.709577+0000 mon.a (mon.0) 1828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.709577+0000 mon.a (mon.0) 1828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.824197+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.824197+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.824256+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.824256+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.824281+0000 mon.a (mon.0) 1831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.824281+0000 mon.a (mon.0) 1831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: cluster 2026-03-09T15:58:34.828558+0000 mon.a (mon.0) 1832 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: cluster 2026-03-09T15:58:34.828558+0000 mon.a (mon.0) 1832 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.834550+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.834550+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.834638+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.834638+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.838482+0000 mon.a (mon.0) 1833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.838482+0000 mon.a (mon.0) 1833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.839334+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.839334+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.842235+0000 mon.c (mon.2) 176 : audit [INF] from='client.? 192.168.123.101:0/3087417066' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.842235+0000 mon.c (mon.2) 176 : audit [INF] from='client.? 192.168.123.101:0/3087417066' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.844258+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:35 vm01 bash[20728]: audit 2026-03-09T15:58:34.844258+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.416646+0000 mon.a (mon.0) 1827 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.416646+0000 mon.a (mon.0) 1827 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: cluster 2026-03-09T15:58:34.708849+0000 mgr.y (mgr.14520) 206 : cluster [DBG] pgmap v239: 323 pgs: 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: cluster 2026-03-09T15:58:34.708849+0000 mgr.y (mgr.14520) 206 : cluster [DBG] pgmap v239: 323 pgs: 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.709577+0000 mon.a (mon.0) 1828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.709577+0000 mon.a (mon.0) 1828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.824197+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.824197+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.824256+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.824256+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.824281+0000 mon.a (mon.0) 1831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.824281+0000 mon.a (mon.0) 1831 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: cluster 2026-03-09T15:58:34.828558+0000 mon.a (mon.0) 1832 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: cluster 2026-03-09T15:58:34.828558+0000 mon.a (mon.0) 1832 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.834550+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.834550+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.101:0/3932077313' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.834638+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.834638+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.838482+0000 mon.a (mon.0) 1833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.838482+0000 mon.a (mon.0) 1833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.839334+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.839334+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.842235+0000 mon.c (mon.2) 176 : audit [INF] from='client.? 192.168.123.101:0/3087417066' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.842235+0000 mon.c (mon.2) 176 : audit [INF] from='client.? 192.168.123.101:0/3087417066' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.844258+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:35.678 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:35 vm01 bash[28152]: audit 2026-03-09T15:58:34.844258+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:58:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: cluster 2026-03-09T15:58:35.192419+0000 mon.a (mon.0) 1836 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: cluster 2026-03-09T15:58:35.192419+0000 mon.a (mon.0) 1836 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.417543+0000 mon.a (mon.0) 1837 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.417543+0000 mon.a (mon.0) 1837 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: cluster 2026-03-09T15:58:35.824644+0000 mon.a (mon.0) 1838 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: cluster 2026-03-09T15:58:35.824644+0000 mon.a (mon.0) 1838 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.833319+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.833319+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.833425+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]': finished 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.833425+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]': finished 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.833461+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.833461+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: cluster 2026-03-09T15:58:35.841477+0000 mon.a (mon.0) 1842 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: cluster 2026-03-09T15:58:35.841477+0000 mon.a (mon.0) 1842 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.868277+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.868277+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 
192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.869085+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.869085+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.869686+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.869686+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.949172+0000 mon.c (mon.2) 177 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.949172+0000 mon.c (mon.2) 177 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.949884+0000 mon.a (mon.0) 1846 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:36.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:36 vm09 bash[22983]: audit 2026-03-09T15:58:35.949884+0000 mon.a (mon.0) 1846 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: cluster 2026-03-09T15:58:35.192419+0000 mon.a (mon.0) 1836 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: cluster 2026-03-09T15:58:35.192419+0000 mon.a (mon.0) 1836 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.417543+0000 mon.a (mon.0) 1837 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.417543+0000 mon.a (mon.0) 1837 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: cluster 2026-03-09T15:58:35.824644+0000 mon.a (mon.0) 1838 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: cluster 2026-03-09T15:58:35.824644+0000 mon.a (mon.0) 1838 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.833319+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.833319+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.833425+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]': finished 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.833425+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]': finished 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.833461+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.833461+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: cluster 2026-03-09T15:58:35.841477+0000 mon.a (mon.0) 1842 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: cluster 2026-03-09T15:58:35.841477+0000 mon.a (mon.0) 1842 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.868277+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 
192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.868277+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.869085+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.869085+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.869686+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.869686+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.949172+0000 mon.c (mon.2) 177 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.949172+0000 mon.c (mon.2) 177 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.949884+0000 mon.a (mon.0) 1846 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:36 vm01 bash[28152]: audit 2026-03-09T15:58:35.949884+0000 mon.a (mon.0) 1846 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: cluster 2026-03-09T15:58:35.192419+0000 mon.a (mon.0) 1836 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: cluster 2026-03-09T15:58:35.192419+0000 mon.a (mon.0) 1836 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.417543+0000 mon.a (mon.0) 1837 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:36.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.417543+0000 mon.a (mon.0) 1837 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: cluster 2026-03-09T15:58:35.824644+0000 mon.a (mon.0) 1838 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: cluster 2026-03-09T15:58:35.824644+0000 mon.a (mon.0) 1838 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.833319+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.833319+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm01-59602-32"}]': finished 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.833425+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]': finished 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.833425+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-21", "mode": "writeback"}]': finished 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.833461+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.833461+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: cluster 2026-03-09T15:58:35.841477+0000 mon.a (mon.0) 1842 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: cluster 2026-03-09T15:58:35.841477+0000 mon.a (mon.0) 1842 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.868277+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.868277+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.869085+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.869085+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.869686+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.869686+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.949172+0000 mon.c (mon.2) 177 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.949172+0000 mon.c (mon.2) 177 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.949884+0000 mon.a (mon.0) 1846 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:36.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:36 vm01 bash[20728]: audit 2026-03-09T15:58:35.949884+0000 mon.a (mon.0) 1846 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.333229+0000 mgr.y (mgr.14520) 207 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.333229+0000 mgr.y (mgr.14520) 207 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.418577+0000 mon.a (mon.0) 1847 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.418577+0000 mon.a (mon.0) 1847 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: cluster 2026-03-09T15:58:36.709280+0000 mgr.y (mgr.14520) 208 : cluster [DBG] pgmap v242: 355 pgs: 32 unknown, 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: cluster 2026-03-09T15:58:36.709280+0000 mgr.y (mgr.14520) 208 : cluster [DBG] pgmap v242: 355 pgs: 32 unknown, 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.836589+0000 mon.a (mon.0) 1848 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.836589+0000 mon.a (mon.0) 1848 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.836637+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.836637+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: cluster 2026-03-09T15:58:36.840764+0000 mon.a (mon.0) 1850 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: cluster 2026-03-09T15:58:36.840764+0000 mon.a (mon.0) 1850 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.843337+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.843337+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.848929+0000 mon.c (mon.2) 178 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.848929+0000 mon.c (mon.2) 178 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.851695+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.101:0/4230419230' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.851695+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.101:0/4230419230' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:37.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.853059+0000 mon.a (mon.0) 1852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.853059+0000 mon.a (mon.0) 1852 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.853181+0000 mon.a (mon.0) 1853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:37.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:37 vm09 bash[22983]: audit 2026-03-09T15:58:36.853181+0000 mon.a (mon.0) 1853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.333229+0000 mgr.y (mgr.14520) 207 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.333229+0000 mgr.y (mgr.14520) 207 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.418577+0000 mon.a (mon.0) 1847 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.418577+0000 mon.a (mon.0) 1847 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: cluster 2026-03-09T15:58:36.709280+0000 mgr.y (mgr.14520) 208 : cluster [DBG] pgmap v242: 355 pgs: 32 unknown, 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: cluster 2026-03-09T15:58:36.709280+0000 mgr.y (mgr.14520) 208 : cluster [DBG] pgmap v242: 355 pgs: 32 unknown, 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.836589+0000 mon.a (mon.0) 1848 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.836589+0000 mon.a (mon.0) 1848 : audit [INF] from='client.? 
192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.836637+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.836637+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: cluster 2026-03-09T15:58:36.840764+0000 mon.a (mon.0) 1850 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: cluster 2026-03-09T15:58:36.840764+0000 mon.a (mon.0) 1850 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.843337+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.843337+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.848929+0000 mon.c (mon.2) 178 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.848929+0000 mon.c (mon.2) 178 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.851695+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.101:0/4230419230' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.851695+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 
192.168.123.101:0/4230419230' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.853059+0000 mon.a (mon.0) 1852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.853059+0000 mon.a (mon.0) 1852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.853181+0000 mon.a (mon.0) 1853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:37.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:37 vm01 bash[28152]: audit 2026-03-09T15:58:36.853181+0000 mon.a (mon.0) 1853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.333229+0000 mgr.y (mgr.14520) 207 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.333229+0000 mgr.y (mgr.14520) 207 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.418577+0000 mon.a (mon.0) 1847 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.418577+0000 mon.a (mon.0) 1847 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: cluster 2026-03-09T15:58:36.709280+0000 mgr.y (mgr.14520) 208 : cluster [DBG] pgmap v242: 355 pgs: 32 unknown, 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: cluster 2026-03-09T15:58:36.709280+0000 mgr.y (mgr.14520) 208 : cluster [DBG] pgmap v242: 355 pgs: 32 unknown, 1 active+clean+snaptrim, 322 active+clean; 458 KiB data, 757 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.836589+0000 mon.a (mon.0) 1848 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.836589+0000 mon.a (mon.0) 1848 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm01-59602-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.836637+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.836637+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: cluster 2026-03-09T15:58:36.840764+0000 mon.a (mon.0) 1850 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: cluster 2026-03-09T15:58:36.840764+0000 mon.a (mon.0) 1850 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.843337+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.843337+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? 
192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.848929+0000 mon.c (mon.2) 178 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.848929+0000 mon.c (mon.2) 178 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.851695+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.101:0/4230419230' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.851695+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.101:0/4230419230' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.853059+0000 mon.a (mon.0) 1852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.853059+0000 mon.a (mon.0) 1852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.853181+0000 mon.a (mon.0) 1853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:37.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:37 vm01 bash[20728]: audit 2026-03-09T15:58:36.853181+0000 mon.a (mon.0) 1853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:38 vm09 bash[22983]: audit 2026-03-09T15:58:37.419667+0000 mon.a (mon.0) 1854 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:38 vm09 bash[22983]: audit 2026-03-09T15:58:37.419667+0000 mon.a (mon.0) 1854 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:38 vm09 bash[22983]: cluster 2026-03-09T15:58:37.837026+0000 mon.a (mon.0) 1855 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:38 vm09 bash[22983]: cluster 2026-03-09T15:58:37.837026+0000 mon.a (mon.0) 1855 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:38 vm09 bash[22983]: audit 2026-03-09T15:58:37.840372+0000 mon.a (mon.0) 1856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:38 vm09 bash[22983]: audit 2026-03-09T15:58:37.840372+0000 mon.a (mon.0) 1856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:38 vm09 bash[22983]: audit 2026-03-09T15:58:37.840502+0000 mon.a (mon.0) 1857 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:38 vm09 bash[22983]: audit 2026-03-09T15:58:37.840502+0000 mon.a (mon.0) 1857 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:38 vm09 bash[22983]: cluster 2026-03-09T15:58:37.863127+0000 mon.a (mon.0) 1858 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T15:58:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:38 vm09 bash[22983]: cluster 2026-03-09T15:58:37.863127+0000 mon.a (mon.0) 1858 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:38 vm01 bash[28152]: audit 2026-03-09T15:58:37.419667+0000 mon.a (mon.0) 1854 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:38 vm01 bash[28152]: audit 2026-03-09T15:58:37.419667+0000 mon.a (mon.0) 1854 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:38 vm01 bash[28152]: cluster 2026-03-09T15:58:37.837026+0000 mon.a (mon.0) 1855 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:38 vm01 bash[28152]: cluster 2026-03-09T15:58:37.837026+0000 mon.a (mon.0) 1855 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:38 vm01 bash[28152]: audit 2026-03-09T15:58:37.840372+0000 mon.a (mon.0) 1856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:38 vm01 bash[28152]: audit 2026-03-09T15:58:37.840372+0000 mon.a (mon.0) 1856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:38 vm01 bash[28152]: audit 2026-03-09T15:58:37.840502+0000 mon.a (mon.0) 1857 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:38 vm01 bash[28152]: audit 2026-03-09T15:58:37.840502+0000 mon.a (mon.0) 1857 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:38 vm01 bash[28152]: cluster 2026-03-09T15:58:37.863127+0000 mon.a (mon.0) 1858 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:38 vm01 bash[28152]: cluster 2026-03-09T15:58:37.863127+0000 mon.a (mon.0) 1858 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:38 vm01 bash[20728]: audit 2026-03-09T15:58:37.419667+0000 mon.a (mon.0) 1854 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:38 vm01 bash[20728]: audit 2026-03-09T15:58:37.419667+0000 mon.a (mon.0) 1854 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:38 vm01 bash[20728]: cluster 2026-03-09T15:58:37.837026+0000 mon.a (mon.0) 1855 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:38 vm01 bash[20728]: cluster 2026-03-09T15:58:37.837026+0000 mon.a (mon.0) 1855 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:38 vm01 bash[20728]: audit 2026-03-09T15:58:37.840372+0000 mon.a (mon.0) 1856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:38 vm01 bash[20728]: audit 2026-03-09T15:58:37.840372+0000 mon.a (mon.0) 1856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-21"}]': finished 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:38 vm01 bash[20728]: audit 2026-03-09T15:58:37.840502+0000 mon.a (mon.0) 1857 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:38 vm01 bash[20728]: audit 2026-03-09T15:58:37.840502+0000 mon.a (mon.0) 1857 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:38 vm01 bash[20728]: cluster 2026-03-09T15:58:37.863127+0000 mon.a (mon.0) 1858 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T15:58:38.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:38 vm01 bash[20728]: cluster 2026-03-09T15:58:37.863127+0000 mon.a (mon.0) 1858 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.420499+0000 mon.a (mon.0) 1859 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.420499+0000 mon.a (mon.0) 1859 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: cluster 2026-03-09T15:58:38.709839+0000 mgr.y (mgr.14520) 209 : cluster [DBG] pgmap v245: 387 pgs: 59 unknown, 1 active+clean+snaptrim, 327 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: cluster 2026-03-09T15:58:38.709839+0000 mgr.y (mgr.14520) 209 : cluster [DBG] pgmap v245: 387 pgs: 59 unknown, 1 active+clean+snaptrim, 327 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.710679+0000 mon.a (mon.0) 1860 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.710679+0000 mon.a (mon.0) 1860 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.805370+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.805370+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.805428+0000 mon.a (mon.0) 1862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.805428+0000 mon.a (mon.0) 1862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: cluster 2026-03-09T15:58:38.831566+0000 mon.a (mon.0) 1863 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: cluster 2026-03-09T15:58:38.831566+0000 mon.a (mon.0) 1863 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.831641+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 
192.168.123.101:0/1984617455' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.831641+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.101:0/1984617455' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.838756+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:39 vm09 bash[22983]: audit 2026-03-09T15:58:38.838756+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.420499+0000 mon.a (mon.0) 1859 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.420499+0000 mon.a (mon.0) 1859 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: cluster 2026-03-09T15:58:38.709839+0000 mgr.y (mgr.14520) 209 : cluster [DBG] pgmap v245: 387 pgs: 59 unknown, 1 active+clean+snaptrim, 327 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: cluster 2026-03-09T15:58:38.709839+0000 mgr.y (mgr.14520) 209 : cluster [DBG] pgmap v245: 387 pgs: 59 unknown, 1 active+clean+snaptrim, 327 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.710679+0000 mon.a (mon.0) 1860 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.710679+0000 mon.a (mon.0) 1860 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.805370+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 
192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.805370+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.805428+0000 mon.a (mon.0) 1862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.805428+0000 mon.a (mon.0) 1862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: cluster 2026-03-09T15:58:38.831566+0000 mon.a (mon.0) 1863 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: cluster 2026-03-09T15:58:38.831566+0000 mon.a (mon.0) 1863 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.831641+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.101:0/1984617455' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.831641+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.101:0/1984617455' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.838756+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:39 vm01 bash[28152]: audit 2026-03-09T15:58:38.838756+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.420499+0000 mon.a (mon.0) 1859 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.420499+0000 mon.a (mon.0) 1859 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: cluster 2026-03-09T15:58:38.709839+0000 mgr.y (mgr.14520) 209 : cluster [DBG] pgmap v245: 387 pgs: 59 unknown, 1 active+clean+snaptrim, 327 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: cluster 2026-03-09T15:58:38.709839+0000 mgr.y (mgr.14520) 209 : cluster [DBG] pgmap v245: 387 pgs: 59 unknown, 1 active+clean+snaptrim, 327 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.710679+0000 mon.a (mon.0) 1860 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.710679+0000 mon.a (mon.0) 1860 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.805370+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.805370+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 
192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm01-59602-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.805428+0000 mon.a (mon.0) 1862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.805428+0000 mon.a (mon.0) 1862 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pg_num_actual", "val": "30"}]': finished 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: cluster 2026-03-09T15:58:38.831566+0000 mon.a (mon.0) 1863 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: cluster 2026-03-09T15:58:38.831566+0000 mon.a (mon.0) 1863 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.831641+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.101:0/1984617455' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.831641+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.101:0/1984617455' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.838756+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:39.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:39 vm01 bash[20728]: audit 2026-03-09T15:58:38.838756+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:40 vm09 bash[22983]: audit 2026-03-09T15:58:39.421279+0000 mon.a (mon.0) 1865 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:40 vm09 bash[22983]: audit 2026-03-09T15:58:39.421279+0000 mon.a (mon.0) 1865 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:40 vm09 bash[22983]: audit 2026-03-09T15:58:39.808935+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:40 vm09 bash[22983]: audit 2026-03-09T15:58:39.808935+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:40 vm09 bash[22983]: cluster 2026-03-09T15:58:39.837216+0000 mon.a (mon.0) 1867 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T15:58:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:40 vm09 bash[22983]: cluster 2026-03-09T15:58:39.837216+0000 mon.a (mon.0) 1867 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T15:58:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:40 vm09 bash[22983]: audit 2026-03-09T15:58:39.840011+0000 mon.c (mon.2) 180 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:40 vm09 bash[22983]: audit 2026-03-09T15:58:39.840011+0000 mon.c (mon.2) 180 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:40 vm09 bash[22983]: audit 2026-03-09T15:58:39.849270+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:40 vm09 bash[22983]: audit 2026-03-09T15:58:39.849270+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:40 vm01 bash[28152]: audit 2026-03-09T15:58:39.421279+0000 mon.a (mon.0) 1865 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:40 vm01 bash[28152]: audit 2026-03-09T15:58:39.421279+0000 mon.a (mon.0) 1865 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:40 vm01 bash[28152]: audit 2026-03-09T15:58:39.808935+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:40 vm01 bash[28152]: audit 2026-03-09T15:58:39.808935+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:40 vm01 bash[28152]: cluster 2026-03-09T15:58:39.837216+0000 mon.a (mon.0) 1867 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:40 vm01 bash[28152]: cluster 2026-03-09T15:58:39.837216+0000 mon.a (mon.0) 1867 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:40 vm01 bash[28152]: audit 2026-03-09T15:58:39.840011+0000 mon.c (mon.2) 180 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:40 vm01 bash[28152]: audit 2026-03-09T15:58:39.840011+0000 mon.c (mon.2) 180 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:40 vm01 bash[28152]: audit 2026-03-09T15:58:39.849270+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:40 vm01 bash[28152]: audit 2026-03-09T15:58:39.849270+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:40 vm01 bash[20728]: audit 2026-03-09T15:58:39.421279+0000 mon.a (mon.0) 1865 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:40 vm01 bash[20728]: audit 2026-03-09T15:58:39.421279+0000 mon.a (mon.0) 1865 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:40 vm01 bash[20728]: audit 2026-03-09T15:58:39.808935+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:40 vm01 bash[20728]: audit 2026-03-09T15:58:39.808935+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm01-59610-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:40 vm01 bash[20728]: cluster 2026-03-09T15:58:39.837216+0000 mon.a (mon.0) 1867 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:40 vm01 bash[20728]: cluster 2026-03-09T15:58:39.837216+0000 mon.a (mon.0) 1867 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:40 vm01 bash[20728]: audit 2026-03-09T15:58:39.840011+0000 mon.c (mon.2) 180 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:40 vm01 bash[20728]: audit 2026-03-09T15:58:39.840011+0000 mon.c (mon.2) 180 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:40 vm01 bash[20728]: audit 2026-03-09T15:58:39.849270+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:40.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:40 vm01 bash[20728]: audit 2026-03-09T15:58:39.849270+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:41 vm09 bash[22983]: audit 2026-03-09T15:58:40.422103+0000 mon.a (mon.0) 1869 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:41 vm09 bash[22983]: audit 2026-03-09T15:58:40.422103+0000 mon.a (mon.0) 1869 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:41 vm09 bash[22983]: cluster 2026-03-09T15:58:40.710365+0000 mgr.y (mgr.14520) 210 : cluster [DBG] pgmap v248: 427 pgs: 22 creating+peering, 1 active+clean+snaptrim, 50 unknown, 354 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:41 vm09 bash[22983]: cluster 2026-03-09T15:58:40.710365+0000 mgr.y (mgr.14520) 210 : cluster [DBG] pgmap v248: 427 pgs: 22 creating+peering, 1 active+clean+snaptrim, 50 unknown, 354 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:41 vm09 bash[22983]: audit 2026-03-09T15:58:40.884148+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:41 vm09 bash[22983]: audit 2026-03-09T15:58:40.884148+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:41 vm09 bash[22983]: cluster 2026-03-09T15:58:40.902468+0000 mon.a (mon.0) 1871 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T15:58:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:41 vm09 bash[22983]: cluster 2026-03-09T15:58:40.902468+0000 mon.a (mon.0) 1871 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T15:58:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:41 vm09 bash[22983]: audit 2026-03-09T15:58:40.907355+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:41 vm09 bash[22983]: audit 2026-03-09T15:58:40.907355+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:41 vm01 bash[28152]: audit 2026-03-09T15:58:40.422103+0000 mon.a (mon.0) 1869 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:41 vm01 bash[28152]: audit 2026-03-09T15:58:40.422103+0000 mon.a (mon.0) 1869 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:41 vm01 bash[28152]: cluster 2026-03-09T15:58:40.710365+0000 mgr.y (mgr.14520) 210 : cluster [DBG] pgmap v248: 427 pgs: 22 creating+peering, 1 active+clean+snaptrim, 50 unknown, 354 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:41 vm01 bash[28152]: cluster 2026-03-09T15:58:40.710365+0000 mgr.y (mgr.14520) 210 : cluster [DBG] pgmap v248: 427 pgs: 22 creating+peering, 1 active+clean+snaptrim, 50 unknown, 354 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:41 vm01 bash[28152]: audit 2026-03-09T15:58:40.884148+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:41 vm01 bash[28152]: audit 2026-03-09T15:58:40.884148+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:41 vm01 bash[28152]: cluster 2026-03-09T15:58:40.902468+0000 mon.a (mon.0) 1871 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:41 vm01 bash[28152]: cluster 2026-03-09T15:58:40.902468+0000 mon.a (mon.0) 1871 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:41 vm01 bash[28152]: audit 2026-03-09T15:58:40.907355+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:41 vm01 bash[28152]: audit 2026-03-09T15:58:40.907355+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:41 vm01 bash[20728]: audit 2026-03-09T15:58:40.422103+0000 mon.a (mon.0) 1869 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:41 vm01 bash[20728]: audit 2026-03-09T15:58:40.422103+0000 mon.a (mon.0) 1869 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:41.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:41 vm01 bash[20728]: cluster 2026-03-09T15:58:40.710365+0000 mgr.y (mgr.14520) 210 : cluster [DBG] pgmap v248: 427 pgs: 22 creating+peering, 1 active+clean+snaptrim, 50 unknown, 354 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:41.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:41 vm01 bash[20728]: cluster 2026-03-09T15:58:40.710365+0000 mgr.y (mgr.14520) 210 : cluster [DBG] pgmap v248: 427 pgs: 22 creating+peering, 1 active+clean+snaptrim, 50 unknown, 354 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:41.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:41 vm01 bash[20728]: audit 2026-03-09T15:58:40.884148+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:41.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:41 vm01 bash[20728]: audit 2026-03-09T15:58:40.884148+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:41.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:41 vm01 bash[20728]: cluster 2026-03-09T15:58:40.902468+0000 mon.a (mon.0) 1871 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T15:58:41.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:41 vm01 bash[20728]: cluster 2026-03-09T15:58:40.902468+0000 mon.a (mon.0) 1871 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T15:58:41.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:41 vm01 bash[20728]: audit 2026-03-09T15:58:40.907355+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:41.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:41 vm01 bash[20728]: audit 2026-03-09T15:58:40.907355+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: cluster 2026-03-09T15:58:41.284199+0000 mon.a (mon.0) 1873 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: cluster 2026-03-09T15:58:41.284199+0000 mon.a (mon.0) 1873 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: audit 2026-03-09T15:58:41.422995+0000 mon.a (mon.0) 1874 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: audit 2026-03-09T15:58:41.422995+0000 mon.a (mon.0) 1874 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: audit 2026-03-09T15:58:41.887758+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: audit 2026-03-09T15:58:41.887758+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? 
192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: cluster 2026-03-09T15:58:41.903116+0000 mon.a (mon.0) 1876 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: cluster 2026-03-09T15:58:41.903116+0000 mon.a (mon.0) 1876 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: audit 2026-03-09T15:58:41.911087+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: audit 2026-03-09T15:58:41.911087+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: audit 2026-03-09T15:58:41.949116+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: audit 2026-03-09T15:58:41.949116+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: audit 2026-03-09T15:58:41.949666+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:42 vm09 bash[22983]: audit 2026-03-09T15:58:41.949666+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: cluster 2026-03-09T15:58:41.284199+0000 mon.a (mon.0) 1873 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: cluster 2026-03-09T15:58:41.284199+0000 mon.a (mon.0) 1873 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: audit 2026-03-09T15:58:41.422995+0000 mon.a (mon.0) 1874 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: audit 2026-03-09T15:58:41.422995+0000 mon.a (mon.0) 1874 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: audit 2026-03-09T15:58:41.887758+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: audit 2026-03-09T15:58:41.887758+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: cluster 2026-03-09T15:58:41.903116+0000 mon.a (mon.0) 1876 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: cluster 2026-03-09T15:58:41.903116+0000 mon.a (mon.0) 1876 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: audit 2026-03-09T15:58:41.911087+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: audit 2026-03-09T15:58:41.911087+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: audit 2026-03-09T15:58:41.949116+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: audit 2026-03-09T15:58:41.949116+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: audit 2026-03-09T15:58:41.949666+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:42 vm01 bash[28152]: audit 2026-03-09T15:58:41.949666+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: cluster 2026-03-09T15:58:41.284199+0000 mon.a (mon.0) 1873 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: cluster 2026-03-09T15:58:41.284199+0000 mon.a (mon.0) 1873 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: audit 2026-03-09T15:58:41.422995+0000 mon.a (mon.0) 1874 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: audit 2026-03-09T15:58:41.422995+0000 mon.a (mon.0) 1874 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: audit 2026-03-09T15:58:41.887758+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: audit 2026-03-09T15:58:41.887758+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: cluster 2026-03-09T15:58:41.903116+0000 mon.a (mon.0) 1876 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: cluster 2026-03-09T15:58:41.903116+0000 mon.a (mon.0) 1876 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: audit 2026-03-09T15:58:41.911087+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: audit 2026-03-09T15:58:41.911087+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: audit 2026-03-09T15:58:41.949116+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: audit 2026-03-09T15:58:41.949116+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: audit 2026-03-09T15:58:41.949666+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:42 vm01 bash[20728]: audit 2026-03-09T15:58:41.949666+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:43.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:58:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:58:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.423694+0000 mon.a (mon.0) 1879 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.423694+0000 mon.a (mon.0) 1879 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: cluster 2026-03-09T15:58:42.710712+0000 mgr.y (mgr.14520) 211 : cluster [DBG] pgmap v251: 354 pgs: 1 active+clean+snaptrim, 32 unknown, 321 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: cluster 2026-03-09T15:58:42.710712+0000 mgr.y (mgr.14520) 211 : cluster [DBG] pgmap v251: 354 pgs: 1 active+clean+snaptrim, 32 unknown, 321 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.891884+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.891884+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? 
192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.891943+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.891943+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: cluster 2026-03-09T15:58:42.895813+0000 mon.a (mon.0) 1882 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: cluster 2026-03-09T15:58:42.895813+0000 mon.a (mon.0) 1882 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.898797+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.898797+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.909994+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.909994+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.920773+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.920773+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 
192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.930572+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.930572+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.931054+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.931054+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.931339+0000 mon.a (mon.0) 1885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.931339+0000 mon.a (mon.0) 1885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.931767+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.931767+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.932016+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:43 vm09 bash[22983]: audit 2026-03-09T15:58:42.932016+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.423694+0000 mon.a (mon.0) 1879 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.423694+0000 mon.a (mon.0) 1879 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: cluster 2026-03-09T15:58:42.710712+0000 mgr.y (mgr.14520) 211 : cluster [DBG] pgmap v251: 354 pgs: 1 active+clean+snaptrim, 32 unknown, 321 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: cluster 2026-03-09T15:58:42.710712+0000 mgr.y (mgr.14520) 211 : cluster [DBG] pgmap v251: 354 pgs: 1 active+clean+snaptrim, 32 unknown, 321 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.891884+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.891884+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.891943+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.891943+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: cluster 2026-03-09T15:58:42.895813+0000 mon.a (mon.0) 1882 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: cluster 2026-03-09T15:58:42.895813+0000 mon.a (mon.0) 1882 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.898797+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.898797+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.909994+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.909994+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.920773+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.920773+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.930572+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.930572+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.931054+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.931054+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.931339+0000 mon.a (mon.0) 1885 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.931339+0000 mon.a (mon.0) 1885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.931767+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.931767+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.932016+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:43 vm01 bash[28152]: audit 2026-03-09T15:58:42.932016+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.423694+0000 mon.a (mon.0) 1879 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.423694+0000 mon.a (mon.0) 1879 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: cluster 2026-03-09T15:58:42.710712+0000 mgr.y (mgr.14520) 211 : cluster [DBG] pgmap v251: 354 pgs: 1 active+clean+snaptrim, 32 unknown, 321 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: cluster 2026-03-09T15:58:42.710712+0000 mgr.y (mgr.14520) 211 : cluster [DBG] pgmap v251: 354 pgs: 1 active+clean+snaptrim, 32 unknown, 321 active+clean; 458 KiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.891884+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? 
192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.891884+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? 192.168.123.101:0/3426751391' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm01-59602-33"}]': finished 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.891943+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.891943+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: cluster 2026-03-09T15:58:42.895813+0000 mon.a (mon.0) 1882 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: cluster 2026-03-09T15:58:42.895813+0000 mon.a (mon.0) 1882 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.898797+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.898797+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.909994+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.909994+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.920773+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 
192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.920773+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.930572+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.930572+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.931054+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.931054+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.931339+0000 mon.a (mon.0) 1885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.931339+0000 mon.a (mon.0) 1885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.931767+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.931767+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.932016+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:43.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:43 vm01 bash[20728]: audit 2026-03-09T15:58:42.932016+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.424471+0000 mon.a (mon.0) 1887 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.424471+0000 mon.a (mon.0) 1887 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.895163+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.895163+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.895280+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.895280+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: cluster 2026-03-09T15:58:43.899181+0000 mon.a (mon.0) 1890 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: cluster 2026-03-09T15:58:43.899181+0000 mon.a (mon.0) 1890 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.914338+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.914338+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.914681+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.914681+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.918467+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.101:0/3140582967' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.918467+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.101:0/3140582967' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.919053+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.919053+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.919173+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.919173+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.919535+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:43.919535+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:44.110344+0000 mon.a (mon.0) 1894 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:44.110344+0000 mon.a (mon.0) 1894 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:44.111147+0000 mon.a (mon.0) 1895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:44 vm09 bash[22983]: audit 2026-03-09T15:58:44.111147+0000 mon.a (mon.0) 1895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.424471+0000 mon.a (mon.0) 1887 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.424471+0000 mon.a (mon.0) 1887 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.895163+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.895163+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.895280+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.895280+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: cluster 2026-03-09T15:58:43.899181+0000 mon.a (mon.0) 1890 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: cluster 2026-03-09T15:58:43.899181+0000 mon.a (mon.0) 1890 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.914338+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.914338+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.914681+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.914681+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.918467+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.101:0/3140582967' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.918467+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.101:0/3140582967' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.919053+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.919053+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.919173+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.919173+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.919535+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:43.919535+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:44.110344+0000 mon.a (mon.0) 1894 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:44.110344+0000 mon.a (mon.0) 1894 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:44.111147+0000 mon.a (mon.0) 1895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:44 vm01 bash[28152]: audit 2026-03-09T15:58:44.111147+0000 mon.a (mon.0) 1895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.424471+0000 mon.a (mon.0) 1887 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.424471+0000 mon.a (mon.0) 1887 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.895163+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.895163+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.895280+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.895280+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm01-59602-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: cluster 2026-03-09T15:58:43.899181+0000 mon.a (mon.0) 1890 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: cluster 2026-03-09T15:58:43.899181+0000 mon.a (mon.0) 1890 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.914338+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.914338+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.914681+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.914681+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.918467+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 
192.168.123.101:0/3140582967' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.918467+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.101:0/3140582967' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.919053+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.919053+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.919173+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.919173+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.919535+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:43.919535+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:44.110344+0000 mon.a (mon.0) 1894 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:44.110344+0000 mon.a (mon.0) 1894 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:44.111147+0000 mon.a (mon.0) 1895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:44.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:44 vm01 bash[20728]: audit 2026-03-09T15:58:44.111147+0000 mon.a (mon.0) 1895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: audit 2026-03-09T15:58:44.425530+0000 mon.a (mon.0) 1896 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: audit 2026-03-09T15:58:44.425530+0000 mon.a (mon.0) 1896 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: cluster 2026-03-09T15:58:44.711387+0000 mgr.y (mgr.14520) 212 : cluster [DBG] pgmap v254: 354 pgs: 15 creating+peering, 17 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: cluster 2026-03-09T15:58:44.711387+0000 mgr.y (mgr.14520) 212 : cluster [DBG] pgmap v254: 354 pgs: 15 creating+peering, 17 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: cluster 2026-03-09T15:58:44.895935+0000 mon.a (mon.0) 1897 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: cluster 2026-03-09T15:58:44.895935+0000 mon.a (mon.0) 1897 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: audit 2026-03-09T15:58:44.899639+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]': finished 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: audit 2026-03-09T15:58:44.899639+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]': finished 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: audit 2026-03-09T15:58:44.899716+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: audit 2026-03-09T15:58:44.899716+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: cluster 2026-03-09T15:58:44.914020+0000 mon.a (mon.0) 1900 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T15:58:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:45 vm09 bash[22983]: cluster 2026-03-09T15:58:44.914020+0000 mon.a (mon.0) 1900 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: audit 2026-03-09T15:58:44.425530+0000 mon.a (mon.0) 1896 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: audit 2026-03-09T15:58:44.425530+0000 mon.a (mon.0) 1896 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: cluster 2026-03-09T15:58:44.711387+0000 mgr.y (mgr.14520) 212 : cluster [DBG] pgmap v254: 354 pgs: 15 creating+peering, 17 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: cluster 2026-03-09T15:58:44.711387+0000 mgr.y (mgr.14520) 212 : cluster [DBG] pgmap v254: 354 pgs: 15 creating+peering, 17 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: cluster 2026-03-09T15:58:44.895935+0000 mon.a (mon.0) 1897 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: cluster 2026-03-09T15:58:44.895935+0000 mon.a (mon.0) 1897 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: audit 2026-03-09T15:58:44.899639+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]': finished 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: audit 2026-03-09T15:58:44.899639+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]': finished 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: audit 2026-03-09T15:58:44.899716+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: audit 2026-03-09T15:58:44.899716+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: cluster 2026-03-09T15:58:44.914020+0000 mon.a (mon.0) 1900 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:45 vm01 bash[28152]: cluster 2026-03-09T15:58:44.914020+0000 mon.a (mon.0) 1900 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: audit 2026-03-09T15:58:44.425530+0000 mon.a (mon.0) 1896 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: audit 2026-03-09T15:58:44.425530+0000 mon.a (mon.0) 1896 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: cluster 2026-03-09T15:58:44.711387+0000 mgr.y (mgr.14520) 212 : cluster [DBG] pgmap v254: 354 pgs: 15 creating+peering, 17 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: cluster 2026-03-09T15:58:44.711387+0000 mgr.y (mgr.14520) 212 : cluster [DBG] pgmap v254: 354 pgs: 15 creating+peering, 17 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: cluster 2026-03-09T15:58:44.895935+0000 mon.a (mon.0) 1897 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: cluster 2026-03-09T15:58:44.895935+0000 mon.a (mon.0) 1897 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: audit 2026-03-09T15:58:44.899639+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]': finished 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: audit 2026-03-09T15:58:44.899639+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-23", "mode": "writeback"}]': finished 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: audit 2026-03-09T15:58:44.899716+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: audit 2026-03-09T15:58:44.899716+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: cluster 2026-03-09T15:58:44.914020+0000 mon.a (mon.0) 1900 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T15:58:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:45 vm01 bash[20728]: cluster 2026-03-09T15:58:44.914020+0000 mon.a (mon.0) 1900 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T15:58:46.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:58:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.426441+0000 mon.a (mon.0) 1901 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.426441+0000 mon.a (mon.0) 1901 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.903225+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.903225+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: cluster 2026-03-09T15:58:45.906105+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: cluster 2026-03-09T15:58:45.906105+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.918549+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 
192.168.123.101:0/1510516275' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.918549+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.101:0/1510516275' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.923424+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.923424+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.950549+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.950549+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.950814+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:46 vm09 bash[22983]: audit 2026-03-09T15:58:45.950814+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.426441+0000 mon.a (mon.0) 1901 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.426441+0000 mon.a (mon.0) 1901 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.903225+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.903225+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: cluster 2026-03-09T15:58:45.906105+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: cluster 2026-03-09T15:58:45.906105+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.918549+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.101:0/1510516275' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.918549+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.101:0/1510516275' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.923424+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.923424+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.950549+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.950549+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.950814+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:46 vm01 bash[28152]: audit 2026-03-09T15:58:45.950814+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.426441+0000 mon.a (mon.0) 1901 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.426441+0000 mon.a (mon.0) 1901 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.903225+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.903225+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm01-59602-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: cluster 2026-03-09T15:58:45.906105+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: cluster 2026-03-09T15:58:45.906105+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.918549+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.101:0/1510516275' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.918549+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.101:0/1510516275' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.923424+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.923424+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.950549+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.950549+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.950814+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:46 vm01 bash[20728]: audit 2026-03-09T15:58:45.950814+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.344001+0000 mgr.y (mgr.14520) 213 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.344001+0000 mgr.y (mgr.14520) 213 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.427092+0000 mon.a (mon.0) 1906 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.427092+0000 mon.a (mon.0) 1906 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: cluster 2026-03-09T15:58:46.711823+0000 mgr.y (mgr.14520) 214 : cluster [DBG] pgmap v257: 394 pgs: 15 creating+peering, 57 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: cluster 2026-03-09T15:58:46.711823+0000 mgr.y (mgr.14520) 214 : cluster [DBG] pgmap v257: 394 pgs: 15 creating+peering, 57 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.907015+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.907015+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.907053+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.907053+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: cluster 2026-03-09T15:58:46.910514+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: cluster 2026-03-09T15:58:46.910514+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.922539+0000 mon.c (mon.2) 190 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.922539+0000 mon.c (mon.2) 190 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.922777+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:47 vm09 bash[22983]: audit 2026-03-09T15:58:46.922777+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.344001+0000 mgr.y (mgr.14520) 213 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.344001+0000 mgr.y (mgr.14520) 213 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.427092+0000 mon.a (mon.0) 1906 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.427092+0000 mon.a (mon.0) 1906 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: cluster 2026-03-09T15:58:46.711823+0000 mgr.y (mgr.14520) 214 : cluster [DBG] pgmap v257: 394 pgs: 15 creating+peering, 57 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: cluster 2026-03-09T15:58:46.711823+0000 mgr.y (mgr.14520) 214 : cluster [DBG] pgmap v257: 394 pgs: 15 creating+peering, 57 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.907015+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.907015+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.907053+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.907053+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: cluster 2026-03-09T15:58:46.910514+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: cluster 2026-03-09T15:58:46.910514+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.922539+0000 mon.c (mon.2) 190 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.922539+0000 mon.c (mon.2) 190 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.922777+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:47 vm01 bash[28152]: audit 2026-03-09T15:58:46.922777+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.344001+0000 mgr.y (mgr.14520) 213 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.344001+0000 mgr.y (mgr.14520) 213 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.427092+0000 mon.a (mon.0) 1906 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.427092+0000 mon.a (mon.0) 1906 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: cluster 2026-03-09T15:58:46.711823+0000 mgr.y (mgr.14520) 214 : cluster [DBG] pgmap v257: 394 pgs: 15 creating+peering, 57 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: cluster 2026-03-09T15:58:46.711823+0000 mgr.y (mgr.14520) 214 : cluster [DBG] pgmap v257: 394 pgs: 15 creating+peering, 57 unknown, 322 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.907015+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.907015+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.907053+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.907053+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: cluster 2026-03-09T15:58:46.910514+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: cluster 2026-03-09T15:58:46.910514+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.922539+0000 mon.c (mon.2) 190 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.922539+0000 mon.c (mon.2) 190 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.922777+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:47 vm01 bash[20728]: audit 2026-03-09T15:58:46.922777+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]: dispatch 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: cluster 2026-03-09T15:58:47.351579+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: cluster 2026-03-09T15:58:47.351579+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: audit 2026-03-09T15:58:47.427790+0000 mon.a (mon.0) 1912 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: audit 2026-03-09T15:58:47.427790+0000 mon.a (mon.0) 1912 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: cluster 2026-03-09T15:58:47.907454+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: cluster 2026-03-09T15:58:47.907454+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: audit 2026-03-09T15:58:47.916089+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: audit 2026-03-09T15:58:47.916089+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: cluster 2026-03-09T15:58:47.921585+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: cluster 2026-03-09T15:58:47.921585+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: audit 2026-03-09T15:58:47.922740+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 
192.168.123.101:0/3037705887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: audit 2026-03-09T15:58:47.922740+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 192.168.123.101:0/3037705887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: audit 2026-03-09T15:58:47.952605+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: audit 2026-03-09T15:58:47.952605+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: audit 2026-03-09T15:58:47.952922+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:48 vm09 bash[22983]: audit 2026-03-09T15:58:47.952922+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: cluster 2026-03-09T15:58:47.351579+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: cluster 2026-03-09T15:58:47.351579+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: audit 2026-03-09T15:58:47.427790+0000 mon.a (mon.0) 1912 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: audit 2026-03-09T15:58:47.427790+0000 mon.a (mon.0) 1912 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: cluster 2026-03-09T15:58:47.907454+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: cluster 2026-03-09T15:58:47.907454+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: audit 2026-03-09T15:58:47.916089+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: audit 2026-03-09T15:58:47.916089+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: cluster 2026-03-09T15:58:47.921585+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: cluster 2026-03-09T15:58:47.921585+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: audit 2026-03-09T15:58:47.922740+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 192.168.123.101:0/3037705887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: audit 2026-03-09T15:58:47.922740+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 192.168.123.101:0/3037705887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: audit 2026-03-09T15:58:47.952605+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: audit 2026-03-09T15:58:47.952605+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: audit 2026-03-09T15:58:47.952922+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:48 vm01 bash[28152]: audit 2026-03-09T15:58:47.952922+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: cluster 2026-03-09T15:58:47.351579+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: cluster 2026-03-09T15:58:47.351579+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: audit 2026-03-09T15:58:47.427790+0000 mon.a (mon.0) 1912 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: audit 2026-03-09T15:58:47.427790+0000 mon.a (mon.0) 1912 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: cluster 2026-03-09T15:58:47.907454+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: cluster 2026-03-09T15:58:47.907454+0000 mon.a (mon.0) 1913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: audit 2026-03-09T15:58:47.916089+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: audit 2026-03-09T15:58:47.916089+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-23"}]': finished 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: cluster 2026-03-09T15:58:47.921585+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: cluster 2026-03-09T15:58:47.921585+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: audit 2026-03-09T15:58:47.922740+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 
192.168.123.101:0/3037705887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: audit 2026-03-09T15:58:47.922740+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 192.168.123.101:0/3037705887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: audit 2026-03-09T15:58:47.952605+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:48.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: audit 2026-03-09T15:58:47.952605+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:48.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: audit 2026-03-09T15:58:47.952922+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:48.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:48 vm01 bash[20728]: audit 2026-03-09T15:58:47.952922+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: audit 2026-03-09T15:58:48.428445+0000 mon.a (mon.0) 1918 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: audit 2026-03-09T15:58:48.428445+0000 mon.a (mon.0) 1918 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: cluster 2026-03-09T15:58:48.712248+0000 mgr.y (mgr.14520) 215 : cluster [DBG] pgmap v260: 418 pgs: 17 creating+peering, 69 unknown, 332 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: cluster 2026-03-09T15:58:48.712248+0000 mgr.y (mgr.14520) 215 : cluster [DBG] pgmap v260: 418 pgs: 17 creating+peering, 69 unknown, 332 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: audit 2026-03-09T15:58:48.807955+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? 
192.168.123.101:0/3037705887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: audit 2026-03-09T15:58:48.807955+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? 192.168.123.101:0/3037705887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: audit 2026-03-09T15:58:48.808025+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: audit 2026-03-09T15:58:48.808025+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: cluster 2026-03-09T15:58:48.814377+0000 mon.a (mon.0) 1921 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: cluster 2026-03-09T15:58:48.814377+0000 mon.a (mon.0) 1921 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: audit 2026-03-09T15:58:48.830439+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: audit 2026-03-09T15:58:48.830439+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: audit 2026-03-09T15:58:48.863538+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:49 vm09 bash[22983]: audit 2026-03-09T15:58:48.863538+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: audit 2026-03-09T15:58:48.428445+0000 mon.a (mon.0) 1918 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: audit 2026-03-09T15:58:48.428445+0000 mon.a (mon.0) 1918 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: cluster 2026-03-09T15:58:48.712248+0000 mgr.y (mgr.14520) 215 : cluster [DBG] pgmap v260: 418 pgs: 17 creating+peering, 69 unknown, 332 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: cluster 2026-03-09T15:58:48.712248+0000 mgr.y (mgr.14520) 215 : cluster [DBG] pgmap v260: 418 pgs: 17 creating+peering, 69 unknown, 332 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: audit 2026-03-09T15:58:48.807955+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? 192.168.123.101:0/3037705887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: audit 2026-03-09T15:58:48.807955+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? 192.168.123.101:0/3037705887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: audit 2026-03-09T15:58:48.808025+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: audit 2026-03-09T15:58:48.808025+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: cluster 2026-03-09T15:58:48.814377+0000 mon.a (mon.0) 1921 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: cluster 2026-03-09T15:58:48.814377+0000 mon.a (mon.0) 1921 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: audit 2026-03-09T15:58:48.830439+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: audit 2026-03-09T15:58:48.830439+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: audit 2026-03-09T15:58:48.863538+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:49 vm01 bash[28152]: audit 2026-03-09T15:58:48.863538+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: audit 2026-03-09T15:58:48.428445+0000 mon.a (mon.0) 1918 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: audit 2026-03-09T15:58:48.428445+0000 mon.a (mon.0) 1918 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: cluster 2026-03-09T15:58:48.712248+0000 mgr.y (mgr.14520) 215 : cluster [DBG] pgmap v260: 418 pgs: 17 creating+peering, 69 unknown, 332 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: cluster 2026-03-09T15:58:48.712248+0000 mgr.y (mgr.14520) 215 : cluster [DBG] pgmap v260: 418 pgs: 17 creating+peering, 69 unknown, 332 active+clean; 458 KiB data, 767 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: audit 2026-03-09T15:58:48.807955+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? 192.168.123.101:0/3037705887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: audit 2026-03-09T15:58:48.807955+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? 192.168.123.101:0/3037705887' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: audit 2026-03-09T15:58:48.808025+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: audit 2026-03-09T15:58:48.808025+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: cluster 2026-03-09T15:58:48.814377+0000 mon.a (mon.0) 1921 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: cluster 2026-03-09T15:58:48.814377+0000 mon.a (mon.0) 1921 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: audit 2026-03-09T15:58:48.830439+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: audit 2026-03-09T15:58:48.830439+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.101:0/1224792168' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: audit 2026-03-09T15:58:48.863538+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:49 vm01 bash[20728]: audit 2026-03-09T15:58:48.863538+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]: dispatch 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.429066+0000 mon.a (mon.0) 1923 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.429066+0000 mon.a (mon.0) 1923 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.811848+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.811848+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: cluster 2026-03-09T15:58:49.828763+0000 mon.a (mon.0) 1925 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: cluster 2026-03-09T15:58:49.828763+0000 mon.a (mon.0) 1925 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.847422+0000 mon.c (mon.2) 193 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.847422+0000 mon.c (mon.2) 193 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.847768+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.847768+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.853534+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.101:0/1960693196' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.853534+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.101:0/1960693196' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.885 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.857036+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.886 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.857036+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.886 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.864591+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.886 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.864591+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.886 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.865080+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? 
192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.886 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.865080+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.886 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.865293+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:50.886 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:49.865293+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:50.886 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:50.429972+0000 mon.a (mon.0) 1931 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:50.886 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:50 vm09 bash[22983]: audit 2026-03-09T15:58:50.429972+0000 mon.a (mon.0) 1931 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.429066+0000 mon.a (mon.0) 1923 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.429066+0000 mon.a (mon.0) 1923 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.811848+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.811848+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: cluster 2026-03-09T15:58:49.828763+0000 mon.a (mon.0) 1925 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: cluster 2026-03-09T15:58:49.828763+0000 mon.a (mon.0) 1925 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.847422+0000 mon.c (mon.2) 193 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.847422+0000 mon.c (mon.2) 193 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.847768+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.847768+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.853534+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.101:0/1960693196' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.853534+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.101:0/1960693196' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.857036+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.857036+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.864591+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.864591+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.865080+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? 
192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.865080+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.865293+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:49.865293+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:50.429972+0000 mon.a (mon.0) 1931 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:50 vm01 bash[20728]: audit 2026-03-09T15:58:50.429972+0000 mon.a (mon.0) 1931 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.429066+0000 mon.a (mon.0) 1923 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.429066+0000 mon.a (mon.0) 1923 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.811848+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.811848+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm01-59602-34"}]': finished 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: cluster 2026-03-09T15:58:49.828763+0000 mon.a (mon.0) 1925 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: cluster 2026-03-09T15:58:49.828763+0000 mon.a (mon.0) 1925 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.847422+0000 mon.c (mon.2) 193 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.847422+0000 mon.c (mon.2) 193 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.847768+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.847768+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.853534+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.101:0/1960693196' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.853534+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.101:0/1960693196' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.857036+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.857036+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.864591+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.864591+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.865080+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? 
192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.865080+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.865293+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:49.865293+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:50.429972+0000 mon.a (mon.0) 1931 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:50.928 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:50 vm01 bash[28152]: audit 2026-03-09T15:58:50.429972+0000 mon.a (mon.0) 1931 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: cluster 2026-03-09T15:58:50.712715+0000 mgr.y (mgr.14520) 216 : cluster [DBG] pgmap v263: 450 pgs: 64 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: cluster 2026-03-09T15:58:50.712715+0000 mgr.y (mgr.14520) 216 : cluster [DBG] pgmap v263: 450 pgs: 64 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: audit 2026-03-09T15:58:50.815398+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: audit 2026-03-09T15:58:50.815398+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: audit 2026-03-09T15:58:50.815473+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: audit 2026-03-09T15:58:50.815473+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: audit 2026-03-09T15:58:50.815520+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: audit 2026-03-09T15:58:50.815520+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: cluster 2026-03-09T15:58:50.822101+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: cluster 2026-03-09T15:58:50.822101+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: audit 2026-03-09T15:58:50.830367+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: audit 2026-03-09T15:58:50.830367+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: audit 2026-03-09T15:58:51.430779+0000 mon.a (mon.0) 1937 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:51 vm09 bash[22983]: audit 2026-03-09T15:58:51.430779+0000 mon.a (mon.0) 1937 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: cluster 2026-03-09T15:58:50.712715+0000 mgr.y (mgr.14520) 216 : cluster [DBG] pgmap v263: 450 pgs: 64 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: cluster 2026-03-09T15:58:50.712715+0000 mgr.y (mgr.14520) 216 : cluster [DBG] pgmap v263: 450 pgs: 64 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: audit 2026-03-09T15:58:50.815398+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: audit 2026-03-09T15:58:50.815398+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: audit 2026-03-09T15:58:50.815473+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: audit 2026-03-09T15:58:50.815473+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: audit 2026-03-09T15:58:50.815520+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: audit 2026-03-09T15:58:50.815520+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: cluster 2026-03-09T15:58:50.822101+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: cluster 2026-03-09T15:58:50.822101+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: audit 2026-03-09T15:58:50.830367+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 
192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: audit 2026-03-09T15:58:50.830367+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: audit 2026-03-09T15:58:51.430779+0000 mon.a (mon.0) 1937 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:51 vm01 bash[20728]: audit 2026-03-09T15:58:51.430779+0000 mon.a (mon.0) 1937 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: cluster 2026-03-09T15:58:50.712715+0000 mgr.y (mgr.14520) 216 : cluster [DBG] pgmap v263: 450 pgs: 64 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: cluster 2026-03-09T15:58:50.712715+0000 mgr.y (mgr.14520) 216 : cluster [DBG] pgmap v263: 450 pgs: 64 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: audit 2026-03-09T15:58:50.815398+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: audit 2026-03-09T15:58:50.815398+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: audit 2026-03-09T15:58:50.815473+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: audit 2026-03-09T15:58:50.815473+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: audit 2026-03-09T15:58:50.815520+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? 
192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: audit 2026-03-09T15:58:50.815520+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm01-59602-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: cluster 2026-03-09T15:58:50.822101+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: cluster 2026-03-09T15:58:50.822101+0000 mon.a (mon.0) 1935 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: audit 2026-03-09T15:58:50.830367+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: audit 2026-03-09T15:58:50.830367+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: audit 2026-03-09T15:58:51.430779+0000 mon.a (mon.0) 1937 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:52.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:51 vm01 bash[28152]: audit 2026-03-09T15:58:51.430779+0000 mon.a (mon.0) 1937 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:58:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:58:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:51.857080+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.101:0/3267241440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:51.857080+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 
192.168.123.101:0/3267241440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: cluster 2026-03-09T15:58:51.859716+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: cluster 2026-03-09T15:58:51.859716+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:51.865533+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:51.865533+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:51.867203+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:51.867203+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:51.867338+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:51.867338+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.431477+0000 mon.a (mon.0) 1941 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.431477+0000 mon.a (mon.0) 1941 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.822996+0000 mon.a (mon.0) 1942 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.822996+0000 mon.a (mon.0) 1942 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.823105+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.823105+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.823507+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.823507+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: cluster 2026-03-09T15:58:52.828677+0000 mon.a (mon.0) 1945 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: cluster 2026-03-09T15:58:52.828677+0000 mon.a (mon.0) 1945 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.829164+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.829164+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.858109+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:53.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:52 vm01 bash[28152]: audit 2026-03-09T15:58:52.858109+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:51.857080+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.101:0/3267241440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:51.857080+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.101:0/3267241440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: cluster 2026-03-09T15:58:51.859716+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: cluster 2026-03-09T15:58:51.859716+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:51.865533+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:51.865533+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:51.867203+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:51.867203+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:51.867338+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:51.867338+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.431477+0000 mon.a (mon.0) 1941 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.431477+0000 mon.a (mon.0) 1941 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.822996+0000 mon.a (mon.0) 1942 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.822996+0000 mon.a (mon.0) 1942 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.823105+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.823105+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.823507+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.823507+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: cluster 2026-03-09T15:58:52.828677+0000 mon.a (mon.0) 1945 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: cluster 2026-03-09T15:58:52.828677+0000 mon.a (mon.0) 1945 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.829164+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.829164+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.858109+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:53.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:52 vm01 bash[20728]: audit 2026-03-09T15:58:52.858109+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:52 vm09 bash[22983]: audit 2026-03-09T15:58:51.857080+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.101:0/3267241440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:52 vm09 bash[22983]: audit 2026-03-09T15:58:51.857080+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 
192.168.123.101:0/3267241440' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:52 vm09 bash[22983]: cluster 2026-03-09T15:58:51.859716+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:52 vm09 bash[22983]: cluster 2026-03-09T15:58:51.859716+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:52 vm09 bash[22983]: audit 2026-03-09T15:58:51.865533+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:52 vm09 bash[22983]: audit 2026-03-09T15:58:51.865533+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:52 vm09 bash[22983]: audit 2026-03-09T15:58:51.867203+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:52 vm09 bash[22983]: audit 2026-03-09T15:58:51.867203+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:51.867338+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:51.867338+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.431477+0000 mon.a (mon.0) 1941 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.431477+0000 mon.a (mon.0) 1941 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.822996+0000 mon.a (mon.0) 1942 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.822996+0000 mon.a (mon.0) 1942 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm01-59602-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.823105+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.823105+0000 mon.a (mon.0) 1943 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.823507+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.823507+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm01-59610-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: cluster 2026-03-09T15:58:52.828677+0000 mon.a (mon.0) 1945 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: cluster 2026-03-09T15:58:52.828677+0000 mon.a (mon.0) 1945 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.829164+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.829164+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.858109+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:52.858109+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: cluster 2026-03-09T15:58:52.713173+0000 mgr.y (mgr.14520) 217 : cluster [DBG] pgmap v266: 482 pgs: 96 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: cluster 2026-03-09T15:58:52.713173+0000 mgr.y (mgr.14520) 217 : cluster [DBG] pgmap v266: 482 pgs: 96 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: cluster 2026-03-09T15:58:52.878702+0000 mon.a (mon.0) 1947 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: cluster 2026-03-09T15:58:52.878702+0000 mon.a (mon.0) 1947 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:53.432231+0000 mon.a (mon.0) 1948 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:53.432231+0000 mon.a (mon.0) 1948 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:53.810915+0000 mon.a (mon.0) 1949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:53.810915+0000 mon.a (mon.0) 1949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:53.817370+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:53.817370+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: cluster 2026-03-09T15:58:53.820381+0000 mon.a (mon.0) 1950 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: cluster 2026-03-09T15:58:53.820381+0000 mon.a (mon.0) 1950 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:53.822849+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:53 vm09 bash[22983]: audit 2026-03-09T15:58:53.822849+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: cluster 2026-03-09T15:58:52.713173+0000 mgr.y (mgr.14520) 217 : cluster [DBG] pgmap v266: 482 pgs: 96 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: cluster 2026-03-09T15:58:52.713173+0000 mgr.y (mgr.14520) 217 : cluster [DBG] pgmap v266: 482 pgs: 96 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: cluster 2026-03-09T15:58:52.878702+0000 mon.a (mon.0) 1947 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: cluster 2026-03-09T15:58:52.878702+0000 mon.a (mon.0) 1947 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: audit 2026-03-09T15:58:53.432231+0000 mon.a (mon.0) 1948 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: audit 2026-03-09T15:58:53.432231+0000 mon.a (mon.0) 1948 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: audit 2026-03-09T15:58:53.810915+0000 mon.a (mon.0) 1949 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: audit 2026-03-09T15:58:53.810915+0000 mon.a (mon.0) 1949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: audit 2026-03-09T15:58:53.817370+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: audit 2026-03-09T15:58:53.817370+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: cluster 2026-03-09T15:58:53.820381+0000 mon.a (mon.0) 1950 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: cluster 2026-03-09T15:58:53.820381+0000 mon.a (mon.0) 1950 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: audit 2026-03-09T15:58:53.822849+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:53 vm01 bash[28152]: audit 2026-03-09T15:58:53.822849+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: cluster 2026-03-09T15:58:52.713173+0000 mgr.y (mgr.14520) 217 : cluster [DBG] pgmap v266: 482 pgs: 96 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: cluster 2026-03-09T15:58:52.713173+0000 mgr.y (mgr.14520) 217 : cluster [DBG] pgmap v266: 482 pgs: 96 unknown, 12 creating+peering, 374 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: cluster 2026-03-09T15:58:52.878702+0000 mon.a (mon.0) 1947 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: cluster 2026-03-09T15:58:52.878702+0000 mon.a (mon.0) 1947 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: audit 2026-03-09T15:58:53.432231+0000 mon.a (mon.0) 1948 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: audit 2026-03-09T15:58:53.432231+0000 mon.a (mon.0) 1948 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: audit 2026-03-09T15:58:53.810915+0000 mon.a (mon.0) 1949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: audit 2026-03-09T15:58:53.810915+0000 mon.a (mon.0) 1949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: audit 2026-03-09T15:58:53.817370+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: audit 2026-03-09T15:58:53.817370+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: cluster 2026-03-09T15:58:53.820381+0000 mon.a (mon.0) 1950 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: cluster 2026-03-09T15:58:53.820381+0000 mon.a (mon.0) 1950 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: audit 2026-03-09T15:58:53.822849+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:53 vm01 bash[20728]: audit 2026-03-09T15:58:53.822849+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]: dispatch 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: audit 2026-03-09T15:58:54.433053+0000 mon.a (mon.0) 1952 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: audit 2026-03-09T15:58:54.433053+0000 mon.a (mon.0) 1952 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: cluster 2026-03-09T15:58:54.811157+0000 mon.a (mon.0) 1953 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: cluster 2026-03-09T15:58:54.811157+0000 mon.a (mon.0) 1953 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: audit 2026-03-09T15:58:54.869420+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]': finished 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: audit 2026-03-09T15:58:54.869420+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]': finished 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: cluster 2026-03-09T15:58:54.884688+0000 mon.a (mon.0) 1955 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: cluster 2026-03-09T15:58:54.884688+0000 mon.a (mon.0) 1955 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: audit 2026-03-09T15:58:54.890891+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? 
192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: audit 2026-03-09T15:58:54.890891+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: audit 2026-03-09T15:58:54.955934+0000 mon.c (mon.2) 198 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: audit 2026-03-09T15:58:54.955934+0000 mon.c (mon.2) 198 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: audit 2026-03-09T15:58:54.956322+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:55 vm09 bash[22983]: audit 2026-03-09T15:58:54.956322+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: audit 2026-03-09T15:58:54.433053+0000 mon.a (mon.0) 1952 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: audit 2026-03-09T15:58:54.433053+0000 mon.a (mon.0) 1952 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: cluster 2026-03-09T15:58:54.811157+0000 mon.a (mon.0) 1953 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: cluster 2026-03-09T15:58:54.811157+0000 mon.a (mon.0) 1953 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: audit 2026-03-09T15:58:54.869420+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]': finished 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: audit 2026-03-09T15:58:54.869420+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]': finished 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: cluster 2026-03-09T15:58:54.884688+0000 mon.a (mon.0) 1955 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: cluster 2026-03-09T15:58:54.884688+0000 mon.a (mon.0) 1955 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: audit 2026-03-09T15:58:54.890891+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: audit 2026-03-09T15:58:54.890891+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: audit 2026-03-09T15:58:54.955934+0000 mon.c (mon.2) 198 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: audit 2026-03-09T15:58:54.955934+0000 mon.c (mon.2) 198 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: audit 2026-03-09T15:58:54.956322+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:55 vm01 bash[28152]: audit 2026-03-09T15:58:54.956322+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: audit 2026-03-09T15:58:54.433053+0000 mon.a (mon.0) 1952 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: audit 2026-03-09T15:58:54.433053+0000 mon.a (mon.0) 1952 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: cluster 2026-03-09T15:58:54.811157+0000 mon.a (mon.0) 1953 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: cluster 2026-03-09T15:58:54.811157+0000 mon.a (mon.0) 1953 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: audit 2026-03-09T15:58:54.869420+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]': finished 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: audit 2026-03-09T15:58:54.869420+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-25", "mode": "writeback"}]': finished 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: cluster 2026-03-09T15:58:54.884688+0000 mon.a (mon.0) 1955 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: cluster 2026-03-09T15:58:54.884688+0000 mon.a (mon.0) 1955 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: audit 2026-03-09T15:58:54.890891+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: audit 2026-03-09T15:58:54.890891+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: audit 2026-03-09T15:58:54.955934+0000 mon.c (mon.2) 198 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: audit 2026-03-09T15:58:54.955934+0000 mon.c (mon.2) 198 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: audit 2026-03-09T15:58:54.956322+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:55.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:55 vm01 bash[20728]: audit 2026-03-09T15:58:54.956322+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:58:56.349 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: cluster 2026-03-09T15:58:54.714219+0000 mgr.y (mgr.14520) 218 : cluster [DBG] pgmap v269: 458 pgs: 3 creating+activating, 3 creating+peering, 452 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: cluster 2026-03-09T15:58:54.714219+0000 mgr.y (mgr.14520) 218 : cluster [DBG] pgmap v269: 458 pgs: 3 creating+activating, 3 creating+peering, 452 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.433952+0000 mon.a (mon.0) 1958 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.433952+0000 mon.a (mon.0) 1958 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.953812+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.953812+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.953947+0000 mon.a (mon.0) 1960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.953947+0000 mon.a (mon.0) 1960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: cluster 2026-03-09T15:58:55.959640+0000 mon.a (mon.0) 1961 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: cluster 2026-03-09T15:58:55.959640+0000 mon.a (mon.0) 1961 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.961719+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? 
192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.961719+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.971630+0000 mon.c (mon.2) 199 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.971630+0000 mon.c (mon.2) 199 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.973256+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.350 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:56 vm09 bash[22983]: audit 2026-03-09T15:58:55.973256+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: cluster 2026-03-09T15:58:54.714219+0000 mgr.y (mgr.14520) 218 : cluster [DBG] pgmap v269: 458 pgs: 3 creating+activating, 3 creating+peering, 452 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: cluster 2026-03-09T15:58:54.714219+0000 mgr.y (mgr.14520) 218 : cluster [DBG] pgmap v269: 458 pgs: 3 creating+activating, 3 creating+peering, 452 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.433952+0000 mon.a (mon.0) 1958 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.433952+0000 mon.a (mon.0) 1958 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.953812+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? 
192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.953812+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.953947+0000 mon.a (mon.0) 1960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.953947+0000 mon.a (mon.0) 1960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: cluster 2026-03-09T15:58:55.959640+0000 mon.a (mon.0) 1961 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: cluster 2026-03-09T15:58:55.959640+0000 mon.a (mon.0) 1961 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.961719+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.961719+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.971630+0000 mon.c (mon.2) 199 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.971630+0000 mon.c (mon.2) 199 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.973256+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:56 vm01 bash[28152]: audit 2026-03-09T15:58:55.973256+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: cluster 2026-03-09T15:58:54.714219+0000 mgr.y (mgr.14520) 218 : cluster [DBG] pgmap v269: 458 pgs: 3 creating+activating, 3 creating+peering, 452 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: cluster 2026-03-09T15:58:54.714219+0000 mgr.y (mgr.14520) 218 : cluster [DBG] pgmap v269: 458 pgs: 3 creating+activating, 3 creating+peering, 452 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.433952+0000 mon.a (mon.0) 1958 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.433952+0000 mon.a (mon.0) 1958 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.953812+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.953812+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.953947+0000 mon.a (mon.0) 1960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.953947+0000 mon.a (mon.0) 1960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: cluster 2026-03-09T15:58:55.959640+0000 mon.a (mon.0) 1961 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: cluster 2026-03-09T15:58:55.959640+0000 mon.a (mon.0) 1961 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.961719+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? 
192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.961719+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.971630+0000 mon.c (mon.2) 199 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.971630+0000 mon.c (mon.2) 199 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.973256+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:56 vm01 bash[20728]: audit 2026-03-09T15:58:55.973256+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]: dispatch 2026-03-09T15:58:56.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:58:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.434657+0000 mon.a (mon.0) 1964 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.434657+0000 mon.a (mon.0) 1964 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.715422+0000 mon.a (mon.0) 1965 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.715422+0000 mon.a (mon.0) 1965 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: cluster 2026-03-09T15:58:56.954616+0000 mon.a (mon.0) 1966 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: cluster 2026-03-09T15:58:56.954616+0000 mon.a (mon.0) 1966 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.959539+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.959539+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.959602+0000 mon.a (mon.0) 1968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.959602+0000 mon.a (mon.0) 1968 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.959632+0000 mon.a (mon.0) 1969 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.959632+0000 mon.a (mon.0) 1969 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: cluster 2026-03-09T15:58:56.971610+0000 mon.a (mon.0) 1970 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: cluster 2026-03-09T15:58:56.971610+0000 mon.a (mon.0) 1970 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.986439+0000 mon.c (mon.2) 200 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.986439+0000 mon.c (mon.2) 200 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.987259+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.987259+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.987704+0000 mon.c (mon.2) 201 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.987704+0000 mon.c (mon.2) 201 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.987938+0000 mon.a (mon.0) 1972 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.987938+0000 mon.a (mon.0) 1972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.988253+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:57.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.988253+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:57.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.988415+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:57.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:57 vm09 bash[22983]: audit 2026-03-09T15:58:56.988415+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.434657+0000 mon.a (mon.0) 1964 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.434657+0000 mon.a (mon.0) 1964 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.715422+0000 mon.a (mon.0) 1965 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.715422+0000 mon.a (mon.0) 1965 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: cluster 2026-03-09T15:58:56.954616+0000 mon.a (mon.0) 1966 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: cluster 2026-03-09T15:58:56.954616+0000 mon.a (mon.0) 1966 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.959539+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.959539+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.959602+0000 mon.a (mon.0) 1968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.959602+0000 mon.a (mon.0) 1968 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.959632+0000 mon.a (mon.0) 1969 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.959632+0000 mon.a (mon.0) 1969 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: cluster 2026-03-09T15:58:56.971610+0000 mon.a (mon.0) 1970 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: cluster 2026-03-09T15:58:56.971610+0000 mon.a (mon.0) 1970 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.986439+0000 mon.c (mon.2) 200 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.986439+0000 mon.c (mon.2) 200 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.434657+0000 mon.a (mon.0) 1964 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.434657+0000 mon.a (mon.0) 1964 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.715422+0000 mon.a (mon.0) 1965 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.715422+0000 mon.a (mon.0) 1965 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: cluster 2026-03-09T15:58:56.954616+0000 mon.a (mon.0) 1966 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: cluster 2026-03-09T15:58:56.954616+0000 mon.a (mon.0) 1966 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.959539+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.959539+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? 192.168.123.101:0/2870554016' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm01-59602-35"}]': finished 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.959602+0000 mon.a (mon.0) 1968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.959602+0000 mon.a (mon.0) 1968 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-25"}]': finished 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.959632+0000 mon.a (mon.0) 1969 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.959632+0000 mon.a (mon.0) 1969 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: cluster 2026-03-09T15:58:56.971610+0000 mon.a (mon.0) 1970 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: cluster 2026-03-09T15:58:56.971610+0000 mon.a (mon.0) 1970 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.986439+0000 mon.c (mon.2) 200 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.986439+0000 mon.c (mon.2) 200 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.987259+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.987259+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.987704+0000 mon.c (mon.2) 201 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.987704+0000 mon.c (mon.2) 201 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.987938+0000 mon.a (mon.0) 1972 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.987938+0000 mon.a (mon.0) 1972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.988253+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.988253+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.988415+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:57 vm01 bash[28152]: audit 2026-03-09T15:58:56.988415+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.987259+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.987259+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.987704+0000 mon.c (mon.2) 201 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.987704+0000 mon.c (mon.2) 201 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.987938+0000 mon.a (mon.0) 1972 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.987938+0000 mon.a (mon.0) 1972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.988253+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.988253+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.988415+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:57.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:57 vm01 bash[20728]: audit 2026-03-09T15:58:56.988415+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:56.352397+0000 mgr.y (mgr.14520) 219 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:56.352397+0000 mgr.y (mgr.14520) 219 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: cluster 2026-03-09T15:58:56.714655+0000 mgr.y (mgr.14520) 220 : cluster [DBG] pgmap v272: 386 pgs: 386 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: cluster 2026-03-09T15:58:56.714655+0000 mgr.y (mgr.14520) 220 : cluster [DBG] pgmap v272: 386 pgs: 386 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.435382+0000 mon.a (mon.0) 1974 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.435382+0000 mon.a (mon.0) 1974 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.435823+0000 mon.a (mon.0) 1975 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.435823+0000 mon.a (mon.0) 1975 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.436136+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.436136+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.962237+0000 mon.a (mon.0) 1977 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.962237+0000 mon.a (mon.0) 1977 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.962410+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.962410+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: cluster 2026-03-09T15:58:57.968756+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: cluster 2026-03-09T15:58:57.968756+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.973045+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.973045+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.977004+0000 mon.a (mon.0) 1980 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.977004+0000 mon.a (mon.0) 1980 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.979128+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:58 vm09 bash[22983]: audit 2026-03-09T15:58:57.979128+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:56.352397+0000 mgr.y (mgr.14520) 219 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:56.352397+0000 mgr.y (mgr.14520) 219 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: cluster 2026-03-09T15:58:56.714655+0000 mgr.y (mgr.14520) 220 : cluster [DBG] pgmap v272: 386 pgs: 386 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: cluster 2026-03-09T15:58:56.714655+0000 mgr.y (mgr.14520) 220 : cluster [DBG] pgmap v272: 386 pgs: 386 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.435382+0000 mon.a (mon.0) 1974 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.435382+0000 mon.a (mon.0) 1974 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.435823+0000 mon.a (mon.0) 1975 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.435823+0000 mon.a (mon.0) 1975 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.436136+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.436136+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.962237+0000 mon.a (mon.0) 1977 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.962237+0000 mon.a (mon.0) 1977 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.962410+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.962410+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: cluster 2026-03-09T15:58:57.968756+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: cluster 2026-03-09T15:58:57.968756+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.973045+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.973045+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.977004+0000 mon.a (mon.0) 1980 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.977004+0000 mon.a (mon.0) 1980 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.979128+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:58 vm01 bash[28152]: audit 2026-03-09T15:58:57.979128+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:56.352397+0000 mgr.y (mgr.14520) 219 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:56.352397+0000 mgr.y (mgr.14520) 219 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: cluster 2026-03-09T15:58:56.714655+0000 mgr.y (mgr.14520) 220 : cluster [DBG] pgmap v272: 386 pgs: 386 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: cluster 2026-03-09T15:58:56.714655+0000 mgr.y (mgr.14520) 220 : cluster [DBG] pgmap v272: 386 pgs: 386 active+clean; 458 KiB data, 768 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T15:58:58.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.435382+0000 mon.a (mon.0) 1974 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.435382+0000 mon.a (mon.0) 1974 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.435823+0000 mon.a (mon.0) 1975 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.435823+0000 mon.a (mon.0) 1975 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.436136+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.436136+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.962237+0000 mon.a (mon.0) 1977 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.962237+0000 mon.a (mon.0) 1977 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm01-59602-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.962410+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.962410+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm01-59696-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: cluster 2026-03-09T15:58:57.968756+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: cluster 2026-03-09T15:58:57.968756+0000 mon.a (mon.0) 1979 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.973045+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.973045+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.977004+0000 mon.a (mon.0) 1980 : audit [DBG] from='client.? 192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.977004+0000 mon.a (mon.0) 1980 : audit [DBG] from='client.? 
192.168.123.101:0/3706343887' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.979128+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:58.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:58 vm01 bash[20728]: audit 2026-03-09T15:58:57.979128+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:58.716081+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:58.716081+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: cluster 2026-03-09T15:58:58.809022+0000 mon.a (mon.0) 1983 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: cluster 2026-03-09T15:58:58.809022+0000 mon.a (mon.0) 1983 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:59.007690+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:59.007690+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:59.028228+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.101:0/3597294896' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:59.028228+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 
192.168.123.101:0/3597294896' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: cluster 2026-03-09T15:58:59.029843+0000 mon.a (mon.0) 1985 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: cluster 2026-03-09T15:58:59.029843+0000 mon.a (mon.0) 1985 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:59.031045+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:59.031045+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:59.038902+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:59.038902+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:59.038994+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:58:59 vm09 bash[22983]: audit 2026-03-09T15:58:59.038994+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:58.716081+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:58.716081+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: cluster 2026-03-09T15:58:58.809022+0000 mon.a (mon.0) 1983 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: cluster 2026-03-09T15:58:58.809022+0000 mon.a (mon.0) 1983 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:59.007690+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:59.007690+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:59.028228+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.101:0/3597294896' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:59.028228+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.101:0/3597294896' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: cluster 2026-03-09T15:58:59.029843+0000 mon.a (mon.0) 1985 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: cluster 2026-03-09T15:58:59.029843+0000 mon.a (mon.0) 1985 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:59.031045+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:59.031045+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:59.038902+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:59.038902+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:59.038994+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:58:59 vm01 bash[28152]: audit 2026-03-09T15:58:59.038994+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:58.716081+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:58.716081+0000 mon.a (mon.0) 1982 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: cluster 2026-03-09T15:58:58.809022+0000 mon.a (mon.0) 1983 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: cluster 2026-03-09T15:58:58.809022+0000 mon.a (mon.0) 1983 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:59.007690+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:59.007690+0000 mon.a (mon.0) 1984 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm01-59696-1", "var": "pgp_num_actual", "val": "28"}]': finished 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:59.028228+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.101:0/3597294896' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:59.028228+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.101:0/3597294896' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: cluster 2026-03-09T15:58:59.029843+0000 mon.a (mon.0) 1985 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: cluster 2026-03-09T15:58:59.029843+0000 mon.a (mon.0) 1985 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:59.031045+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:59.031045+0000 mon.c (mon.2) 204 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:59.038902+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:59.038902+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:59.038994+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:58:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:58:59 vm01 bash[20728]: audit 2026-03-09T15:58:59.038994+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: cluster 2026-03-09T15:58:58.715182+0000 mgr.y (mgr.14520) 221 : cluster [DBG] pgmap v275: 290 pgs: 290 active+clean; 458 KiB data, 769 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: cluster 2026-03-09T15:58:58.715182+0000 mgr.y (mgr.14520) 221 : cluster [DBG] pgmap v275: 290 pgs: 290 active+clean; 458 KiB data, 769 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.051085+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.051085+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.052252+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.052252+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.053343+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.053343+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.076487+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.076487+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.077399+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.077399+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.086019+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.086019+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.117781+0000 mon.a (mon.0) 1991 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:58:59.117781+0000 mon.a (mon.0) 1991 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.011617+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.011617+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.011770+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.011770+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.011922+0000 mon.a (mon.0) 1994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.011922+0000 mon.a (mon.0) 1994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.012065+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.012065+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.028803+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.028803+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: cluster 2026-03-09T15:59:00.035896+0000 mon.a (mon.0) 1996 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: cluster 2026-03-09T15:59:00.035896+0000 mon.a (mon.0) 1996 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.051646+0000 mon.a (mon.0) 1997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:00 vm09 bash[22983]: audit 2026-03-09T15:59:00.051646+0000 mon.a (mon.0) 1997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: cluster 2026-03-09T15:58:58.715182+0000 mgr.y (mgr.14520) 221 : cluster [DBG] pgmap v275: 290 pgs: 290 active+clean; 458 KiB data, 769 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: cluster 2026-03-09T15:58:58.715182+0000 mgr.y (mgr.14520) 221 : cluster [DBG] pgmap v275: 290 pgs: 290 active+clean; 458 KiB data, 769 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.051085+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.051085+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 
192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.052252+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.052252+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.053343+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.053343+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.076487+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.076487+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.077399+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.077399+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.086019+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.086019+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.117781+0000 mon.a (mon.0) 1991 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:58:59.117781+0000 mon.a (mon.0) 1991 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.011617+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.011617+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.011770+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.011770+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.011922+0000 mon.a (mon.0) 1994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.011922+0000 mon.a (mon.0) 1994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.012065+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.012065+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.028803+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.028803+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: cluster 2026-03-09T15:59:00.035896+0000 mon.a (mon.0) 1996 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: cluster 2026-03-09T15:59:00.035896+0000 mon.a (mon.0) 1996 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.051646+0000 mon.a (mon.0) 1997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:00 vm01 bash[28152]: audit 2026-03-09T15:59:00.051646+0000 mon.a (mon.0) 1997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: cluster 2026-03-09T15:58:58.715182+0000 mgr.y (mgr.14520) 221 : cluster [DBG] pgmap v275: 290 pgs: 290 active+clean; 458 KiB data, 769 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: cluster 2026-03-09T15:58:58.715182+0000 mgr.y (mgr.14520) 221 : cluster [DBG] pgmap v275: 290 pgs: 290 active+clean; 458 KiB data, 769 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.051085+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.051085+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 
192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.052252+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.052252+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.053343+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.053343+0000 mon.c (mon.2) 206 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.076487+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.076487+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.077399+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.077399+0000 mon.c (mon.2) 207 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.086019+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.086019+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.117781+0000 mon.a (mon.0) 1991 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:58:59.117781+0000 mon.a (mon.0) 1991 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.011617+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.011617+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm01-59602-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.011770+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.011770+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm01-59610-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.011922+0000 mon.a (mon.0) 1994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.011922+0000 mon.a (mon.0) 1994 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:00.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.012065+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:00.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.012065+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm01-59696-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:00.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.028803+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.028803+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: cluster 2026-03-09T15:59:00.035896+0000 mon.a (mon.0) 1996 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T15:59:00.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: cluster 2026-03-09T15:59:00.035896+0000 mon.a (mon.0) 1996 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T15:59:00.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.051646+0000 mon.a (mon.0) 1997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:00.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:00 vm01 bash[20728]: audit 2026-03-09T15:59:00.051646+0000 mon.a (mon.0) 1997 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:01.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:01 vm01 bash[28152]: cluster 2026-03-09T15:59:00.715690+0000 mgr.y (mgr.14520) 222 : cluster [DBG] pgmap v278: 332 pgs: 37 creating+peering, 35 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:01.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:01 vm01 bash[28152]: cluster 2026-03-09T15:59:00.715690+0000 mgr.y (mgr.14520) 222 : cluster [DBG] pgmap v278: 332 pgs: 37 creating+peering, 35 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:01.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:01 vm01 bash[28152]: cluster 2026-03-09T15:59:01.087842+0000 mon.a (mon.0) 1998 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T15:59:01.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:01 vm01 bash[28152]: cluster 2026-03-09T15:59:01.087842+0000 mon.a (mon.0) 1998 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T15:59:01.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:01 vm01 bash[20728]: cluster 2026-03-09T15:59:00.715690+0000 mgr.y (mgr.14520) 222 : cluster [DBG] pgmap v278: 332 pgs: 37 creating+peering, 35 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:01.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:01 vm01 bash[20728]: cluster 2026-03-09T15:59:00.715690+0000 mgr.y (mgr.14520) 222 : cluster [DBG] pgmap v278: 332 pgs: 37 creating+peering, 35 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:01.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:01 vm01 bash[20728]: cluster 2026-03-09T15:59:01.087842+0000 mon.a (mon.0) 1998 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T15:59:01.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:01 vm01 bash[20728]: cluster 2026-03-09T15:59:01.087842+0000 mon.a (mon.0) 1998 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T15:59:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:01 vm09 bash[22983]: cluster 2026-03-09T15:59:00.715690+0000 mgr.y (mgr.14520) 222 : cluster [DBG] pgmap v278: 332 pgs: 37 creating+peering, 35 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:01 vm09 bash[22983]: cluster 2026-03-09T15:59:00.715690+0000 mgr.y (mgr.14520) 222 : cluster [DBG] pgmap v278: 332 pgs: 37 creating+peering, 35 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:01 vm09 bash[22983]: cluster 2026-03-09T15:59:01.087842+0000 mon.a (mon.0) 1998 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T15:59:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:01 vm09 bash[22983]: cluster 2026-03-09T15:59:01.087842+0000 mon.a (mon.0) 1998 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:01.147349+0000 mon.c (mon.2) 209 : audit [INF] 
from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:01.147349+0000 mon.c (mon.2) 209 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:01.147837+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:01.147837+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.069830+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.069830+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.069975+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.069975+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: cluster 2026-03-09T15:59:02.075126+0000 mon.a (mon.0) 2002 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: cluster 2026-03-09T15:59:02.075126+0000 mon.a (mon.0) 2002 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.079755+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.079755+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.080046+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.080046+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.080513+0000 mon.c (mon.2) 212 : audit [INF] from='client.? 192.168.123.101:0/1947779290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.080513+0000 mon.c (mon.2) 212 : audit [INF] from='client.? 192.168.123.101:0/1947779290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.080680+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.080680+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.081971+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.081971+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.082711+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:02.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:02 vm01 bash[28152]: audit 2026-03-09T15:59:02.082711+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:01.147349+0000 mon.c (mon.2) 209 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:01.147349+0000 mon.c (mon.2) 209 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:01.147837+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:01.147837+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.069830+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.069830+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.069975+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.069975+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: cluster 2026-03-09T15:59:02.075126+0000 mon.a (mon.0) 2002 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: cluster 2026-03-09T15:59:02.075126+0000 mon.a (mon.0) 2002 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.079755+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.079755+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.080046+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.080046+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.080513+0000 mon.c (mon.2) 212 : audit [INF] from='client.? 192.168.123.101:0/1947779290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.080513+0000 mon.c (mon.2) 212 : audit [INF] from='client.? 192.168.123.101:0/1947779290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.080680+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.080680+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.081971+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.081971+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.082711+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:02.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:02 vm01 bash[20728]: audit 2026-03-09T15:59:02.082711+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:01.147349+0000 mon.c (mon.2) 209 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:01.147349+0000 mon.c (mon.2) 209 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:01.147837+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:01.147837+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.069830+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.069830+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm01-59696-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.069975+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.069975+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: cluster 2026-03-09T15:59:02.075126+0000 mon.a (mon.0) 2002 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: cluster 2026-03-09T15:59:02.075126+0000 mon.a (mon.0) 2002 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.079755+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.079755+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.080046+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.080046+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.080513+0000 mon.c (mon.2) 212 : audit [INF] from='client.? 
192.168.123.101:0/1947779290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.080513+0000 mon.c (mon.2) 212 : audit [INF] from='client.? 192.168.123.101:0/1947779290' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.080680+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.080680+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.081971+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.081971+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.082711+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:02 vm09 bash[22983]: audit 2026-03-09T15:59:02.082711+0000 mon.a (mon.0) 2005 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:03.148 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:59:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:59:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:59:03.189 INFO:tasks.workunit.client.0.vm01.stdout:4 expected=4 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:cfc208b3:::3:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:cfc208b3:::3:head expected=14:cfc208b3:::3:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:cfc208b3:::3:head -> 3 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=3 expected=3 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:89d3ae78:::11:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:89d3ae78:::11:head expected=14:89d3ae78:::11:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:89d3ae78:::11:head -> 11 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=11 expected=11 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:863748b0:::15:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:863748b0:::15:head expected=14:863748b0:::15:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:863748b0:::15:head -> 15 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=15 expected=15 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:6cac518f:::0:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:6cac518f:::0:head expected=14:6cac518f:::0:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:6cac518f:::0:head -> 0 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=0 expected=0 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:62a1935d:::14:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:62a1935d:::14:head expected=14:62a1935d:::14:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:62a1935d:::14:head -> 14 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=14 expected=14 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:5c6b0b28:::7:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:5c6b0b28:::7:head expected=14:5c6b0b28:::7:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:5c6b0b28:::7:head -> 7 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=7 expected=7 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:566253c9:::13:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:566253c9:::13:head expected=14:566253c9:::13:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:566253c9:::13:head -> 13 2026-03-09T15:59:03.190 
INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=13 expected=13 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : seek to 14:02547ec2:::1:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : cursor()=14:02547ec2:::1:head expected=14:02547ec2:::1:head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: > 14:02547ec2:::1:head -> 1 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: : entry=1 expected=1 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ OK ] LibRadosList.ListObjectsCursor (925 ms) 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjects 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ OK ] LibRadosList.EnumerateObjects (72702 ms) 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjectsSplit 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: split 0/5 -> MIN 14:33333333::::head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: split 1/5 -> 14:33333333::::head 14:66666666::::head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: split 2/5 -> 14:66666666::::head 14:99999999::::head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: split 3/5 -> 14:99999999::::head 14:cccccccc::::head 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: split 4/5 -> 14:cccccccc::::head MAX 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ OK ] LibRadosList.EnumerateObjectsSplit (66786 ms) 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [----------] 7 tests from LibRadosList (142132 ms total) 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [----------] 3 tests from LibRadosListEC 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ RUN ] LibRadosListEC.ListObjects 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ OK ] LibRadosListEC.ListObjects (1062 ms) 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ RUN ] LibRadosListEC.ListObjectsNS 2026-03-09T15:59:03.190 INFO:tasks.workunit.client.0.vm01.stdout: api_list: myset foo1,foo2,foo3 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo1 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo2 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo3 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: myset foo1,foo4,foo5 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo4 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo5 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo1 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: myset foo6,foo7 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo7 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: foo6 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7 2026-03-09T15:59:03.191 
INFO:tasks.workunit.client.0.vm01.stdout: api_list: ns1:foo4 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: ns1:foo5 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: ns2:foo7 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: ns2:foo6 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: ns1:foo1 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: :foo1 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: :foo2 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: :foo3 2026-03-09T15:59:03.191 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ OK ] LibRadosListEC.ListObjectsNS (52 ms) 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: cluster 2026-03-09T15:59:02.716142+0000 mgr.y (mgr.14520) 223 : cluster [DBG] pgmap v281: 332 pgs: 21 creating+peering, 51 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: cluster 2026-03-09T15:59:02.716142+0000 mgr.y (mgr.14520) 223 : cluster [DBG] pgmap v281: 332 pgs: 21 creating+peering, 51 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.073286+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.073286+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.073335+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.073335+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.073373+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.073373+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: cluster 2026-03-09T15:59:03.085042+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: cluster 2026-03-09T15:59:03.085042+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.086808+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.086808+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.087218+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.087218+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.095342+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.095342+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.095422+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:03 vm01 bash[28152]: audit 2026-03-09T15:59:03.095422+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: cluster 2026-03-09T15:59:02.716142+0000 mgr.y (mgr.14520) 223 : cluster [DBG] pgmap v281: 332 pgs: 21 creating+peering, 51 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: cluster 2026-03-09T15:59:02.716142+0000 mgr.y (mgr.14520) 223 : cluster [DBG] pgmap v281: 332 pgs: 21 creating+peering, 51 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.073286+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.073286+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.073335+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.073335+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.073373+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.073373+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: cluster 2026-03-09T15:59:03.085042+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: cluster 2026-03-09T15:59:03.085042+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.086808+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.086808+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.087218+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.087218+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.095342+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.095342+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.095422+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:03 vm01 bash[20728]: audit 2026-03-09T15:59:03.095422+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: cluster 2026-03-09T15:59:02.716142+0000 mgr.y (mgr.14520) 223 : cluster [DBG] pgmap v281: 332 pgs: 21 creating+peering, 51 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: cluster 2026-03-09T15:59:02.716142+0000 mgr.y (mgr.14520) 223 : cluster [DBG] pgmap v281: 332 pgs: 21 creating+peering, 51 unknown, 260 active+clean; 456 KiB data, 773 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.073286+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.073286+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.073335+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.073335+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.073373+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.073373+0000 mon.a (mon.0) 2008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm01-59610-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: cluster 2026-03-09T15:59:03.085042+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: cluster 2026-03-09T15:59:03.085042+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.086808+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.086808+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.087218+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.087218+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.101:0/527510031' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.095342+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.095342+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]: dispatch 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.095422+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:03 vm09 bash[22983]: audit 2026-03-09T15:59:03.095422+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: cluster 2026-03-09T15:59:03.809681+0000 mon.a (mon.0) 2012 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: cluster 2026-03-09T15:59:03.809681+0000 mon.a (mon.0) 2012 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: cluster 2026-03-09T15:59:04.073871+0000 mon.a (mon.0) 2013 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: cluster 2026-03-09T15:59:04.073871+0000 mon.a (mon.0) 2013 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.077605+0000 mon.a (mon.0) 2014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]': finished 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.077605+0000 mon.a (mon.0) 2014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]': finished 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.077779+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.077779+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: cluster 2026-03-09T15:59:04.097544+0000 mon.a (mon.0) 2016 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: cluster 2026-03-09T15:59:04.097544+0000 mon.a (mon.0) 2016 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.099211+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.099211+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.099642+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.099642+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.120812+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.120812+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.121703+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.121703+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.124895+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 
192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.124895+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.125494+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.125494+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.125614+0000 mon.a (mon.0) 2020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.125614+0000 mon.a (mon.0) 2020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.127741+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.127741+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.128163+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.128163+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.128868+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? 
192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.128868+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.129305+0000 mon.a (mon.0) 2023 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:04 vm01 bash[28152]: audit 2026-03-09T15:59:04.129305+0000 mon.a (mon.0) 2023 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: cluster 2026-03-09T15:59:03.809681+0000 mon.a (mon.0) 2012 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: cluster 2026-03-09T15:59:03.809681+0000 mon.a (mon.0) 2012 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: cluster 2026-03-09T15:59:04.073871+0000 mon.a (mon.0) 2013 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: cluster 2026-03-09T15:59:04.073871+0000 mon.a (mon.0) 2013 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.077605+0000 mon.a (mon.0) 2014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]': finished 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.077605+0000 mon.a (mon.0) 2014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]': finished 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.077779+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.077779+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: cluster 2026-03-09T15:59:04.097544+0000 mon.a (mon.0) 2016 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: cluster 2026-03-09T15:59:04.097544+0000 mon.a (mon.0) 2016 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.099211+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.099211+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.099642+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.099642+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.120812+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.120812+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.121703+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.121703+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.124895+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 
192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.124895+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.125494+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.125494+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.125614+0000 mon.a (mon.0) 2020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.125614+0000 mon.a (mon.0) 2020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.127741+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.127741+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.128163+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.128163+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.128868+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? 
192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.128868+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.129305+0000 mon.a (mon.0) 2023 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:04 vm01 bash[20728]: audit 2026-03-09T15:59:04.129305+0000 mon.a (mon.0) 2023 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: cluster 2026-03-09T15:59:03.809681+0000 mon.a (mon.0) 2012 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: cluster 2026-03-09T15:59:03.809681+0000 mon.a (mon.0) 2012 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: cluster 2026-03-09T15:59:04.073871+0000 mon.a (mon.0) 2013 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: cluster 2026-03-09T15:59:04.073871+0000 mon.a (mon.0) 2013 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.077605+0000 mon.a (mon.0) 2014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]': finished 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.077605+0000 mon.a (mon.0) 2014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-27", "mode": "writeback"}]': finished 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.077779+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.077779+0000 mon.a (mon.0) 2015 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm01-59602-36"}]': finished 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: cluster 2026-03-09T15:59:04.097544+0000 mon.a (mon.0) 2016 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: cluster 2026-03-09T15:59:04.097544+0000 mon.a (mon.0) 2016 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.099211+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.099211+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.099642+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.099642+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.120812+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.120812+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.121703+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.121703+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.124895+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 
192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.124895+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.125494+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.125494+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.125614+0000 mon.a (mon.0) 2020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.125614+0000 mon.a (mon.0) 2020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.127741+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.127741+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.128163+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.128163+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.128868+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? 
192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.128868+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.129305+0000 mon.a (mon.0) 2023 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:04 vm09 bash[22983]: audit 2026-03-09T15:59:04.129305+0000 mon.a (mon.0) 2023 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: cluster 2026-03-09T15:59:04.716729+0000 mgr.y (mgr.14520) 224 : cluster [DBG] pgmap v284: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: cluster 2026-03-09T15:59:04.716729+0000 mgr.y (mgr.14520) 224 : cluster [DBG] pgmap v284: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.081155+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.081155+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.081287+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.081287+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.081430+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? 
192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.081430+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.092702+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.092702+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.094656+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.094656+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: cluster 2026-03-09T15:59:05.098351+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: cluster 2026-03-09T15:59:05.098351+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.099405+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.099405+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? 
192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.099624+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.099624+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.099759+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.099759+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.136619+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.136619+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.137099+0000 mon.a (mon.0) 2031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:05.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:05 vm01 bash[28152]: audit 2026-03-09T15:59:05.137099+0000 mon.a (mon.0) 2031 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: cluster 2026-03-09T15:59:04.716729+0000 mgr.y (mgr.14520) 224 : cluster [DBG] pgmap v284: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: cluster 2026-03-09T15:59:04.716729+0000 mgr.y (mgr.14520) 224 : cluster [DBG] pgmap v284: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.081155+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.081155+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.081287+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.081287+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.081430+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.081430+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.092702+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.092702+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 
192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.094656+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.094656+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: cluster 2026-03-09T15:59:05.098351+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: cluster 2026-03-09T15:59:05.098351+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.099405+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.099405+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.099624+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.099624+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.099759+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.099759+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.136619+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.136619+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.137099+0000 mon.a (mon.0) 2031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:05.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:05 vm01 bash[20728]: audit 2026-03-09T15:59:05.137099+0000 mon.a (mon.0) 2031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: cluster 2026-03-09T15:59:04.716729+0000 mgr.y (mgr.14520) 224 : cluster [DBG] pgmap v284: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: cluster 2026-03-09T15:59:04.716729+0000 mgr.y (mgr.14520) 224 : cluster [DBG] pgmap v284: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.081155+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.081155+0000 mon.a (mon.0) 2024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.081287+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.081287+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.081430+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? 
192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.081430+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm01-59602-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.092702+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.092702+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.094656+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.094656+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.101:0/3146544251' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: cluster 2026-03-09T15:59:05.098351+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: cluster 2026-03-09T15:59:05.098351+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.099405+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.099405+0000 mon.a (mon.0) 2028 : audit [INF] from='client.? 
192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.099624+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.099624+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.099759+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.099759+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.136619+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.136619+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.137099+0000 mon.a (mon.0) 2031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:05 vm09 bash[22983]: audit 2026-03-09T15:59:05.137099+0000 mon.a (mon.0) 2031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:06.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:59:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: audit 2026-03-09T15:59:06.094451+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: audit 2026-03-09T15:59:06.094451+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: audit 2026-03-09T15:59:06.094566+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: audit 2026-03-09T15:59:06.094566+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: audit 2026-03-09T15:59:06.101284+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: audit 2026-03-09T15:59:06.101284+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: cluster 2026-03-09T15:59:06.125725+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: cluster 2026-03-09T15:59:06.125725+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: audit 2026-03-09T15:59:06.126312+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: audit 2026-03-09T15:59:06.126312+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: audit 2026-03-09T15:59:06.363226+0000 mgr.y (mgr.14520) 225 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: audit 2026-03-09T15:59:06.363226+0000 mgr.y (mgr.14520) 225 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: cluster 2026-03-09T15:59:06.717122+0000 mgr.y (mgr.14520) 226 : cluster [DBG] pgmap v287: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:07.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:07 vm09 bash[22983]: cluster 2026-03-09T15:59:06.717122+0000 mgr.y (mgr.14520) 226 : cluster [DBG] pgmap v287: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: audit 2026-03-09T15:59:06.094451+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: audit 2026-03-09T15:59:06.094451+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: audit 2026-03-09T15:59:06.094566+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: audit 2026-03-09T15:59:06.094566+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: audit 2026-03-09T15:59:06.101284+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: audit 2026-03-09T15:59:06.101284+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: cluster 2026-03-09T15:59:06.125725+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: cluster 2026-03-09T15:59:06.125725+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: audit 2026-03-09T15:59:06.126312+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: audit 2026-03-09T15:59:06.126312+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: audit 2026-03-09T15:59:06.363226+0000 mgr.y (mgr.14520) 225 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: audit 2026-03-09T15:59:06.363226+0000 mgr.y (mgr.14520) 225 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: cluster 2026-03-09T15:59:06.717122+0000 mgr.y (mgr.14520) 226 : cluster [DBG] pgmap v287: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:07 vm01 bash[28152]: cluster 2026-03-09T15:59:06.717122+0000 mgr.y (mgr.14520) 226 : cluster [DBG] pgmap v287: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: audit 2026-03-09T15:59:06.094451+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: audit 2026-03-09T15:59:06.094451+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm01-59696-2"}]': finished 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: audit 2026-03-09T15:59:06.094566+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: audit 2026-03-09T15:59:06.094566+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: audit 2026-03-09T15:59:06.101284+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: audit 2026-03-09T15:59:06.101284+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: cluster 2026-03-09T15:59:06.125725+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: cluster 2026-03-09T15:59:06.125725+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: audit 2026-03-09T15:59:06.126312+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: audit 2026-03-09T15:59:06.126312+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: audit 2026-03-09T15:59:06.363226+0000 mgr.y (mgr.14520) 225 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: audit 2026-03-09T15:59:06.363226+0000 mgr.y (mgr.14520) 225 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: cluster 2026-03-09T15:59:06.717122+0000 mgr.y (mgr.14520) 226 : cluster [DBG] pgmap v287: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:07 vm01 bash[20728]: cluster 2026-03-09T15:59:06.717122+0000 mgr.y (mgr.14520) 226 : cluster [DBG] pgmap v287: 292 pgs: 292 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: cluster 2026-03-09T15:59:07.096326+0000 mon.a (mon.0) 2036 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: cluster 2026-03-09T15:59:07.096326+0000 mon.a (mon.0) 2036 : cluster [INF] Health check cleared: 
CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: audit 2026-03-09T15:59:07.105942+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: audit 2026-03-09T15:59:07.105942+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: audit 2026-03-09T15:59:07.106137+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: audit 2026-03-09T15:59:07.106137+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: audit 2026-03-09T15:59:07.106360+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: audit 2026-03-09T15:59:07.106360+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: cluster 2026-03-09T15:59:07.137728+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: cluster 2026-03-09T15:59:07.137728+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: audit 2026-03-09T15:59:07.145562+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: audit 2026-03-09T15:59:07.145562+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 
192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: audit 2026-03-09T15:59:07.145848+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:08 vm09 bash[22983]: audit 2026-03-09T15:59:07.145848+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: cluster 2026-03-09T15:59:07.096326+0000 mon.a (mon.0) 2036 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: cluster 2026-03-09T15:59:07.096326+0000 mon.a (mon.0) 2036 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: audit 2026-03-09T15:59:07.105942+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: audit 2026-03-09T15:59:07.105942+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: audit 2026-03-09T15:59:07.106137+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: audit 2026-03-09T15:59:07.106137+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: audit 2026-03-09T15:59:07.106360+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: audit 2026-03-09T15:59:07.106360+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: cluster 2026-03-09T15:59:07.137728+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: cluster 2026-03-09T15:59:07.137728+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: audit 2026-03-09T15:59:07.145562+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: audit 2026-03-09T15:59:07.145562+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: audit 2026-03-09T15:59:07.145848+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:08 vm01 bash[28152]: audit 2026-03-09T15:59:07.145848+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: cluster 2026-03-09T15:59:07.096326+0000 mon.a (mon.0) 2036 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: cluster 2026-03-09T15:59:07.096326+0000 mon.a (mon.0) 2036 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: audit 2026-03-09T15:59:07.105942+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: audit 2026-03-09T15:59:07.105942+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? 
192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm01-59602-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: audit 2026-03-09T15:59:07.106137+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: audit 2026-03-09T15:59:07.106137+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: audit 2026-03-09T15:59:07.106360+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: audit 2026-03-09T15:59:07.106360+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-27"}]': finished 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: cluster 2026-03-09T15:59:07.137728+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: cluster 2026-03-09T15:59:07.137728+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: audit 2026-03-09T15:59:07.145562+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: audit 2026-03-09T15:59:07.145562+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: audit 2026-03-09T15:59:07.145848+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:08 vm01 bash[20728]: audit 2026-03-09T15:59:07.145848+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.152 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ RUN ] LibRadosListEC.ListObjectsStart 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 1 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 10 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 13 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 7 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 14 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 0 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 15 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 11 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 5 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 8 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 6 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 3 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 4 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 12 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 9 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 2 0 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ OK ] LibRadosListEC.ListObjectsStart (60 ms) 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [----------] 3 tests from LibRadosListEC (1174 ms total) 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [----------] 1 test from LibRadosListNP 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ RUN ] LibRadosListNP.ListObjectsError 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ OK ] LibRadosListNP.ListObjectsError (3048 ms) 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [----------] 1 test from LibRadosListNP (3048 ms total) 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [----------] Global test environment tear-down 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [==========] 11 tests from 3 test suites ran. (155030 ms total) 2026-03-09T15:59:09.153 INFO:tasks.workunit.client.0.vm01.stdout: api_list: [ PASSED ] 11 tests. 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:08.139609+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:08.139609+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: cluster 2026-03-09T15:59:08.165214+0000 mon.a (mon.0) 2043 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: cluster 2026-03-09T15:59:08.165214+0000 mon.a (mon.0) 2043 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:08.211389+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:08.211389+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:08.212983+0000 mon.a (mon.0) 2044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:08.212983+0000 mon.a (mon.0) 2044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: cluster 2026-03-09T15:59:08.717505+0000 mgr.y (mgr.14520) 227 : cluster [DBG] pgmap v290: 308 pgs: 39 unknown, 269 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: cluster 2026-03-09T15:59:08.717505+0000 mgr.y (mgr.14520) 227 : cluster [DBG] pgmap v290: 308 pgs: 39 unknown, 269 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: cluster 2026-03-09T15:59:08.810254+0000 mon.a (mon.0) 2045 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: cluster 2026-03-09T15:59:08.810254+0000 mon.a (mon.0) 2045 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.143154+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.143154+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: cluster 2026-03-09T15:59:09.147752+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: cluster 2026-03-09T15:59:09.147752+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.150405+0000 mon.a (mon.0) 2048 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.150405+0000 mon.a (mon.0) 2048 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.151883+0000 mon.c (mon.2) 225 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.151883+0000 mon.c (mon.2) 225 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.157650+0000 mon.c (mon.2) 226 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.157650+0000 mon.c (mon.2) 226 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.164976+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.164976+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.165278+0000 mon.a (mon.0) 2050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:09 vm09 bash[22983]: audit 2026-03-09T15:59:09.165278+0000 mon.a (mon.0) 2050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:08.139609+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:08.139609+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: cluster 2026-03-09T15:59:08.165214+0000 mon.a (mon.0) 2043 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: cluster 2026-03-09T15:59:08.165214+0000 mon.a (mon.0) 2043 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:08.211389+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:08.211389+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:08.212983+0000 mon.a (mon.0) 2044 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:08.212983+0000 mon.a (mon.0) 2044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: cluster 2026-03-09T15:59:08.717505+0000 mgr.y (mgr.14520) 227 : cluster [DBG] pgmap v290: 308 pgs: 39 unknown, 269 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: cluster 2026-03-09T15:59:08.717505+0000 mgr.y (mgr.14520) 227 : cluster [DBG] pgmap v290: 308 pgs: 39 unknown, 269 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: cluster 2026-03-09T15:59:08.810254+0000 mon.a (mon.0) 2045 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: cluster 2026-03-09T15:59:08.810254+0000 mon.a (mon.0) 2045 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.143154+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.143154+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: cluster 2026-03-09T15:59:09.147752+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: cluster 2026-03-09T15:59:09.147752+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.150405+0000 mon.a (mon.0) 2048 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.150405+0000 mon.a (mon.0) 2048 : audit [INF] from='client.? 
192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.151883+0000 mon.c (mon.2) 225 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.151883+0000 mon.c (mon.2) 225 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.157650+0000 mon.c (mon.2) 226 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.157650+0000 mon.c (mon.2) 226 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.164976+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.164976+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.165278+0000 mon.a (mon.0) 2050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:09 vm01 bash[28152]: audit 2026-03-09T15:59:09.165278+0000 mon.a (mon.0) 2050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:08.139609+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:08.139609+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59696-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: cluster 2026-03-09T15:59:08.165214+0000 mon.a (mon.0) 2043 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: cluster 2026-03-09T15:59:08.165214+0000 mon.a (mon.0) 2043 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:08.211389+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:08.211389+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.101:0/3501548045' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:08.212983+0000 mon.a (mon.0) 2044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:08.212983+0000 mon.a (mon.0) 2044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: cluster 2026-03-09T15:59:08.717505+0000 mgr.y (mgr.14520) 227 : cluster [DBG] pgmap v290: 308 pgs: 39 unknown, 269 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: cluster 2026-03-09T15:59:08.717505+0000 mgr.y (mgr.14520) 227 : cluster [DBG] pgmap v290: 308 pgs: 39 unknown, 269 active+clean; 456 KiB data, 778 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: cluster 2026-03-09T15:59:08.810254+0000 mon.a (mon.0) 2045 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: cluster 2026-03-09T15:59:08.810254+0000 mon.a (mon.0) 2045 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.143154+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.143154+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm01-59696-3","pool2":"test-rados-api-vm01-59696-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T15:59:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: cluster 2026-03-09T15:59:09.147752+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T15:59:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: cluster 2026-03-09T15:59:09.147752+0000 mon.a (mon.0) 2047 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T15:59:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.150405+0000 mon.a (mon.0) 2048 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.150405+0000 mon.a (mon.0) 2048 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.151883+0000 mon.c (mon.2) 225 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.151883+0000 mon.c (mon.2) 225 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.157650+0000 mon.c (mon.2) 226 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.157650+0000 mon.c (mon.2) 226 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.164976+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.164976+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.165278+0000 mon.a (mon.0) 2050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:09 vm01 bash[20728]: audit 2026-03-09T15:59:09.165278+0000 mon.a (mon.0) 2050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.147232+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.147232+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.147464+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.147464+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.147680+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.147680+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: cluster 2026-03-09T15:59:10.152610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: cluster 2026-03-09T15:59:10.152610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.153770+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.153770+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.169304+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.169304+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.169667+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: audit 2026-03-09T15:59:10.169667+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: cluster 2026-03-09T15:59:10.717946+0000 mgr.y (mgr.14520) 228 : cluster [DBG] pgmap v293: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:11 vm01 bash[28152]: cluster 2026-03-09T15:59:10.717946+0000 mgr.y (mgr.14520) 228 : cluster [DBG] pgmap v293: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.147232+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.147232+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 
192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.147464+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.147464+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.147680+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.147680+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: cluster 2026-03-09T15:59:10.152610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: cluster 2026-03-09T15:59:10.152610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.153770+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.153770+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.169304+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.169304+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.169667+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: audit 2026-03-09T15:59:10.169667+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: cluster 2026-03-09T15:59:10.717946+0000 mgr.y (mgr.14520) 228 : cluster [DBG] pgmap v293: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:11.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:11 vm01 bash[20728]: cluster 2026-03-09T15:59:10.717946+0000 mgr.y (mgr.14520) 228 : cluster [DBG] pgmap v293: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.147232+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.147232+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.147464+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.147464+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.147680+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.147680+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: cluster 2026-03-09T15:59:10.152610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: cluster 2026-03-09T15:59:10.152610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.153770+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.153770+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]: dispatch 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.169304+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.169304+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.101:0/1778606021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.169667+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: audit 2026-03-09T15:59:10.169667+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]: dispatch 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: cluster 2026-03-09T15:59:10.717946+0000 mgr.y (mgr.14520) 228 : cluster [DBG] pgmap v293: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:11 vm09 bash[22983]: cluster 2026-03-09T15:59:10.717946+0000 mgr.y (mgr.14520) 228 : cluster [DBG] pgmap v293: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.156798+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.156798+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 
192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.157180+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.157180+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: cluster 2026-03-09T15:59:11.180883+0000 mon.a (mon.0) 2059 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: cluster 2026-03-09T15:59:11.180883+0000 mon.a (mon.0) 2059 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.190349+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.190349+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.191969+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.191969+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.192401+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.192401+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.193489+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 
192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.193489+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.194275+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.194275+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.194284+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.194284+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.195038+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.195038+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.195689+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.195689+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.196106+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.196106+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.197248+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.197248+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.197981+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.197981+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.198755+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:12 vm01 bash[28152]: audit 2026-03-09T15:59:11.198755+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.156798+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.156798+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.157180+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.157180+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: cluster 2026-03-09T15:59:11.180883+0000 mon.a (mon.0) 2059 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: cluster 2026-03-09T15:59:11.180883+0000 mon.a (mon.0) 2059 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.190349+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.190349+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.191969+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.191969+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.192401+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.192401+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.193489+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.193489+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 
192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.194275+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.194275+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.194284+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.194284+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.195038+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.195038+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.195689+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.195689+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.196106+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.196106+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.197248+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.197248+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.197981+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.197981+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.198755+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:12 vm01 bash[20728]: audit 2026-03-09T15:59:11.198755+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.156798+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.156798+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 192.168.123.101:0/219552525' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm01-59602-37"}]': finished 2026-03-09T15:59:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.157180+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.157180+0000 mon.a (mon.0) 2058 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-46"}]': finished 2026-03-09T15:59:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: cluster 2026-03-09T15:59:11.180883+0000 mon.a (mon.0) 2059 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T15:59:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: cluster 2026-03-09T15:59:11.180883+0000 mon.a (mon.0) 2059 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T15:59:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.190349+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.190349+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.191969+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.191969+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.192401+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.192401+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.193489+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.193489+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.194275+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.194275+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.194284+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.194284+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.195038+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.195038+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.195689+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.195689+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.196106+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.196106+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.197248+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.197248+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.197981+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.197981+0000 mon.a (mon.0) 2064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.198755+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:12.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:12 vm09 bash[22983]: audit 2026-03-09T15:59:11.198755+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:13.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:59:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:59:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.182366+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.182366+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.182644+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.182644+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.186492+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.186492+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.186724+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.186724+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: cluster 2026-03-09T15:59:12.193266+0000 mon.a (mon.0) 2068 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: cluster 2026-03-09T15:59:12.193266+0000 mon.a (mon.0) 2068 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.194306+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.194306+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.194539+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.194539+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.202231+0000 mon.c (mon.2) 228 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.202231+0000 mon.c (mon.2) 228 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.202422+0000 mon.a (mon.0) 2071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: audit 2026-03-09T15:59:12.202422+0000 mon.a (mon.0) 2071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: cluster 2026-03-09T15:59:12.718288+0000 mgr.y (mgr.14520) 229 : cluster [DBG] pgmap v296: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:13 vm09 bash[22983]: cluster 2026-03-09T15:59:12.718288+0000 mgr.y (mgr.14520) 229 : cluster [DBG] pgmap v296: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.182366+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.182366+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.182644+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.182644+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.186492+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.186492+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.186724+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.186724+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: cluster 2026-03-09T15:59:12.193266+0000 mon.a (mon.0) 2068 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: cluster 2026-03-09T15:59:12.193266+0000 mon.a (mon.0) 2068 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.194306+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.194306+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.194539+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.182366+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.182366+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm01-59602-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.182644+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.182644+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm01-59610-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.186492+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.186492+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.186724+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 
192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.186724+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: cluster 2026-03-09T15:59:12.193266+0000 mon.a (mon.0) 2068 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: cluster 2026-03-09T15:59:12.193266+0000 mon.a (mon.0) 2068 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.194306+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.194306+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.194539+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.194539+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.202231+0000 mon.c (mon.2) 228 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.202231+0000 mon.c (mon.2) 228 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.202422+0000 mon.a (mon.0) 2071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: audit 2026-03-09T15:59:12.202422+0000 mon.a (mon.0) 2071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: cluster 2026-03-09T15:59:12.718288+0000 mgr.y (mgr.14520) 229 : cluster [DBG] pgmap v296: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:13 vm01 bash[28152]: cluster 2026-03-09T15:59:12.718288+0000 mgr.y (mgr.14520) 229 : cluster [DBG] pgmap v296: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.194539+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.202231+0000 mon.c (mon.2) 228 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.202231+0000 mon.c (mon.2) 228 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.202422+0000 mon.a (mon.0) 2071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: audit 2026-03-09T15:59:12.202422+0000 mon.a (mon.0) 2071 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:13.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: cluster 2026-03-09T15:59:12.718288+0000 mgr.y (mgr.14520) 229 : cluster [DBG] pgmap v296: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:13.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:13 vm01 bash[20728]: cluster 2026-03-09T15:59:12.718288+0000 mgr.y (mgr.14520) 229 : cluster [DBG] pgmap v296: 292 pgs: 32 unknown, 260 active+clean; 456 KiB data, 779 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.186054+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.186054+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.189804+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.189804+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: cluster 2026-03-09T15:59:13.209528+0000 mon.a (mon.0) 2073 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: cluster 2026-03-09T15:59:13.209528+0000 mon.a (mon.0) 2073 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.211026+0000 mon.a (mon.0) 2074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.211026+0000 mon.a (mon.0) 2074 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.815010+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.815010+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.815173+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.815173+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.815213+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.815213+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.819209+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.819209+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: cluster 2026-03-09T15:59:13.820527+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: cluster 2026-03-09T15:59:13.820527+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.826209+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:13.826209+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:14.123680+0000 mon.a (mon.0) 2080 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:14 vm09 bash[22983]: audit 2026-03-09T15:59:14.123680+0000 mon.a (mon.0) 2080 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.186054+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.186054+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.189804+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.189804+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: cluster 2026-03-09T15:59:13.209528+0000 mon.a (mon.0) 2073 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: cluster 2026-03-09T15:59:13.209528+0000 mon.a (mon.0) 2073 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.211026+0000 mon.a (mon.0) 2074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.211026+0000 mon.a (mon.0) 2074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.815010+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.815010+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.815173+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.815173+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.815213+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.815213+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.819209+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.819209+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: cluster 2026-03-09T15:59:13.820527+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: cluster 2026-03-09T15:59:13.820527+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.826209+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:13.826209+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:14.123680+0000 mon.a (mon.0) 2080 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:14 vm01 bash[28152]: audit 2026-03-09T15:59:14.123680+0000 mon.a (mon.0) 2080 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.186054+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.186054+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.189804+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.189804+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: cluster 2026-03-09T15:59:13.209528+0000 mon.a (mon.0) 2073 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: cluster 2026-03-09T15:59:13.209528+0000 mon.a (mon.0) 2073 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.211026+0000 mon.a (mon.0) 2074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.211026+0000 mon.a (mon.0) 2074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.815010+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.815010+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm01-59602-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.815173+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.815173+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm01-59610-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.815213+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.815213+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.819209+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.819209+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: cluster 2026-03-09T15:59:13.820527+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: cluster 2026-03-09T15:59:13.820527+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.826209+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:13.826209+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]: dispatch 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:14.123680+0000 mon.a (mon.0) 2080 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:14.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:14 vm01 bash[20728]: audit 2026-03-09T15:59:14.123680+0000 mon.a (mon.0) 2080 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:15 vm09 bash[22983]: cluster 2026-03-09T15:59:14.718868+0000 mgr.y (mgr.14520) 230 : cluster [DBG] pgmap v299: 308 pgs: 8 creating+peering, 8 unknown, 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:15 vm09 bash[22983]: cluster 2026-03-09T15:59:14.718868+0000 mgr.y (mgr.14520) 230 : cluster [DBG] pgmap v299: 308 pgs: 8 creating+peering, 8 unknown, 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:15 vm09 bash[22983]: cluster 2026-03-09T15:59:14.815099+0000 mon.a (mon.0) 2081 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:15 vm09 bash[22983]: cluster 2026-03-09T15:59:14.815099+0000 mon.a (mon.0) 2081 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:15 vm09 bash[22983]: audit 2026-03-09T15:59:14.818566+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]': finished 2026-03-09T15:59:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:15 vm09 bash[22983]: audit 2026-03-09T15:59:14.818566+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]': finished 2026-03-09T15:59:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:15 vm09 bash[22983]: cluster 2026-03-09T15:59:14.824193+0000 mon.a (mon.0) 2083 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T15:59:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:15 vm09 bash[22983]: cluster 2026-03-09T15:59:14.824193+0000 mon.a (mon.0) 2083 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:15 vm01 bash[28152]: cluster 2026-03-09T15:59:14.718868+0000 mgr.y (mgr.14520) 230 : cluster [DBG] pgmap v299: 308 pgs: 8 creating+peering, 8 unknown, 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:15 vm01 bash[28152]: cluster 2026-03-09T15:59:14.718868+0000 mgr.y (mgr.14520) 230 : cluster [DBG] pgmap v299: 308 pgs: 8 creating+peering, 8 unknown, 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:15 vm01 bash[28152]: cluster 2026-03-09T15:59:14.815099+0000 mon.a (mon.0) 2081 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:15 vm01 bash[28152]: cluster 2026-03-09T15:59:14.815099+0000 mon.a (mon.0) 2081 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:15 vm01 bash[28152]: audit 2026-03-09T15:59:14.818566+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]': finished 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:15 vm01 bash[28152]: audit 2026-03-09T15:59:14.818566+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]': finished 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:15 vm01 bash[28152]: cluster 2026-03-09T15:59:14.824193+0000 mon.a (mon.0) 2083 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:15 vm01 bash[28152]: cluster 2026-03-09T15:59:14.824193+0000 mon.a (mon.0) 2083 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:15 vm01 bash[20728]: cluster 2026-03-09T15:59:14.718868+0000 mgr.y (mgr.14520) 230 : cluster [DBG] pgmap v299: 308 pgs: 8 creating+peering, 8 unknown, 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:15 vm01 bash[20728]: cluster 2026-03-09T15:59:14.718868+0000 mgr.y (mgr.14520) 230 : cluster [DBG] pgmap v299: 308 pgs: 8 creating+peering, 8 unknown, 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:15 vm01 bash[20728]: cluster 2026-03-09T15:59:14.815099+0000 mon.a (mon.0) 2081 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:15 vm01 bash[20728]: cluster 2026-03-09T15:59:14.815099+0000 mon.a (mon.0) 2081 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:15 vm01 bash[20728]: audit 2026-03-09T15:59:14.818566+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]': finished 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:15 vm01 bash[20728]: audit 2026-03-09T15:59:14.818566+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-29", "mode": "writeback"}]': finished 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:15 vm01 bash[20728]: cluster 2026-03-09T15:59:14.824193+0000 mon.a (mon.0) 2083 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T15:59:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:15 vm01 bash[20728]: cluster 2026-03-09T15:59:14.824193+0000 mon.a (mon.0) 2083 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T15:59:16.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:59:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: cluster 2026-03-09T15:59:15.215158+0000 mon.a (mon.0) 2084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: cluster 2026-03-09T15:59:15.215158+0000 mon.a (mon.0) 2084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: cluster 2026-03-09T15:59:15.853366+0000 mon.a (mon.0) 2085 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: cluster 2026-03-09T15:59:15.853366+0000 mon.a (mon.0) 2085 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: audit 2026-03-09T15:59:15.853828+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: audit 2026-03-09T15:59:15.853828+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: audit 2026-03-09T15:59:15.853934+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: audit 2026-03-09T15:59:15.853934+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: audit 2026-03-09T15:59:15.861575+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: audit 2026-03-09T15:59:15.861575+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: audit 2026-03-09T15:59:15.861792+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:16.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:16 vm09 bash[22983]: audit 2026-03-09T15:59:15.861792+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: cluster 2026-03-09T15:59:15.215158+0000 mon.a (mon.0) 2084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: cluster 2026-03-09T15:59:15.215158+0000 mon.a (mon.0) 2084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: cluster 2026-03-09T15:59:15.853366+0000 mon.a (mon.0) 2085 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: cluster 2026-03-09T15:59:15.853366+0000 mon.a (mon.0) 2085 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: audit 2026-03-09T15:59:15.853828+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: audit 2026-03-09T15:59:15.853828+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: audit 2026-03-09T15:59:15.853934+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: audit 2026-03-09T15:59:15.853934+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: audit 2026-03-09T15:59:15.861575+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: audit 2026-03-09T15:59:15.861575+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: audit 2026-03-09T15:59:15.861792+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:16 vm01 bash[28152]: audit 2026-03-09T15:59:15.861792+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: cluster 2026-03-09T15:59:15.215158+0000 mon.a (mon.0) 2084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: cluster 2026-03-09T15:59:15.215158+0000 mon.a (mon.0) 2084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: cluster 2026-03-09T15:59:15.853366+0000 mon.a (mon.0) 2085 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: cluster 2026-03-09T15:59:15.853366+0000 mon.a (mon.0) 2085 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: audit 2026-03-09T15:59:15.853828+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: audit 2026-03-09T15:59:15.853828+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: audit 2026-03-09T15:59:15.853934+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: audit 2026-03-09T15:59:15.853934+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 
192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: audit 2026-03-09T15:59:15.861575+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: audit 2026-03-09T15:59:15.861575+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: audit 2026-03-09T15:59:15.861792+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:16 vm01 bash[20728]: audit 2026-03-09T15:59:15.861792+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.368149+0000 mgr.y (mgr.14520) 231 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.368149+0000 mgr.y (mgr.14520) 231 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: cluster 2026-03-09T15:59:16.719187+0000 mgr.y (mgr.14520) 232 : cluster [DBG] pgmap v302: 292 pgs: 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: cluster 2026-03-09T15:59:16.719187+0000 mgr.y (mgr.14520) 232 : cluster [DBG] pgmap v302: 292 pgs: 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.886644+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.886644+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.886879+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 
192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.886879+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.886892+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.886892+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.887248+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.887248+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: cluster 2026-03-09T15:59:16.895980+0000 mon.a (mon.0) 2090 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: cluster 2026-03-09T15:59:16.895980+0000 mon.a (mon.0) 2090 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.897151+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.897151+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.897298+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:17 vm01 bash[28152]: audit 2026-03-09T15:59:16.897298+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.368149+0000 mgr.y (mgr.14520) 231 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.368149+0000 mgr.y (mgr.14520) 231 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: cluster 2026-03-09T15:59:16.719187+0000 mgr.y (mgr.14520) 232 : cluster [DBG] pgmap v302: 292 pgs: 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: cluster 2026-03-09T15:59:16.719187+0000 mgr.y (mgr.14520) 232 : cluster [DBG] pgmap v302: 292 pgs: 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.886644+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.886644+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.886879+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.886879+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.886892+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.886892+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.887248+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 
192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.887248+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: cluster 2026-03-09T15:59:16.895980+0000 mon.a (mon.0) 2090 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: cluster 2026-03-09T15:59:16.895980+0000 mon.a (mon.0) 2090 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.897151+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.897151+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.897298+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:18.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:17 vm01 bash[20728]: audit 2026-03-09T15:59:16.897298+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.368149+0000 mgr.y (mgr.14520) 231 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.368149+0000 mgr.y (mgr.14520) 231 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: cluster 2026-03-09T15:59:16.719187+0000 mgr.y (mgr.14520) 232 : cluster [DBG] pgmap v302: 292 pgs: 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: cluster 2026-03-09T15:59:16.719187+0000 mgr.y (mgr.14520) 232 : cluster [DBG] pgmap v302: 292 pgs: 292 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.886644+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.886644+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.886879+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.886879+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.101:0/3957054855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.886892+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.886892+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.887248+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.887248+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.101:0/3461887675' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: cluster 2026-03-09T15:59:16.895980+0000 mon.a (mon.0) 2090 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: cluster 2026-03-09T15:59:16.895980+0000 mon.a (mon.0) 2090 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.897151+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.897151+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]: dispatch 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.897298+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:17 vm09 bash[22983]: audit 2026-03-09T15:59:16.897298+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.890822+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.890822+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.890984+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.890984+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: cluster 2026-03-09T15:59:17.897421+0000 mon.a (mon.0) 2095 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: cluster 2026-03-09T15:59:17.897421+0000 mon.a (mon.0) 2095 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.922809+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.922809+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.934398+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.934398+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.944354+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 
192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.944354+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.944473+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.944473+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.945578+0000 mon.a (mon.0) 2096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.945578+0000 mon.a (mon.0) 2096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.946390+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.946390+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.946729+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.946729+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.946881+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 
192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.946881+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.948430+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.948430+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.948720+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.948720+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.950701+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.950701+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.951258+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.951258+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.996219+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.996219+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.996842+0000 mon.a (mon.0) 2102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:19.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:19 vm09 bash[22983]: audit 2026-03-09T15:59:17.996842+0000 mon.a (mon.0) 2102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.890822+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.890822+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.890984+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.890984+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: cluster 2026-03-09T15:59:17.897421+0000 mon.a (mon.0) 2095 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: cluster 2026-03-09T15:59:17.897421+0000 mon.a (mon.0) 2095 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.922809+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.922809+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.934398+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 
192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.934398+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.944354+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.944354+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.944473+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.944473+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.945578+0000 mon.a (mon.0) 2096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.945578+0000 mon.a (mon.0) 2096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.946390+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.946390+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.946729+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 
192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.946729+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.946881+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.946881+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.948430+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.948430+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.948720+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.948720+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.950701+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.950701+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.951258+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.951258+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.996219+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.996219+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.996842+0000 mon.a (mon.0) 2102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:19 vm01 bash[28152]: audit 2026-03-09T15:59:17.996842+0000 mon.a (mon.0) 2102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.890822+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.890822+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm01-59610-47"}]': finished 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.890984+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.890984+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm01-59602-38"}]': finished 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: cluster 2026-03-09T15:59:17.897421+0000 mon.a (mon.0) 2095 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: cluster 2026-03-09T15:59:17.897421+0000 mon.a (mon.0) 2095 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.922809+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.922809+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.934398+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.934398+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.944354+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.944354+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.944473+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.944473+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.945578+0000 mon.a (mon.0) 2096 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.945578+0000 mon.a (mon.0) 2096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.946390+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.946390+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.946729+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.946729+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.946881+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.946881+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.948430+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.948430+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.948720+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.948720+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.950701+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.950701+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.951258+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.951258+0000 mon.a (mon.0) 2101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.996219+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.996219+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.996842+0000 mon.a (mon.0) 2102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:19.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:19 vm01 bash[20728]: audit 2026-03-09T15:59:17.996842+0000 mon.a (mon.0) 2102 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: cluster 2026-03-09T15:59:18.719525+0000 mgr.y (mgr.14520) 233 : cluster [DBG] pgmap v305: 292 pgs: 292 active+clean; 4.4 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: cluster 2026-03-09T15:59:18.719525+0000 mgr.y (mgr.14520) 233 : cluster [DBG] pgmap v305: 292 pgs: 292 active+clean; 4.4 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.014169+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.014169+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.014265+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.014265+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.014313+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.014313+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.021200+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.021200+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.021660+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.021660+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.023162+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.023162+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: cluster 2026-03-09T15:59:19.031495+0000 mon.a (mon.0) 2106 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: cluster 2026-03-09T15:59:19.031495+0000 mon.a (mon.0) 2106 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.032309+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.032309+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.032495+0000 mon.a (mon.0) 2108 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.032495+0000 mon.a (mon.0) 2108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.032565+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:20 vm09 bash[22983]: audit 2026-03-09T15:59:19.032565+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: cluster 2026-03-09T15:59:18.719525+0000 mgr.y (mgr.14520) 233 : cluster [DBG] pgmap v305: 292 pgs: 292 active+clean; 4.4 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: cluster 2026-03-09T15:59:18.719525+0000 mgr.y (mgr.14520) 233 : cluster [DBG] pgmap v305: 292 pgs: 292 active+clean; 4.4 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.014169+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.014169+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.014265+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.014265+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.014313+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.014313+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.021200+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.021200+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.021660+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.021660+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.023162+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.023162+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: cluster 2026-03-09T15:59:19.031495+0000 mon.a (mon.0) 2106 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: cluster 2026-03-09T15:59:19.031495+0000 mon.a (mon.0) 2106 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.032309+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.032309+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.032495+0000 mon.a (mon.0) 2108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.032495+0000 mon.a (mon.0) 2108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.032565+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:20 vm01 bash[28152]: audit 2026-03-09T15:59:19.032565+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: cluster 2026-03-09T15:59:18.719525+0000 mgr.y (mgr.14520) 233 : cluster [DBG] pgmap v305: 292 pgs: 292 active+clean; 4.4 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: cluster 2026-03-09T15:59:18.719525+0000 mgr.y (mgr.14520) 233 : cluster [DBG] pgmap v305: 292 pgs: 292 active+clean; 4.4 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.014169+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.014169+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm01-59610-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.014265+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.014265+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm01-59602-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.014313+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.014313+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.021200+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.021200+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.021660+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.021660+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.023162+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 
192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.023162+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: cluster 2026-03-09T15:59:19.031495+0000 mon.a (mon.0) 2106 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: cluster 2026-03-09T15:59:19.031495+0000 mon.a (mon.0) 2106 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.032309+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.032309+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.032495+0000 mon.a (mon.0) 2108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.032495+0000 mon.a (mon.0) 2108 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]: dispatch 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.032565+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:20.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:20 vm01 bash[20728]: audit 2026-03-09T15:59:19.032565+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: cluster 2026-03-09T15:59:20.014765+0000 mon.a (mon.0) 2110 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: cluster 2026-03-09T15:59:20.014765+0000 mon.a (mon.0) 2110 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: audit 2026-03-09T15:59:20.017954+0000 mon.a (mon.0) 2111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: audit 2026-03-09T15:59:20.017954+0000 mon.a (mon.0) 2111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: cluster 2026-03-09T15:59:20.038569+0000 mon.a (mon.0) 2112 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: cluster 2026-03-09T15:59:20.038569+0000 mon.a (mon.0) 2112 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: cluster 2026-03-09T15:59:20.719812+0000 mgr.y (mgr.14520) 234 : cluster [DBG] pgmap v308: 292 pgs: 292 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 1024 KiB/s wr, 27 op/s 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: cluster 2026-03-09T15:59:20.719812+0000 mgr.y (mgr.14520) 234 : cluster [DBG] pgmap v308: 292 pgs: 292 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 1024 KiB/s wr, 27 op/s 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: audit 2026-03-09T15:59:21.040701+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: audit 2026-03-09T15:59:21.040701+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: audit 2026-03-09T15:59:21.040861+0000 mon.a (mon.0) 2114 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: audit 2026-03-09T15:59:21.040861+0000 mon.a (mon.0) 2114 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: cluster 2026-03-09T15:59:21.060367+0000 mon.a (mon.0) 2115 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T15:59:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:21 vm09 bash[22983]: cluster 2026-03-09T15:59:21.060367+0000 mon.a (mon.0) 2115 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T15:59:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: cluster 2026-03-09T15:59:20.014765+0000 mon.a (mon.0) 2110 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: cluster 2026-03-09T15:59:20.014765+0000 mon.a (mon.0) 2110 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: audit 2026-03-09T15:59:20.017954+0000 mon.a (mon.0) 2111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: audit 2026-03-09T15:59:20.017954+0000 mon.a (mon.0) 2111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: cluster 2026-03-09T15:59:20.038569+0000 mon.a (mon.0) 2112 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: cluster 2026-03-09T15:59:20.038569+0000 mon.a (mon.0) 2112 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: cluster 2026-03-09T15:59:20.719812+0000 mgr.y (mgr.14520) 234 : cluster [DBG] pgmap v308: 292 pgs: 292 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 1024 KiB/s wr, 27 op/s 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: cluster 2026-03-09T15:59:20.719812+0000 mgr.y (mgr.14520) 234 : cluster [DBG] pgmap v308: 292 pgs: 292 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 1024 KiB/s wr, 27 op/s 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: audit 2026-03-09T15:59:21.040701+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: audit 2026-03-09T15:59:21.040701+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: audit 2026-03-09T15:59:21.040861+0000 mon.a (mon.0) 2114 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: audit 2026-03-09T15:59:21.040861+0000 mon.a (mon.0) 2114 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: cluster 2026-03-09T15:59:21.060367+0000 mon.a (mon.0) 2115 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:21 vm01 bash[28152]: cluster 2026-03-09T15:59:21.060367+0000 mon.a (mon.0) 2115 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: cluster 2026-03-09T15:59:20.014765+0000 mon.a (mon.0) 2110 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: cluster 2026-03-09T15:59:20.014765+0000 mon.a (mon.0) 2110 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: audit 2026-03-09T15:59:20.017954+0000 mon.a (mon.0) 2111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: audit 2026-03-09T15:59:20.017954+0000 mon.a (mon.0) 2111 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-29"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: cluster 2026-03-09T15:59:20.038569+0000 mon.a (mon.0) 2112 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: cluster 2026-03-09T15:59:20.038569+0000 mon.a (mon.0) 2112 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: cluster 2026-03-09T15:59:20.719812+0000 mgr.y (mgr.14520) 234 : cluster [DBG] pgmap v308: 292 pgs: 292 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 1024 KiB/s wr, 27 op/s 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: cluster 2026-03-09T15:59:20.719812+0000 mgr.y (mgr.14520) 234 : cluster [DBG] pgmap v308: 292 pgs: 292 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 1024 KiB/s wr, 27 op/s 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: audit 2026-03-09T15:59:21.040701+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: audit 2026-03-09T15:59:21.040701+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm01-59602-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: audit 2026-03-09T15:59:21.040861+0000 mon.a (mon.0) 2114 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: audit 2026-03-09T15:59:21.040861+0000 mon.a (mon.0) 2114 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm01-59610-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: cluster 2026-03-09T15:59:21.060367+0000 mon.a (mon.0) 2115 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T15:59:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:21 vm01 bash[20728]: cluster 2026-03-09T15:59:21.060367+0000 mon.a (mon.0) 2115 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T15:59:23.138 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:59:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:59:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:23 vm01 bash[20728]: audit 2026-03-09T15:59:22.114063+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:23 vm01 bash[20728]: audit 2026-03-09T15:59:22.114063+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:23 vm01 bash[20728]: cluster 2026-03-09T15:59:22.118436+0000 mon.a (mon.0) 2116 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:23 vm01 bash[20728]: cluster 2026-03-09T15:59:22.118436+0000 mon.a (mon.0) 2116 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:23 vm01 bash[20728]: audit 2026-03-09T15:59:22.119837+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:23 vm01 bash[20728]: audit 2026-03-09T15:59:22.119837+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:23 vm01 bash[20728]: cluster 2026-03-09T15:59:22.720112+0000 mgr.y (mgr.14520) 235 : cluster [DBG] pgmap v311: 308 pgs: 48 unknown, 260 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:23 vm01 bash[20728]: cluster 2026-03-09T15:59:22.720112+0000 mgr.y (mgr.14520) 235 : cluster [DBG] pgmap v311: 308 pgs: 48 unknown, 260 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:23 vm01 bash[28152]: audit 2026-03-09T15:59:22.114063+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:23 vm01 bash[28152]: audit 2026-03-09T15:59:22.114063+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:23 vm01 bash[28152]: cluster 2026-03-09T15:59:22.118436+0000 mon.a (mon.0) 2116 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:23 vm01 bash[28152]: cluster 2026-03-09T15:59:22.118436+0000 mon.a (mon.0) 2116 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:23 vm01 bash[28152]: audit 2026-03-09T15:59:22.119837+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:23 vm01 bash[28152]: audit 2026-03-09T15:59:22.119837+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:23 vm01 bash[28152]: cluster 2026-03-09T15:59:22.720112+0000 mgr.y (mgr.14520) 235 : cluster [DBG] pgmap v311: 308 pgs: 48 unknown, 260 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-09T15:59:23.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:23 vm01 bash[28152]: cluster 2026-03-09T15:59:22.720112+0000 mgr.y (mgr.14520) 235 : cluster [DBG] pgmap v311: 308 pgs: 48 unknown, 260 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-09T15:59:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:23 vm09 bash[22983]: audit 2026-03-09T15:59:22.114063+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:23 vm09 bash[22983]: audit 2026-03-09T15:59:22.114063+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:23 vm09 bash[22983]: cluster 2026-03-09T15:59:22.118436+0000 mon.a (mon.0) 2116 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T15:59:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:23 vm09 bash[22983]: cluster 2026-03-09T15:59:22.118436+0000 mon.a (mon.0) 2116 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T15:59:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:23 vm09 bash[22983]: audit 2026-03-09T15:59:22.119837+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:23 vm09 bash[22983]: audit 2026-03-09T15:59:22.119837+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:23 vm09 bash[22983]: cluster 2026-03-09T15:59:22.720112+0000 mgr.y (mgr.14520) 235 : cluster [DBG] pgmap v311: 308 pgs: 48 unknown, 260 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-09T15:59:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:23 vm09 bash[22983]: cluster 2026-03-09T15:59:22.720112+0000 mgr.y (mgr.14520) 235 : cluster [DBG] pgmap v311: 308 pgs: 48 unknown, 260 active+clean; 4.4 MiB data, 800 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: cluster 2026-03-09T15:59:23.106756+0000 mon.a (mon.0) 2118 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: cluster 2026-03-09T15:59:23.106756+0000 mon.a (mon.0) 2118 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.108853+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.108853+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: cluster 2026-03-09T15:59:23.122419+0000 mon.a (mon.0) 2120 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: cluster 2026-03-09T15:59:23.122419+0000 mon.a (mon.0) 2120 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.137651+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.137651+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.137977+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.137977+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.140306+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.140306+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.147782+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.147782+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.148253+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.148253+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.153471+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.153471+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.872384+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.872384+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.872591+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.872591+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.872769+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.872769+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.874418+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.874418+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.874543+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.874543+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: cluster 2026-03-09T15:59:23.876352+0000 mon.a (mon.0) 2127 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T15:59:24.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: cluster 2026-03-09T15:59:23.876352+0000 mon.a (mon.0) 2127 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.878930+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.878930+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.879015+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.879015+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.883950+0000 mon.c (mon.2) 235 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.883950+0000 mon.c (mon.2) 235 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.884315+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:24 vm01 bash[20728]: audit 2026-03-09T15:59:23.884315+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: cluster 2026-03-09T15:59:23.106756+0000 mon.a (mon.0) 2118 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: cluster 2026-03-09T15:59:23.106756+0000 mon.a (mon.0) 2118 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.108853+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.108853+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: cluster 2026-03-09T15:59:23.122419+0000 mon.a (mon.0) 2120 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: cluster 2026-03-09T15:59:23.122419+0000 mon.a (mon.0) 2120 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.137651+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.137651+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 
192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.137977+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.137977+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.140306+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.140306+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.147782+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.147782+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.148253+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.148253+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.153471+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.153471+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.872384+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.872384+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.872591+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.872591+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.872769+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.872769+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.874418+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.874418+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.874543+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.874543+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 
192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: cluster 2026-03-09T15:59:23.876352+0000 mon.a (mon.0) 2127 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: cluster 2026-03-09T15:59:23.876352+0000 mon.a (mon.0) 2127 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.878930+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.878930+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.879015+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.879015+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.883950+0000 mon.c (mon.2) 235 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.883950+0000 mon.c (mon.2) 235 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.884315+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:24.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:24 vm01 bash[28152]: audit 2026-03-09T15:59:23.884315+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: cluster 2026-03-09T15:59:23.106756+0000 mon.a (mon.0) 2118 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: cluster 2026-03-09T15:59:23.106756+0000 mon.a (mon.0) 2118 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.108853+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.108853+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: cluster 2026-03-09T15:59:23.122419+0000 mon.a (mon.0) 2120 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: cluster 2026-03-09T15:59:23.122419+0000 mon.a (mon.0) 2120 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.137651+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.137651+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.137977+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.137977+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.140306+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.140306+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.147782+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.147782+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.148253+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.148253+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.153471+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.153471+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.872384+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.872384+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.872591+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.872591+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.872769+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.872769+0000 mon.a (mon.0) 2126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.874418+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.874418+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.101:0/2517755736' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.874543+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.874543+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.101:0/3542364433' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: cluster 2026-03-09T15:59:23.876352+0000 mon.a (mon.0) 2127 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: cluster 2026-03-09T15:59:23.876352+0000 mon.a (mon.0) 2127 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.878930+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.878930+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.879015+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.879015+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.883950+0000 mon.c (mon.2) 235 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.883950+0000 mon.c (mon.2) 235 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.884315+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:24 vm09 bash[22983]: audit 2026-03-09T15:59:23.884315+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: cluster 2026-03-09T15:59:24.720646+0000 mgr.y (mgr.14520) 236 : cluster [DBG] pgmap v314: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: cluster 2026-03-09T15:59:24.720646+0000 mgr.y (mgr.14520) 236 : cluster [DBG] pgmap v314: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.875292+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.875292+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.875389+0000 mon.a (mon.0) 2132 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.875389+0000 mon.a (mon.0) 2132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.875481+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.875481+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: cluster 2026-03-09T15:59:24.878666+0000 mon.a (mon.0) 2134 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: cluster 2026-03-09T15:59:24.878666+0000 mon.a (mon.0) 2134 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.881843+0000 mon.c (mon.2) 236 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.881843+0000 mon.c (mon.2) 236 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.886023+0000 mon.a (mon.0) 2135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.886023+0000 mon.a (mon.0) 2135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.896773+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.896773+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? 
192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.898650+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: cluster 2026-03-09T15:59:24.720646+0000 mgr.y (mgr.14520) 236 : cluster [DBG] pgmap v314: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: cluster 2026-03-09T15:59:24.720646+0000 mgr.y (mgr.14520) 236 : cluster [DBG] pgmap v314: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.875292+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.875292+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.875389+0000 mon.a (mon.0) 2132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.875389+0000 mon.a (mon.0) 2132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.875481+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.875481+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: cluster 2026-03-09T15:59:24.878666+0000 mon.a (mon.0) 2134 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: cluster 2026-03-09T15:59:24.878666+0000 mon.a (mon.0) 2134 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.881843+0000 mon.c (mon.2) 236 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.881843+0000 mon.c (mon.2) 236 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.886023+0000 mon.a (mon.0) 2135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.886023+0000 mon.a (mon.0) 2135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.896773+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.896773+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.898650+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.898650+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.899129+0000 mon.a (mon.0) 2138 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.899129+0000 mon.a (mon.0) 2138 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.901114+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? 
192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.901114+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.902397+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.902397+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.902830+0000 mon.a (mon.0) 2141 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:25 vm01 bash[28152]: audit 2026-03-09T15:59:24.902830+0000 mon.a (mon.0) 2141 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.898650+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.899129+0000 mon.a (mon.0) 2138 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.899129+0000 mon.a (mon.0) 2138 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.901114+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? 
192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.901114+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.902397+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.902397+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.902830+0000 mon.a (mon.0) 2141 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:25 vm01 bash[20728]: audit 2026-03-09T15:59:24.902830+0000 mon.a (mon.0) 2141 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: cluster 2026-03-09T15:59:24.720646+0000 mgr.y (mgr.14520) 236 : cluster [DBG] pgmap v314: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: cluster 2026-03-09T15:59:24.720646+0000 mgr.y (mgr.14520) 236 : cluster [DBG] pgmap v314: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.875292+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.875292+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm01-59602-39"}]': finished 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.875389+0000 mon.a (mon.0) 2132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.875389+0000 mon.a (mon.0) 2132 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm01-59610-48"}]': finished 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.875481+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.875481+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: cluster 2026-03-09T15:59:24.878666+0000 mon.a (mon.0) 2134 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: cluster 2026-03-09T15:59:24.878666+0000 mon.a (mon.0) 2134 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.881843+0000 mon.c (mon.2) 236 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.881843+0000 mon.c (mon.2) 236 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.886023+0000 mon.a (mon.0) 2135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.886023+0000 mon.a (mon.0) 2135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.896773+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.896773+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.898650+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? 
192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.898650+0000 mon.a (mon.0) 2137 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.899129+0000 mon.a (mon.0) 2138 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.899129+0000 mon.a (mon.0) 2138 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.901114+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.901114+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.902397+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.902397+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.902830+0000 mon.a (mon.0) 2141 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:25 vm09 bash[22983]: audit 2026-03-09T15:59:24.902830+0000 mon.a (mon.0) 2141 : audit [INF] from='client.? 
192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:26.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:59:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: cluster 2026-03-09T15:59:25.875623+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: cluster 2026-03-09T15:59:25.875623+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.879277+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.879277+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.879339+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.879339+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.879369+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.879369+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? 
192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: cluster 2026-03-09T15:59:25.882551+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: cluster 2026-03-09T15:59:25.882551+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.886893+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.886893+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.887093+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.887093+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.957981+0000 mon.c (mon.2) 237 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.957981+0000 mon.c (mon.2) 237 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.958439+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:26 vm01 bash[20728]: audit 2026-03-09T15:59:25.958439+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: cluster 2026-03-09T15:59:25.875623+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: cluster 2026-03-09T15:59:25.875623+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.879277+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.879277+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.879339+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.879339+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.879369+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.879369+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: cluster 2026-03-09T15:59:25.882551+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: cluster 2026-03-09T15:59:25.882551+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.886893+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 
192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.886893+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.887093+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.887093+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.957981+0000 mon.c (mon.2) 237 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.957981+0000 mon.c (mon.2) 237 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:27.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.958439+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:27.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:26 vm01 bash[28152]: audit 2026-03-09T15:59:25.958439+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: cluster 2026-03-09T15:59:25.875623+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: cluster 2026-03-09T15:59:25.875623+0000 mon.a (mon.0) 2142 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.879277+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]': finished 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.879277+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-31", "mode": "writeback"}]': finished 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.879339+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.879339+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm01-59602-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.879369+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.879369+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm01-59610-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: cluster 2026-03-09T15:59:25.882551+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: cluster 2026-03-09T15:59:25.882551+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.886893+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.886893+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.887093+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? 
192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.887093+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.957981+0000 mon.c (mon.2) 237 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.957981+0000 mon.c (mon.2) 237 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.958439+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:26 vm09 bash[22983]: audit 2026-03-09T15:59:25.958439+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:26.375219+0000 mgr.y (mgr.14520) 237 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:26.375219+0000 mgr.y (mgr.14520) 237 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: cluster 2026-03-09T15:59:26.720982+0000 mgr.y (mgr.14520) 238 : cluster [DBG] pgmap v317: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: cluster 2026-03-09T15:59:26.720982+0000 mgr.y (mgr.14520) 238 : cluster [DBG] pgmap v317: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:26.893465+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:26.893465+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:26.900462+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:26.900462+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: cluster 2026-03-09T15:59:26.914425+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: cluster 2026-03-09T15:59:26.914425+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:26.915878+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:26.915878+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: cluster 2026-03-09T15:59:27.893679+0000 mon.a (mon.0) 2153 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: cluster 2026-03-09T15:59:27.893679+0000 mon.a (mon.0) 2153 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:27.897523+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:27.897523+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 
192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:27.897637+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:27.897637+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:27.897895+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: audit 2026-03-09T15:59:27.897895+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: cluster 2026-03-09T15:59:27.903070+0000 mon.a (mon.0) 2157 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:27 vm01 bash[20728]: cluster 2026-03-09T15:59:27.903070+0000 mon.a (mon.0) 2157 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:26.375219+0000 mgr.y (mgr.14520) 237 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:26.375219+0000 mgr.y (mgr.14520) 237 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: cluster 2026-03-09T15:59:26.720982+0000 mgr.y (mgr.14520) 238 : cluster [DBG] pgmap v317: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: cluster 2026-03-09T15:59:26.720982+0000 mgr.y (mgr.14520) 238 : cluster [DBG] pgmap v317: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:26.893465+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:26.893465+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:26.900462+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:26.900462+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: cluster 2026-03-09T15:59:26.914425+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: cluster 2026-03-09T15:59:26.914425+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:26.915878+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:26.915878+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: cluster 2026-03-09T15:59:27.893679+0000 mon.a (mon.0) 2153 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: cluster 2026-03-09T15:59:27.893679+0000 mon.a (mon.0) 2153 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:27.897523+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:28.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:27.897523+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 
192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:28.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:27.897637+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:28.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:27.897637+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:28.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:27.897895+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:28.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: audit 2026-03-09T15:59:27.897895+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:28.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: cluster 2026-03-09T15:59:27.903070+0000 mon.a (mon.0) 2157 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T15:59:28.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:27 vm01 bash[28152]: cluster 2026-03-09T15:59:27.903070+0000 mon.a (mon.0) 2157 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:26.375219+0000 mgr.y (mgr.14520) 237 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:26.375219+0000 mgr.y (mgr.14520) 237 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: cluster 2026-03-09T15:59:26.720982+0000 mgr.y (mgr.14520) 238 : cluster [DBG] pgmap v317: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: cluster 2026-03-09T15:59:26.720982+0000 mgr.y (mgr.14520) 238 : cluster [DBG] pgmap v317: 292 pgs: 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:26.893465+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:26.893465+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:26.900462+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:26.900462+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: cluster 2026-03-09T15:59:26.914425+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: cluster 2026-03-09T15:59:26.914425+0000 mon.a (mon.0) 2151 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:26.915878+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:26.915878+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]: dispatch 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: cluster 2026-03-09T15:59:27.893679+0000 mon.a (mon.0) 2153 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: cluster 2026-03-09T15:59:27.893679+0000 mon.a (mon.0) 2153 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:27.897523+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:27.897523+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 
192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm01-59610-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:27.897637+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:27.897637+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm01-59602-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:27.897895+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: audit 2026-03-09T15:59:27.897895+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-31"}]': finished 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: cluster 2026-03-09T15:59:27.903070+0000 mon.a (mon.0) 2157 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T15:59:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:27 vm09 bash[22983]: cluster 2026-03-09T15:59:27.903070+0000 mon.a (mon.0) 2157 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-09T15:59:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:28 vm09 bash[22983]: cluster 2026-03-09T15:59:28.815707+0000 mon.a (mon.0) 2158 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:28 vm09 bash[22983]: cluster 2026-03-09T15:59:28.815707+0000 mon.a (mon.0) 2158 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:28 vm09 bash[22983]: cluster 2026-03-09T15:59:28.933252+0000 mon.a (mon.0) 2159 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T15:59:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:28 vm09 bash[22983]: cluster 2026-03-09T15:59:28.933252+0000 mon.a (mon.0) 2159 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T15:59:29.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:28 vm01 bash[20728]: cluster 2026-03-09T15:59:28.815707+0000 mon.a (mon.0) 2158 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:28 vm01 bash[20728]: cluster 2026-03-09T15:59:28.815707+0000 mon.a (mon.0) 2158 : cluster 
[WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:28 vm01 bash[20728]: cluster 2026-03-09T15:59:28.933252+0000 mon.a (mon.0) 2159 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T15:59:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:28 vm01 bash[20728]: cluster 2026-03-09T15:59:28.933252+0000 mon.a (mon.0) 2159 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T15:59:29.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:28 vm01 bash[28152]: cluster 2026-03-09T15:59:28.815707+0000 mon.a (mon.0) 2158 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:29.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:28 vm01 bash[28152]: cluster 2026-03-09T15:59:28.815707+0000 mon.a (mon.0) 2158 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:29.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:28 vm01 bash[28152]: cluster 2026-03-09T15:59:28.933252+0000 mon.a (mon.0) 2159 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T15:59:29.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:28 vm01 bash[28152]: cluster 2026-03-09T15:59:28.933252+0000 mon.a (mon.0) 2159 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-09T15:59:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:30 vm09 bash[22983]: cluster 2026-03-09T15:59:28.721494+0000 mgr.y (mgr.14520) 239 : cluster [DBG] pgmap v320: 308 pgs: 4 creating+peering, 12 unknown, 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:30 vm09 bash[22983]: cluster 2026-03-09T15:59:28.721494+0000 mgr.y (mgr.14520) 239 : cluster [DBG] pgmap v320: 308 pgs: 4 creating+peering, 12 unknown, 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:30 vm09 bash[22983]: audit 2026-03-09T15:59:29.129488+0000 mon.a (mon.0) 2160 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:30 vm09 bash[22983]: audit 2026-03-09T15:59:29.129488+0000 mon.a (mon.0) 2160 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:30 vm01 bash[20728]: cluster 2026-03-09T15:59:28.721494+0000 mgr.y (mgr.14520) 239 : cluster [DBG] pgmap v320: 308 pgs: 4 creating+peering, 12 unknown, 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:30 vm01 bash[20728]: cluster 2026-03-09T15:59:28.721494+0000 mgr.y (mgr.14520) 239 : cluster [DBG] pgmap v320: 308 pgs: 4 creating+peering, 12 unknown, 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:30 vm01 bash[20728]: audit 2026-03-09T15:59:29.129488+0000 mon.a (mon.0) 2160 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:30.426 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:30 vm01 bash[20728]: audit 2026-03-09T15:59:29.129488+0000 mon.a (mon.0) 2160 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:30 vm01 bash[28152]: cluster 2026-03-09T15:59:28.721494+0000 mgr.y (mgr.14520) 239 : cluster [DBG] pgmap v320: 308 pgs: 4 creating+peering, 12 unknown, 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:30 vm01 bash[28152]: cluster 2026-03-09T15:59:28.721494+0000 mgr.y (mgr.14520) 239 : cluster [DBG] pgmap v320: 308 pgs: 4 creating+peering, 12 unknown, 292 active+clean; 4.4 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:30 vm01 bash[28152]: audit 2026-03-09T15:59:29.129488+0000 mon.a (mon.0) 2160 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:30.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:30 vm01 bash[28152]: audit 2026-03-09T15:59:29.129488+0000 mon.a (mon.0) 2160 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: cluster 2026-03-09T15:59:30.015718+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: cluster 2026-03-09T15:59:30.015718+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:30.024326+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:30.024326+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:30.029132+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:30.029132+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:30.033433+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:30.033433+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:30.044312+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:30.044312+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:31.011185+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:31.011185+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:31.011550+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:31.011550+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:31.011586+0000 mon.a (mon.0) 2167 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:31.011586+0000 mon.a (mon.0) 2167 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: cluster 2026-03-09T15:59:31.014703+0000 mon.a (mon.0) 2168 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: cluster 2026-03-09T15:59:31.014703+0000 mon.a (mon.0) 2168 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:31.015262+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:31.015262+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:31.015387+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:31 vm09 bash[22983]: audit 2026-03-09T15:59:31.015387+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: cluster 2026-03-09T15:59:30.015718+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: cluster 2026-03-09T15:59:30.015718+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:30.024326+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:30.024326+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:30.029132+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:30.029132+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? 
192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:30.033433+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:30.033433+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:30.044312+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:30.044312+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:31.011185+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:31.011185+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:31.011550+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:31.011550+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:31.011586+0000 mon.a (mon.0) 2167 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:31.011586+0000 mon.a (mon.0) 2167 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: cluster 2026-03-09T15:59:31.014703+0000 mon.a (mon.0) 2168 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: cluster 2026-03-09T15:59:31.014703+0000 mon.a (mon.0) 2168 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:31.015262+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:31.015262+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:31.015387+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:31 vm01 bash[20728]: audit 2026-03-09T15:59:31.015387+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: cluster 2026-03-09T15:59:30.015718+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: cluster 2026-03-09T15:59:30.015718+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:30.024326+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:30.024326+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:30.029132+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:30.029132+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? 
192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:30.033433+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:30.033433+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:30.044312+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:30.044312+0000 mon.a (mon.0) 2164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:31.011185+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:31.011185+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:31.011550+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:31.011550+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:31.011586+0000 mon.a (mon.0) 2167 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:31.011586+0000 mon.a (mon.0) 2167 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: cluster 2026-03-09T15:59:31.014703+0000 mon.a (mon.0) 2168 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: cluster 2026-03-09T15:59:31.014703+0000 mon.a (mon.0) 2168 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:31.015262+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:31.015262+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:31.015387+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:31 vm01 bash[28152]: audit 2026-03-09T15:59:31.015387+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]: dispatch 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: cluster 2026-03-09T15:59:30.721952+0000 mgr.y (mgr.14520) 240 : cluster [DBG] pgmap v323: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: cluster 2026-03-09T15:59:30.721952+0000 mgr.y (mgr.14520) 240 : cluster [DBG] pgmap v323: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:31.040077+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:31.040077+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:31.062476+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:31.062476+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.014788+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.014788+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.015286+0000 mon.a (mon.0) 2173 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.015286+0000 mon.a (mon.0) 2173 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.015342+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.015342+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: cluster 2026-03-09T15:59:32.024286+0000 mon.a (mon.0) 2175 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: cluster 2026-03-09T15:59:32.024286+0000 mon.a (mon.0) 2175 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.026208+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.026208+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.029751+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.029751+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.039836+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:32 vm09 bash[22983]: audit 2026-03-09T15:59:32.039836+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: cluster 2026-03-09T15:59:30.721952+0000 mgr.y (mgr.14520) 240 : cluster [DBG] pgmap v323: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: cluster 2026-03-09T15:59:30.721952+0000 mgr.y (mgr.14520) 240 : cluster [DBG] pgmap v323: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:31.040077+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:31.040077+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:31.062476+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:31.062476+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.014788+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.014788+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.015286+0000 mon.a (mon.0) 2173 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.015286+0000 mon.a (mon.0) 2173 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.015342+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.015342+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: cluster 2026-03-09T15:59:32.024286+0000 mon.a (mon.0) 2175 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: cluster 2026-03-09T15:59:32.024286+0000 mon.a (mon.0) 2175 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.026208+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: cluster 2026-03-09T15:59:30.721952+0000 mgr.y (mgr.14520) 240 : cluster [DBG] pgmap v323: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: cluster 2026-03-09T15:59:30.721952+0000 mgr.y (mgr.14520) 240 : cluster [DBG] pgmap v323: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:31.040077+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:31.040077+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:31.062476+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:31.062476+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.014788+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.014788+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 192.168.123.101:0/2791163713' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm01-59610-49"}]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.015286+0000 mon.a (mon.0) 2173 : audit [INF] from='client.? 192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.015286+0000 mon.a (mon.0) 2173 : audit [INF] from='client.? 
192.168.123.101:0/1973303834' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm01-59602-40"}]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.015342+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.015342+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: cluster 2026-03-09T15:59:32.024286+0000 mon.a (mon.0) 2175 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: cluster 2026-03-09T15:59:32.024286+0000 mon.a (mon.0) 2175 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.026208+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.026208+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.029751+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.029751+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.039836+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:32 vm01 bash[28152]: audit 2026-03-09T15:59:32.039836+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? 
192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.026208+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.029751+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.029751+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:32.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.039836+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:32.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:32 vm01 bash[20728]: audit 2026-03-09T15:59:32.039836+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.037872+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.037872+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.039740+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.039740+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.041973+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? 
192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.041973+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.042082+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.042082+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.042505+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.042505+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.058290+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.058290+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.061291+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.061291+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.062002+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.062002+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.586033+0000 mon.a (mon.0) 2183 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:59:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.586033+0000 mon.a (mon.0) 2183 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.878871+0000 mon.a (mon.0) 2184 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.878871+0000 mon.a (mon.0) 2184 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.888268+0000 mon.a (mon.0) 2185 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:32.888268+0000 mon.a (mon.0) 2185 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.017658+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.017658+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.017733+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.017733+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 
192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.017872+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.017872+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.018205+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.018205+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: cluster 2026-03-09T15:59:33.022107+0000 mon.a (mon.0) 2189 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: cluster 2026-03-09T15:59:33.022107+0000 mon.a (mon.0) 2189 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.023119+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.023119+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.023347+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.023347+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.025550+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.025550+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.032338+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:33 vm01 bash[20728]: audit 2026-03-09T15:59:33.032338+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:59:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:59:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.037872+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.037872+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.039740+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.039740+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 
192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.041973+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.041973+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.042082+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.042082+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.042505+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.042505+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.058290+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.058290+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.061291+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.061291+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.062002+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.062002+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.586033+0000 mon.a (mon.0) 2183 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.586033+0000 mon.a (mon.0) 2183 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.878871+0000 mon.a (mon.0) 2184 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.878871+0000 mon.a (mon.0) 2184 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.888268+0000 mon.a (mon.0) 2185 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:32.888268+0000 mon.a (mon.0) 2185 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.017658+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.017658+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.017733+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 
192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.017733+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.017872+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.017872+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.018205+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.018205+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: cluster 2026-03-09T15:59:33.022107+0000 mon.a (mon.0) 2189 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T15:59:33.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: cluster 2026-03-09T15:59:33.022107+0000 mon.a (mon.0) 2189 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T15:59:33.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.023119+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.023119+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? 
192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.023347+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.023347+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.025550+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:33.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.025550+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:33.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.032338+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:33.179 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:33 vm01 bash[28152]: audit 2026-03-09T15:59:33.032338+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.037872+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.037872+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.039740+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 
192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.039740+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.041973+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.041973+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.042082+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.042082+0000 mon.a (mon.0) 2179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.042505+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.042505+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.058290+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.058290+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.061291+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.061291+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.062002+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.062002+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.586033+0000 mon.a (mon.0) 2183 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.586033+0000 mon.a (mon.0) 2183 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.878871+0000 mon.a (mon.0) 2184 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.878871+0000 mon.a (mon.0) 2184 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.888268+0000 mon.a (mon.0) 2185 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:32.888268+0000 mon.a (mon.0) 2185 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.017658+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.017658+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.017733+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 
192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.017733+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm01-59602-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.017872+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.017872+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.018205+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.018205+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm01-59610-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: cluster 2026-03-09T15:59:33.022107+0000 mon.a (mon.0) 2189 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: cluster 2026-03-09T15:59:33.022107+0000 mon.a (mon.0) 2189 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-09T15:59:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.023119+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.023119+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? 
192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:33.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.023347+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.023347+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:33.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.025550+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:33.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.025550+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:33.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.032338+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:33.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:33 vm09 bash[22983]: audit 2026-03-09T15:59:33.032338+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]: dispatch 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: cluster 2026-03-09T15:59:32.722293+0000 mgr.y (mgr.14520) 241 : cluster [DBG] pgmap v326: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: cluster 2026-03-09T15:59:32.722293+0000 mgr.y (mgr.14520) 241 : cluster [DBG] pgmap v326: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: audit 2026-03-09T15:59:33.213551+0000 mon.a (mon.0) 2193 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: audit 2026-03-09T15:59:33.213551+0000 mon.a (mon.0) 2193 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: audit 2026-03-09T15:59:33.214183+0000 mon.a (mon.0) 2194 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: audit 2026-03-09T15:59:33.214183+0000 mon.a (mon.0) 2194 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: audit 2026-03-09T15:59:33.218901+0000 mon.a (mon.0) 2195 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: audit 2026-03-09T15:59:33.218901+0000 mon.a (mon.0) 2195 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: cluster 2026-03-09T15:59:34.018032+0000 mon.a (mon.0) 2196 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: cluster 2026-03-09T15:59:34.018032+0000 mon.a (mon.0) 2196 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: audit 2026-03-09T15:59:34.021027+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]': finished 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: audit 2026-03-09T15:59:34.021027+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]': finished 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: cluster 2026-03-09T15:59:34.041764+0000 mon.a (mon.0) 2198 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T15:59:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:34 vm09 bash[22983]: cluster 2026-03-09T15:59:34.041764+0000 mon.a (mon.0) 2198 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: cluster 2026-03-09T15:59:32.722293+0000 mgr.y (mgr.14520) 241 : cluster [DBG] pgmap v326: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: cluster 2026-03-09T15:59:32.722293+0000 mgr.y (mgr.14520) 241 : cluster [DBG] pgmap v326: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: audit 2026-03-09T15:59:33.213551+0000 mon.a (mon.0) 2193 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: audit 2026-03-09T15:59:33.213551+0000 mon.a (mon.0) 2193 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: audit 2026-03-09T15:59:33.214183+0000 mon.a (mon.0) 2194 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: audit 2026-03-09T15:59:33.214183+0000 mon.a (mon.0) 2194 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: audit 2026-03-09T15:59:33.218901+0000 mon.a (mon.0) 2195 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: audit 2026-03-09T15:59:33.218901+0000 mon.a (mon.0) 2195 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: cluster 2026-03-09T15:59:34.018032+0000 mon.a (mon.0) 2196 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: cluster 2026-03-09T15:59:34.018032+0000 mon.a (mon.0) 2196 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: audit 2026-03-09T15:59:34.021027+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]': finished 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: audit 2026-03-09T15:59:34.021027+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]': finished 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: cluster 2026-03-09T15:59:34.041764+0000 mon.a (mon.0) 2198 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:34 vm01 bash[20728]: cluster 2026-03-09T15:59:34.041764+0000 mon.a (mon.0) 2198 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: cluster 2026-03-09T15:59:32.722293+0000 mgr.y (mgr.14520) 241 : cluster [DBG] pgmap v326: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: cluster 2026-03-09T15:59:32.722293+0000 mgr.y (mgr.14520) 241 : cluster [DBG] pgmap v326: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 798 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: audit 2026-03-09T15:59:33.213551+0000 mon.a (mon.0) 2193 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: audit 2026-03-09T15:59:33.213551+0000 mon.a (mon.0) 2193 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: audit 2026-03-09T15:59:33.214183+0000 mon.a (mon.0) 2194 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: audit 2026-03-09T15:59:33.214183+0000 mon.a (mon.0) 2194 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: audit 2026-03-09T15:59:33.218901+0000 mon.a (mon.0) 2195 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: audit 2026-03-09T15:59:33.218901+0000 mon.a (mon.0) 2195 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: cluster 2026-03-09T15:59:34.018032+0000 mon.a (mon.0) 2196 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: cluster 2026-03-09T15:59:34.018032+0000 mon.a (mon.0) 2196 : cluster [WRN] Health check 
failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: audit 2026-03-09T15:59:34.021027+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]': finished 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: audit 2026-03-09T15:59:34.021027+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-33", "mode": "writeback"}]': finished 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: cluster 2026-03-09T15:59:34.041764+0000 mon.a (mon.0) 2198 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T15:59:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:34 vm01 bash[28152]: cluster 2026-03-09T15:59:34.041764+0000 mon.a (mon.0) 2198 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: audit 2026-03-09T15:59:34.106627+0000 mon.c (mon.2) 243 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: audit 2026-03-09T15:59:34.106627+0000 mon.c (mon.2) 243 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: audit 2026-03-09T15:59:34.107113+0000 mon.a (mon.0) 2199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: audit 2026-03-09T15:59:34.107113+0000 mon.a (mon.0) 2199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: audit 2026-03-09T15:59:35.026535+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: audit 2026-03-09T15:59:35.026535+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: audit 2026-03-09T15:59:35.026586+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: audit 2026-03-09T15:59:35.026586+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: audit 2026-03-09T15:59:35.026616+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: audit 2026-03-09T15:59:35.026616+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: cluster 2026-03-09T15:59:35.029886+0000 mon.a (mon.0) 2203 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T15:59:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:35 vm09 bash[22983]: cluster 2026-03-09T15:59:35.029886+0000 mon.a (mon.0) 2203 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T15:59:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: audit 2026-03-09T15:59:34.106627+0000 mon.c (mon.2) 243 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: audit 2026-03-09T15:59:34.106627+0000 mon.c (mon.2) 243 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: audit 2026-03-09T15:59:34.107113+0000 mon.a (mon.0) 2199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: audit 2026-03-09T15:59:34.107113+0000 mon.a (mon.0) 2199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: audit 2026-03-09T15:59:35.026535+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: audit 2026-03-09T15:59:35.026535+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? 
192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: audit 2026-03-09T15:59:35.026586+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: audit 2026-03-09T15:59:35.026586+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: audit 2026-03-09T15:59:35.026616+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: audit 2026-03-09T15:59:35.026616+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: cluster 2026-03-09T15:59:35.029886+0000 mon.a (mon.0) 2203 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:35 vm01 bash[28152]: cluster 2026-03-09T15:59:35.029886+0000 mon.a (mon.0) 2203 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: audit 2026-03-09T15:59:34.106627+0000 mon.c (mon.2) 243 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: audit 2026-03-09T15:59:34.106627+0000 mon.c (mon.2) 243 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: audit 2026-03-09T15:59:34.107113+0000 mon.a (mon.0) 2199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: audit 2026-03-09T15:59:34.107113+0000 mon.a (mon.0) 2199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: audit 2026-03-09T15:59:35.026535+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? 
192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: audit 2026-03-09T15:59:35.026535+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm01-59602-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: audit 2026-03-09T15:59:35.026586+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: audit 2026-03-09T15:59:35.026586+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm01-59610-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: audit 2026-03-09T15:59:35.026616+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: audit 2026-03-09T15:59:35.026616+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: cluster 2026-03-09T15:59:35.029886+0000 mon.a (mon.0) 2203 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T15:59:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:35 vm01 bash[20728]: cluster 2026-03-09T15:59:35.029886+0000 mon.a (mon.0) 2203 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: cluster 2026-03-09T15:59:34.722912+0000 mgr.y (mgr.14520) 242 : cluster [DBG] pgmap v329: 292 pgs: 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: cluster 2026-03-09T15:59:34.722912+0000 mgr.y (mgr.14520) 242 : cluster [DBG] pgmap v329: 292 pgs: 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: audit 2026-03-09T15:59:35.045451+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: audit 2026-03-09T15:59:35.045451+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: audit 2026-03-09T15:59:35.054904+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: audit 2026-03-09T15:59:35.054904+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: cluster 2026-03-09T15:59:36.026742+0000 mon.a (mon.0) 2205 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: cluster 2026-03-09T15:59:36.026742+0000 mon.a (mon.0) 2205 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: audit 2026-03-09T15:59:36.030088+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: audit 2026-03-09T15:59:36.030088+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: cluster 2026-03-09T15:59:36.042356+0000 mon.a (mon.0) 2207 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T15:59:36.380 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:36 vm09 bash[22983]: cluster 2026-03-09T15:59:36.042356+0000 mon.a (mon.0) 2207 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T15:59:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: cluster 2026-03-09T15:59:34.722912+0000 mgr.y (mgr.14520) 242 : cluster [DBG] pgmap v329: 292 pgs: 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: cluster 2026-03-09T15:59:34.722912+0000 mgr.y (mgr.14520) 242 : cluster [DBG] pgmap v329: 292 pgs: 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: audit 2026-03-09T15:59:35.045451+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: audit 2026-03-09T15:59:35.045451+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: audit 2026-03-09T15:59:35.054904+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: audit 2026-03-09T15:59:35.054904+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: cluster 2026-03-09T15:59:36.026742+0000 mon.a (mon.0) 2205 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: cluster 2026-03-09T15:59:36.026742+0000 mon.a (mon.0) 2205 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: audit 2026-03-09T15:59:36.030088+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: audit 2026-03-09T15:59:36.030088+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: cluster 2026-03-09T15:59:36.042356+0000 mon.a (mon.0) 2207 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:36 vm01 bash[28152]: cluster 2026-03-09T15:59:36.042356+0000 mon.a (mon.0) 2207 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: cluster 2026-03-09T15:59:34.722912+0000 mgr.y (mgr.14520) 242 : cluster [DBG] pgmap v329: 292 pgs: 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: cluster 2026-03-09T15:59:34.722912+0000 mgr.y (mgr.14520) 242 : cluster [DBG] pgmap v329: 292 pgs: 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: audit 2026-03-09T15:59:35.045451+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: audit 2026-03-09T15:59:35.045451+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: audit 2026-03-09T15:59:35.054904+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: audit 2026-03-09T15:59:35.054904+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]: dispatch 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: cluster 2026-03-09T15:59:36.026742+0000 mon.a (mon.0) 2205 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: cluster 2026-03-09T15:59:36.026742+0000 mon.a (mon.0) 2205 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: audit 2026-03-09T15:59:36.030088+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: audit 2026-03-09T15:59:36.030088+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-33"}]': finished 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: cluster 2026-03-09T15:59:36.042356+0000 mon.a (mon.0) 2207 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T15:59:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:36 vm01 bash[20728]: cluster 2026-03-09T15:59:36.042356+0000 mon.a (mon.0) 2207 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T15:59:36.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:59:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: audit 2026-03-09T15:59:36.383020+0000 mgr.y (mgr.14520) 243 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: audit 2026-03-09T15:59:36.383020+0000 mgr.y (mgr.14520) 243 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: cluster 2026-03-09T15:59:36.723238+0000 mgr.y (mgr.14520) 244 : cluster [DBG] pgmap v332: 308 pgs: 16 unknown, 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: cluster 2026-03-09T15:59:36.723238+0000 mgr.y (mgr.14520) 244 : cluster [DBG] pgmap v332: 308 pgs: 16 unknown, 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: cluster 2026-03-09T15:59:37.036635+0000 mon.a (mon.0) 2208 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: cluster 2026-03-09T15:59:37.036635+0000 mon.a (mon.0) 2208 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T15:59:38.383 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: audit 2026-03-09T15:59:37.037876+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: audit 2026-03-09T15:59:37.037876+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: audit 2026-03-09T15:59:37.043644+0000 mon.a (mon.0) 2209 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: audit 2026-03-09T15:59:37.043644+0000 mon.a (mon.0) 2209 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: audit 2026-03-09T15:59:37.050314+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: audit 2026-03-09T15:59:37.050314+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: cluster 2026-03-09T15:59:37.070416+0000 mon.a (mon.0) 2211 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:38 vm09 bash[22983]: cluster 2026-03-09T15:59:37.070416+0000 mon.a (mon.0) 2211 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: audit 2026-03-09T15:59:36.383020+0000 mgr.y (mgr.14520) 243 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: audit 2026-03-09T15:59:36.383020+0000 mgr.y (mgr.14520) 243 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: cluster 2026-03-09T15:59:36.723238+0000 mgr.y (mgr.14520) 244 : cluster [DBG] pgmap v332: 308 pgs: 16 unknown, 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: cluster 2026-03-09T15:59:36.723238+0000 mgr.y (mgr.14520) 244 : cluster [DBG] pgmap v332: 308 pgs: 16 unknown, 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: cluster 2026-03-09T15:59:37.036635+0000 mon.a (mon.0) 2208 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: cluster 2026-03-09T15:59:37.036635+0000 mon.a (mon.0) 2208 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: audit 2026-03-09T15:59:37.037876+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: audit 2026-03-09T15:59:37.037876+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: audit 2026-03-09T15:59:37.043644+0000 mon.a (mon.0) 2209 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: audit 2026-03-09T15:59:37.043644+0000 mon.a (mon.0) 2209 : audit [INF] from='client.? 
192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: audit 2026-03-09T15:59:37.050314+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: audit 2026-03-09T15:59:37.050314+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: cluster 2026-03-09T15:59:37.070416+0000 mon.a (mon.0) 2211 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:38 vm01 bash[28152]: cluster 2026-03-09T15:59:37.070416+0000 mon.a (mon.0) 2211 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: audit 2026-03-09T15:59:36.383020+0000 mgr.y (mgr.14520) 243 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: audit 2026-03-09T15:59:36.383020+0000 mgr.y (mgr.14520) 243 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: cluster 2026-03-09T15:59:36.723238+0000 mgr.y (mgr.14520) 244 : cluster [DBG] pgmap v332: 308 pgs: 16 unknown, 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: cluster 2026-03-09T15:59:36.723238+0000 mgr.y (mgr.14520) 244 : cluster [DBG] pgmap v332: 308 pgs: 16 unknown, 292 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: cluster 2026-03-09T15:59:37.036635+0000 mon.a (mon.0) 2208 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: cluster 2026-03-09T15:59:37.036635+0000 mon.a (mon.0) 2208 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: audit 2026-03-09T15:59:37.037876+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: audit 2026-03-09T15:59:37.037876+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 
192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: audit 2026-03-09T15:59:37.043644+0000 mon.a (mon.0) 2209 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: audit 2026-03-09T15:59:37.043644+0000 mon.a (mon.0) 2209 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: audit 2026-03-09T15:59:37.050314+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: audit 2026-03-09T15:59:37.050314+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: cluster 2026-03-09T15:59:37.070416+0000 mon.a (mon.0) 2211 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:38 vm01 bash[20728]: cluster 2026-03-09T15:59:37.070416+0000 mon.a (mon.0) 2211 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.037047+0000 mon.a (mon.0) 2212 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.037047+0000 mon.a (mon.0) 2212 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.037172+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.037172+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: cluster 2026-03-09T15:59:38.041056+0000 mon.a (mon.0) 2214 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: cluster 2026-03-09T15:59:38.041056+0000 mon.a (mon.0) 2214 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.046925+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.046925+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.047183+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.047183+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.050815+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.050815+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.051051+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.051051+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.051372+0000 mon.a (mon.0) 2217 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:38.051372+0000 mon.a (mon.0) 2217 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:39.042454+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:39.042454+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:39.042789+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:39.042789+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:39.042826+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:39.042826+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:39.061294+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: audit 2026-03-09T15:59:39.061294+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: cluster 2026-03-09T15:59:39.062318+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T15:59:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:39 vm09 bash[22983]: cluster 2026-03-09T15:59:39.062318+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.037047+0000 mon.a (mon.0) 2212 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.037047+0000 mon.a (mon.0) 2212 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.037172+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.037172+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: cluster 2026-03-09T15:59:38.041056+0000 mon.a (mon.0) 2214 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: cluster 2026-03-09T15:59:38.041056+0000 mon.a (mon.0) 2214 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.046925+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.046925+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.047183+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.047183+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.050815+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.050815+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.051051+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.051051+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.051372+0000 mon.a (mon.0) 2217 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:38.051372+0000 mon.a (mon.0) 2217 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:39.042454+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:39.042454+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:39.042789+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:39.042789+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:39.042826+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:39.042826+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:39.061294+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: audit 2026-03-09T15:59:39.061294+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: cluster 2026-03-09T15:59:39.062318+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:39 vm01 bash[28152]: cluster 2026-03-09T15:59:39.062318+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.037047+0000 mon.a (mon.0) 2212 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.037047+0000 mon.a (mon.0) 2212 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.037172+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.037172+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: cluster 2026-03-09T15:59:38.041056+0000 mon.a (mon.0) 2214 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: cluster 2026-03-09T15:59:38.041056+0000 mon.a (mon.0) 2214 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.046925+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.046925+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.101:0/211356298' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.047183+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.047183+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.050815+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.050815+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]: dispatch 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.051051+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.051051+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.051372+0000 mon.a (mon.0) 2217 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:38.051372+0000 mon.a (mon.0) 2217 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]: dispatch 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:39.042454+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:39.042454+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.101:0/929479658' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm01-59602-41"}]': finished 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:39.042789+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:39.042789+0000 mon.a (mon.0) 2219 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:39.042826+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:39.042826+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm01-59610-50"}]': finished 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:39.061294+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: audit 2026-03-09T15:59:39.061294+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: cluster 2026-03-09T15:59:39.062318+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T15:59:39.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:39 vm01 bash[20728]: cluster 2026-03-09T15:59:39.062318+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: cluster 2026-03-09T15:59:38.723701+0000 mgr.y (mgr.14520) 245 : cluster [DBG] pgmap v335: 292 pgs: 3 creating+peering, 29 unknown, 260 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: cluster 2026-03-09T15:59:38.723701+0000 mgr.y (mgr.14520) 245 : cluster [DBG] pgmap v335: 292 pgs: 3 creating+peering, 29 unknown, 260 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:39.080715+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:39.080715+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:39.132692+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:39.132692+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:39.133914+0000 mon.a (mon.0) 2224 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:39.133914+0000 mon.a (mon.0) 2224 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:39.134116+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 
192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:39.134116+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.046250+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.046250+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.046390+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.046390+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.049601+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.049601+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: cluster 2026-03-09T15:59:40.066491+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: cluster 2026-03-09T15:59:40.066491+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.067668+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? 
192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.067668+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.068039+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.068039+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.068592+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 192.168.123.101:0/1244441176' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:40 vm09 bash[22983]: audit 2026-03-09T15:59:40.068592+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 192.168.123.101:0/1244441176' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: cluster 2026-03-09T15:59:38.723701+0000 mgr.y (mgr.14520) 245 : cluster [DBG] pgmap v335: 292 pgs: 3 creating+peering, 29 unknown, 260 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: cluster 2026-03-09T15:59:38.723701+0000 mgr.y (mgr.14520) 245 : cluster [DBG] pgmap v335: 292 pgs: 3 creating+peering, 29 unknown, 260 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:39.080715+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:39.080715+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:39.132692+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:39.132692+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:39.133914+0000 mon.a (mon.0) 2224 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:39.133914+0000 mon.a (mon.0) 2224 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:39.134116+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:39.134116+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.046250+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.046250+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.046390+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 
192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.046390+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.049601+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.049601+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: cluster 2026-03-09T15:59:40.066491+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: cluster 2026-03-09T15:59:40.066491+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.067668+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.067668+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.068039+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.068039+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.068592+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 
192.168.123.101:0/1244441176' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:40 vm01 bash[28152]: audit 2026-03-09T15:59:40.068592+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 192.168.123.101:0/1244441176' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: cluster 2026-03-09T15:59:38.723701+0000 mgr.y (mgr.14520) 245 : cluster [DBG] pgmap v335: 292 pgs: 3 creating+peering, 29 unknown, 260 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: cluster 2026-03-09T15:59:38.723701+0000 mgr.y (mgr.14520) 245 : cluster [DBG] pgmap v335: 292 pgs: 3 creating+peering, 29 unknown, 260 active+clean; 4.4 MiB data, 802 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:39.080715+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:39.080715+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:39.132692+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:39.132692+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:39.133914+0000 mon.a (mon.0) 2224 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:39.133914+0000 mon.a (mon.0) 2224 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:39.134116+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 
192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:39.134116+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.046250+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.046250+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.046390+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.046390+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm01-59602-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.049601+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.049601+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: cluster 2026-03-09T15:59:40.066491+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: cluster 2026-03-09T15:59:40.066491+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.067668+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? 
192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.067668+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.068039+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.068039+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.068592+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 192.168.123.101:0/1244441176' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:40 vm01 bash[20728]: audit 2026-03-09T15:59:40.068592+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 192.168.123.101:0/1244441176' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: cluster 2026-03-09T15:59:40.724000+0000 mgr.y (mgr.14520) 246 : cluster [DBG] pgmap v338: 324 pgs: 32 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: cluster 2026-03-09T15:59:40.724000+0000 mgr.y (mgr.14520) 246 : cluster [DBG] pgmap v338: 324 pgs: 32 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: audit 2026-03-09T15:59:41.058648+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: audit 2026-03-09T15:59:41.058648+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: audit 2026-03-09T15:59:41.058767+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? 192.168.123.101:0/1244441176' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: audit 2026-03-09T15:59:41.058767+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? 192.168.123.101:0/1244441176' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: audit 2026-03-09T15:59:41.062540+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: audit 2026-03-09T15:59:41.062540+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: cluster 2026-03-09T15:59:41.075743+0000 mon.a (mon.0) 2234 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: cluster 2026-03-09T15:59:41.075743+0000 mon.a (mon.0) 2234 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: audit 2026-03-09T15:59:41.089830+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:41 vm09 bash[22983]: audit 2026-03-09T15:59:41.089830+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: cluster 2026-03-09T15:59:40.724000+0000 mgr.y (mgr.14520) 246 : cluster [DBG] pgmap v338: 324 pgs: 32 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:59:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: cluster 2026-03-09T15:59:40.724000+0000 mgr.y (mgr.14520) 246 : cluster [DBG] pgmap v338: 324 pgs: 32 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:59:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: audit 2026-03-09T15:59:41.058648+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: audit 2026-03-09T15:59:41.058648+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: audit 2026-03-09T15:59:41.058767+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? 192.168.123.101:0/1244441176' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: audit 2026-03-09T15:59:41.058767+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? 192.168.123.101:0/1244441176' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: audit 2026-03-09T15:59:41.062540+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: audit 2026-03-09T15:59:41.062540+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: cluster 2026-03-09T15:59:41.075743+0000 mon.a (mon.0) 2234 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: cluster 2026-03-09T15:59:41.075743+0000 mon.a (mon.0) 2234 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: audit 2026-03-09T15:59:41.089830+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:41 vm01 bash[28152]: audit 2026-03-09T15:59:41.089830+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: cluster 2026-03-09T15:59:40.724000+0000 mgr.y (mgr.14520) 246 : cluster [DBG] pgmap v338: 324 pgs: 32 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: cluster 2026-03-09T15:59:40.724000+0000 mgr.y (mgr.14520) 246 : cluster [DBG] pgmap v338: 324 pgs: 32 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: audit 2026-03-09T15:59:41.058648+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: audit 2026-03-09T15:59:41.058648+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: audit 2026-03-09T15:59:41.058767+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? 192.168.123.101:0/1244441176' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: audit 2026-03-09T15:59:41.058767+0000 mon.a (mon.0) 2233 : audit [INF] from='client.? 192.168.123.101:0/1244441176' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm01-59610-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: audit 2026-03-09T15:59:41.062540+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: audit 2026-03-09T15:59:41.062540+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: cluster 2026-03-09T15:59:41.075743+0000 mon.a (mon.0) 2234 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: cluster 2026-03-09T15:59:41.075743+0000 mon.a (mon.0) 2234 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: audit 2026-03-09T15:59:41.089830+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:41 vm01 bash[20728]: audit 2026-03-09T15:59:41.089830+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: cluster 2026-03-09T15:59:42.058601+0000 mon.a (mon.0) 2236 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: cluster 2026-03-09T15:59:42.058601+0000 mon.a (mon.0) 2236 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.147329+0000 mon.a (mon.0) 2237 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.147329+0000 mon.a (mon.0) 2237 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.147375+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]': finished 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.147375+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]': finished 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: cluster 2026-03-09T15:59:42.151086+0000 mon.a (mon.0) 2239 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: cluster 2026-03-09T15:59:42.151086+0000 mon.a (mon.0) 2239 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.173448+0000 mon.c (mon.2) 249 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.173448+0000 mon.c (mon.2) 249 : audit [INF] from='client.? 
192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.173706+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.173706+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.181112+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.181112+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.181366+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.181366+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.182865+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.182865+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.183091+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:42 vm09 bash[22983]: audit 2026-03-09T15:59:42.183091+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: cluster 2026-03-09T15:59:42.058601+0000 mon.a (mon.0) 2236 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: cluster 2026-03-09T15:59:42.058601+0000 mon.a (mon.0) 2236 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.147329+0000 mon.a (mon.0) 2237 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.147329+0000 mon.a (mon.0) 2237 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.147375+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]': finished 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.147375+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]': finished 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: cluster 2026-03-09T15:59:42.151086+0000 mon.a (mon.0) 2239 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: cluster 2026-03-09T15:59:42.151086+0000 mon.a (mon.0) 2239 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.173448+0000 mon.c (mon.2) 249 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.173448+0000 mon.c (mon.2) 249 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.173706+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.173706+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.181112+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.181112+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.181366+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.181366+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.182865+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.182865+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.183091+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:42 vm01 bash[28152]: audit 2026-03-09T15:59:42.183091+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: cluster 2026-03-09T15:59:42.058601+0000 mon.a (mon.0) 2236 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: cluster 2026-03-09T15:59:42.058601+0000 mon.a (mon.0) 2236 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.147329+0000 mon.a (mon.0) 2237 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.147329+0000 mon.a (mon.0) 2237 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm01-59602-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.147375+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]': finished 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.147375+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-35", "mode": "writeback"}]': finished 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: cluster 2026-03-09T15:59:42.151086+0000 mon.a (mon.0) 2239 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: cluster 2026-03-09T15:59:42.151086+0000 mon.a (mon.0) 2239 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.173448+0000 mon.c (mon.2) 249 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.173448+0000 mon.c (mon.2) 249 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.173706+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.173706+0000 mon.a (mon.0) 2240 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.181112+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.181112+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.181366+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.181366+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.182865+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.182865+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.183091+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:42.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:42 vm01 bash[20728]: audit 2026-03-09T15:59:42.183091+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:43.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:59:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:59:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:59:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:43 vm09 bash[22983]: cluster 2026-03-09T15:59:42.724304+0000 mgr.y (mgr.14520) 247 : cluster [DBG] pgmap v341: 300 pgs: 8 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:43 vm09 bash[22983]: cluster 2026-03-09T15:59:42.724304+0000 mgr.y (mgr.14520) 247 : cluster [DBG] pgmap v341: 300 pgs: 8 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:43 vm09 bash[22983]: audit 2026-03-09T15:59:43.150495+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:43 vm09 bash[22983]: audit 2026-03-09T15:59:43.150495+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:43 vm09 bash[22983]: cluster 2026-03-09T15:59:43.152970+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T15:59:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:43 vm09 bash[22983]: cluster 2026-03-09T15:59:43.152970+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T15:59:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:43 vm09 bash[22983]: audit 2026-03-09T15:59:43.161531+0000 mon.c (mon.2) 252 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:43 vm09 bash[22983]: audit 2026-03-09T15:59:43.161531+0000 mon.c (mon.2) 252 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:43 vm09 bash[22983]: audit 2026-03-09T15:59:43.167373+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:43 vm09 bash[22983]: audit 2026-03-09T15:59:43.167373+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:43 vm01 bash[28152]: cluster 2026-03-09T15:59:42.724304+0000 mgr.y (mgr.14520) 247 : cluster [DBG] pgmap v341: 300 pgs: 8 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:43 vm01 bash[28152]: cluster 2026-03-09T15:59:42.724304+0000 mgr.y (mgr.14520) 247 : cluster [DBG] pgmap v341: 300 pgs: 8 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:43 vm01 bash[28152]: audit 2026-03-09T15:59:43.150495+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:43 vm01 bash[28152]: audit 2026-03-09T15:59:43.150495+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:43 vm01 bash[28152]: cluster 2026-03-09T15:59:43.152970+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T15:59:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:43 vm01 bash[28152]: cluster 2026-03-09T15:59:43.152970+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:43 vm01 bash[28152]: audit 2026-03-09T15:59:43.161531+0000 mon.c (mon.2) 252 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:43 vm01 bash[28152]: audit 2026-03-09T15:59:43.161531+0000 mon.c (mon.2) 252 : audit [INF] from='client.? 
192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:43 vm01 bash[28152]: audit 2026-03-09T15:59:43.167373+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:43 vm01 bash[28152]: audit 2026-03-09T15:59:43.167373+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:43 vm01 bash[20728]: cluster 2026-03-09T15:59:42.724304+0000 mgr.y (mgr.14520) 247 : cluster [DBG] pgmap v341: 300 pgs: 8 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:43 vm01 bash[20728]: cluster 2026-03-09T15:59:42.724304+0000 mgr.y (mgr.14520) 247 : cluster [DBG] pgmap v341: 300 pgs: 8 unknown, 6 creating+activating, 17 creating+peering, 269 active+clean; 4.4 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:43 vm01 bash[20728]: audit 2026-03-09T15:59:43.150495+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:43 vm01 bash[20728]: audit 2026-03-09T15:59:43.150495+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:43 vm01 bash[20728]: cluster 2026-03-09T15:59:43.152970+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:43 vm01 bash[20728]: cluster 2026-03-09T15:59:43.152970+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:43 vm01 bash[20728]: audit 2026-03-09T15:59:43.161531+0000 mon.c (mon.2) 252 : audit [INF] from='client.? 
192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:43 vm01 bash[20728]: audit 2026-03-09T15:59:43.161531+0000 mon.c (mon.2) 252 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:43 vm01 bash[20728]: audit 2026-03-09T15:59:43.167373+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:43 vm01 bash[20728]: audit 2026-03-09T15:59:43.167373+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:44 vm09 bash[22983]: cluster 2026-03-09T15:59:43.817602+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:44 vm09 bash[22983]: cluster 2026-03-09T15:59:43.817602+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:44 vm09 bash[22983]: audit 2026-03-09T15:59:44.139641+0000 mon.a (mon.0) 2247 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:44 vm09 bash[22983]: audit 2026-03-09T15:59:44.139641+0000 mon.a (mon.0) 2247 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:44 vm09 bash[22983]: audit 2026-03-09T15:59:44.140293+0000 mon.a (mon.0) 2248 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:44 vm09 bash[22983]: audit 2026-03-09T15:59:44.140293+0000 mon.a (mon.0) 2248 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:44 vm09 bash[22983]: cluster 2026-03-09T15:59:44.165963+0000 mon.a (mon.0) 2249 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T15:59:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:44 vm09 bash[22983]: cluster 2026-03-09T15:59:44.165963+0000 mon.a (mon.0) 2249 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 
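The audit entries above are the monitor-side record of the pool and cache-tier plumbing that the RADOS API tests drive through librados: an erasure-code profile is (re)created, an erasure-coded pool is built on it, the pool is tagged with the rados application, and a writeback cache tier is overlaid on a base pool, which is what briefly raises the CACHE_POOL_NO_HIT_SET warning seen here. A minimal sketch of the same sequence issued by hand with the ceph CLI, using hypothetical pool and profile names in place of the per-run test names, would look like:

    # erasure-code profile and EC pool, matching the "osd erasure-code-profile set"
    # and "osd pool create ... pool_type erasure" audit entries above
    ceph osd erasure-code-profile set testprofile-example k=2 m=1 crush-failure-domain=osd
    ceph osd pool create example-ec-pool 8 8 erasure testprofile-example
    ceph osd pool application enable example-ec-pool rados --yes-i-really-mean-it

    # cache-tier wiring, matching the "osd tier ..." entries; setting hit_set
    # parameters on the cache pool is what clears CACHE_POOL_NO_HIT_SET
    ceph osd tier add example-base example-cache
    ceph osd tier cache-mode example-cache writeback
    ceph osd tier set-overlay example-base example-cache
    ceph osd pool set example-cache hit_set_type bloom
    ceph osd pool set example-cache hit_set_count 8
    ceph osd pool set example-cache hit_set_period 60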
2026-03-09T15:59:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:44 vm09 bash[22983]: audit 2026-03-09T15:59:44.166753+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:44 vm09 bash[22983]: audit 2026-03-09T15:59:44.166753+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:44 vm01 bash[28152]: cluster 2026-03-09T15:59:43.817602+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:44 vm01 bash[28152]: cluster 2026-03-09T15:59:43.817602+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:44 vm01 bash[28152]: audit 2026-03-09T15:59:44.139641+0000 mon.a (mon.0) 2247 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:44 vm01 bash[28152]: audit 2026-03-09T15:59:44.139641+0000 mon.a (mon.0) 2247 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:44 vm01 bash[28152]: audit 2026-03-09T15:59:44.140293+0000 mon.a (mon.0) 2248 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:44 vm01 bash[28152]: audit 2026-03-09T15:59:44.140293+0000 mon.a (mon.0) 2248 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:44 vm01 bash[28152]: cluster 2026-03-09T15:59:44.165963+0000 mon.a (mon.0) 2249 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:44 vm01 bash[28152]: cluster 2026-03-09T15:59:44.165963+0000 mon.a (mon.0) 2249 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:44 vm01 bash[28152]: audit 2026-03-09T15:59:44.166753+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:44 vm01 bash[28152]: audit 2026-03-09T15:59:44.166753+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 
192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:44 vm01 bash[20728]: cluster 2026-03-09T15:59:43.817602+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:44 vm01 bash[20728]: cluster 2026-03-09T15:59:43.817602+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:44 vm01 bash[20728]: audit 2026-03-09T15:59:44.139641+0000 mon.a (mon.0) 2247 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:44 vm01 bash[20728]: audit 2026-03-09T15:59:44.139641+0000 mon.a (mon.0) 2247 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:44 vm01 bash[20728]: audit 2026-03-09T15:59:44.140293+0000 mon.a (mon.0) 2248 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:44 vm01 bash[20728]: audit 2026-03-09T15:59:44.140293+0000 mon.a (mon.0) 2248 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:44 vm01 bash[20728]: cluster 2026-03-09T15:59:44.165963+0000 mon.a (mon.0) 2249 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:44 vm01 bash[20728]: cluster 2026-03-09T15:59:44.165963+0000 mon.a (mon.0) 2249 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:44 vm01 bash[20728]: audit 2026-03-09T15:59:44.166753+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:44 vm01 bash[20728]: audit 2026-03-09T15:59:44.166753+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:44.220766+0000 mon.c (mon.2) 253 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:44.220766+0000 mon.c (mon.2) 253 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:44.221013+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:44.221013+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: cluster 2026-03-09T15:59:44.724676+0000 mgr.y (mgr.14520) 248 : cluster [DBG] pgmap v344: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: cluster 2026-03-09T15:59:44.724676+0000 mgr.y (mgr.14520) 248 : cluster [DBG] pgmap v344: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:45.184299+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:45.184299+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:45.184447+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:45.184447+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:45.184672+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:45.184672+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: cluster 2026-03-09T15:59:45.195969+0000 mon.a (mon.0) 2255 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: cluster 2026-03-09T15:59:45.195969+0000 mon.a (mon.0) 2255 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:45.199012+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:45 vm09 bash[22983]: audit 2026-03-09T15:59:45.199012+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:44.220766+0000 mon.c (mon.2) 253 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:44.220766+0000 mon.c (mon.2) 253 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:44.221013+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:44.221013+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: cluster 2026-03-09T15:59:44.724676+0000 mgr.y (mgr.14520) 248 : cluster [DBG] pgmap v344: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: cluster 2026-03-09T15:59:44.724676+0000 mgr.y (mgr.14520) 248 : cluster [DBG] pgmap v344: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:45.184299+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:45.184299+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:45.184447+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:45.184447+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:45.184672+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:45.184672+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: cluster 2026-03-09T15:59:45.195969+0000 mon.a (mon.0) 2255 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: cluster 2026-03-09T15:59:45.195969+0000 mon.a (mon.0) 2255 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:45.199012+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:45 vm01 bash[28152]: audit 2026-03-09T15:59:45.199012+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:44.220766+0000 mon.c (mon.2) 253 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:44.220766+0000 mon.c (mon.2) 253 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:44.221013+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:44.221013+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: cluster 2026-03-09T15:59:44.724676+0000 mgr.y (mgr.14520) 248 : cluster [DBG] pgmap v344: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: cluster 2026-03-09T15:59:44.724676+0000 mgr.y (mgr.14520) 248 : cluster [DBG] pgmap v344: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:45.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:45.184299+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:45.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:45.184299+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm01-59610-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:45.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:45.184447+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:45.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:45.184447+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:45.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:45.184672+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:45.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:45.184672+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:45.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: cluster 2026-03-09T15:59:45.195969+0000 mon.a (mon.0) 2255 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T15:59:45.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: cluster 2026-03-09T15:59:45.195969+0000 mon.a (mon.0) 2255 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T15:59:45.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:45.199012+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:45.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:45 vm01 bash[20728]: audit 2026-03-09T15:59:45.199012+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]: dispatch 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleWrite (7161 ms) 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.WaitForComplete 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.WaitForComplete (7086 ms) 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip (7064 ms) 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip2 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip2 (6212 ms) 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripAppend 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripAppend (6922 ms) 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.IsComplete 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.IsComplete (7035 ms) 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.IsSafe 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.IsSafe (7049 ms) 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.ReturnValue 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.ReturnValue (6951 ms) 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.Flush 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.Flush (7127 ms) 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.FlushAsync 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.FlushAsync (7121 ms) 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripWriteFull 2026-03-09T15:59:46.213 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripWriteFull (7086 ms) 
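[annotation] The api_aio lines in this workunit output are the librados AIO gtest suite (here the LibRadosAioEC cases) doing asynchronous write/read round trips against a temporary erasure-coded pool; the surrounding mon audit records ("osd erasure-code-profile set", "osd pool create ... pool_type erasure", "osd crush rule rm") are that suite creating and tearing down its per-test pools. Purely as an illustration of the C API being exercised, and not the test binary's actual source, a minimal AIO round trip could look like the sketch below; the pool name "ecpool" and object name "obj" are placeholders, and a reachable cluster with a default ceph.conf/keyring is assumed.

    /* Minimal sketch (not the test source): one librados AIO write plus a read back.
     * Assumptions: pool "ecpool" already exists, default ceph.conf/keyring is readable.
     * Build (roughly): cc roundtrip.c -lrados */
    #include <rados/librados.h>
    #include <stdio.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        rados_completion_t comp;
        char buf[] = "hello";
        char out[16] = {0};

        if (rados_create(&cluster, NULL) < 0) return 1;   /* connect as client.admin */
        rados_conf_read_file(cluster, NULL);               /* default config search path */
        if (rados_connect(cluster) < 0) return 1;
        if (rados_ioctx_create(cluster, "ecpool", &io) < 0) return 1;

        /* asynchronous write, then block until the OSDs acknowledge it */
        rados_aio_create_completion(NULL, NULL, NULL, &comp);
        rados_aio_write(io, "obj", comp, buf, sizeof(buf), 0);
        rados_aio_wait_for_complete(comp);
        printf("write rc=%d\n", rados_aio_get_return_value(comp));
        rados_aio_release(comp);

        /* synchronous read back to close the round trip */
        int r = rados_read(io, "obj", out, sizeof(buf), 0);
        printf("read rc=%d data=%s\n", r, out);

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }

The real tests differ in detail (they create the erasure-code profile and pool themselves and check many more completion states), but the pattern of create-completion / aio_write / wait_for_complete / get_return_value is the one timed in the per-test durations above. [end annotation]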
2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStat 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleStat (6717 ms) 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStatNS 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleStatNS (6981 ms) 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.StatRemove 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.StatRemove (7149 ms) 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.ExecuteClass 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.ExecuteClass (7056 ms) 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ RUN ] LibRadosAioEC.MultiWrite 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ OK ] LibRadosAioEC.MultiWrite (7128 ms) 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [----------] 16 tests from LibRadosAioEC (111845 ms total) 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [----------] Global test environment tear-down 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [==========] 42 tests from 2 test suites ran. (192195 ms total) 2026-03-09T15:59:46.214 INFO:tasks.workunit.client.0.vm01.stdout: api_aio: [ PASSED ] 42 tests. 2026-03-09T15:59:46.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:59:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:59:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:46 vm09 bash[22983]: audit 2026-03-09T15:59:45.233305+0000 mon.c (mon.2) 254 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:46 vm09 bash[22983]: audit 2026-03-09T15:59:45.233305+0000 mon.c (mon.2) 254 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:46 vm09 bash[22983]: audit 2026-03-09T15:59:45.233762+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:46 vm09 bash[22983]: audit 2026-03-09T15:59:45.233762+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:46 vm09 bash[22983]: audit 2026-03-09T15:59:46.188229+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? 
192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:46 vm09 bash[22983]: audit 2026-03-09T15:59:46.188229+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:46 vm09 bash[22983]: audit 2026-03-09T15:59:46.188325+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:46 vm09 bash[22983]: audit 2026-03-09T15:59:46.188325+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:46 vm09 bash[22983]: cluster 2026-03-09T15:59:46.191786+0000 mon.a (mon.0) 2260 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T15:59:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:46 vm09 bash[22983]: cluster 2026-03-09T15:59:46.191786+0000 mon.a (mon.0) 2260 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:46 vm01 bash[28152]: audit 2026-03-09T15:59:45.233305+0000 mon.c (mon.2) 254 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:46 vm01 bash[28152]: audit 2026-03-09T15:59:45.233305+0000 mon.c (mon.2) 254 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:46 vm01 bash[28152]: audit 2026-03-09T15:59:45.233762+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:46 vm01 bash[28152]: audit 2026-03-09T15:59:45.233762+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:46 vm01 bash[28152]: audit 2026-03-09T15:59:46.188229+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:46 vm01 bash[28152]: audit 2026-03-09T15:59:46.188229+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? 
192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:46 vm01 bash[28152]: audit 2026-03-09T15:59:46.188325+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:46 vm01 bash[28152]: audit 2026-03-09T15:59:46.188325+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:46 vm01 bash[28152]: cluster 2026-03-09T15:59:46.191786+0000 mon.a (mon.0) 2260 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:46 vm01 bash[28152]: cluster 2026-03-09T15:59:46.191786+0000 mon.a (mon.0) 2260 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:46 vm01 bash[20728]: audit 2026-03-09T15:59:45.233305+0000 mon.c (mon.2) 254 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:46 vm01 bash[20728]: audit 2026-03-09T15:59:45.233305+0000 mon.c (mon.2) 254 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:46 vm01 bash[20728]: audit 2026-03-09T15:59:45.233762+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:46 vm01 bash[20728]: audit 2026-03-09T15:59:45.233762+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:46 vm01 bash[20728]: audit 2026-03-09T15:59:46.188229+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:46 vm01 bash[20728]: audit 2026-03-09T15:59:46.188229+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? 192.168.123.101:0/3017713838' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm01-59602-42"}]': finished 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:46 vm01 bash[20728]: audit 2026-03-09T15:59:46.188325+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:46 vm01 bash[20728]: audit 2026-03-09T15:59:46.188325+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:46 vm01 bash[20728]: cluster 2026-03-09T15:59:46.191786+0000 mon.a (mon.0) 2260 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T15:59:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:46 vm01 bash[20728]: cluster 2026-03-09T15:59:46.191786+0000 mon.a (mon.0) 2260 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T15:59:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:47 vm09 bash[22983]: audit 2026-03-09T15:59:46.393817+0000 mgr.y (mgr.14520) 249 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:47 vm09 bash[22983]: audit 2026-03-09T15:59:46.393817+0000 mgr.y (mgr.14520) 249 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:47 vm09 bash[22983]: cluster 2026-03-09T15:59:46.725081+0000 mgr.y (mgr.14520) 250 : cluster [DBG] pgmap v347: 300 pgs: 8 unknown, 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:47 vm09 bash[22983]: cluster 2026-03-09T15:59:46.725081+0000 mgr.y (mgr.14520) 250 : cluster [DBG] pgmap v347: 300 pgs: 8 unknown, 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:47 vm09 bash[22983]: cluster 2026-03-09T15:59:47.207384+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T15:59:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:47 vm09 bash[22983]: cluster 2026-03-09T15:59:47.207384+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T15:59:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:47 vm09 bash[22983]: audit 2026-03-09T15:59:47.212431+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:47 vm09 bash[22983]: audit 2026-03-09T15:59:47.212431+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:47 vm09 bash[22983]: audit 2026-03-09T15:59:47.215616+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:47 vm09 bash[22983]: audit 2026-03-09T15:59:47.215616+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:47 vm01 bash[28152]: audit 2026-03-09T15:59:46.393817+0000 mgr.y (mgr.14520) 249 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:47 vm01 bash[28152]: audit 2026-03-09T15:59:46.393817+0000 mgr.y (mgr.14520) 249 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:47 vm01 bash[28152]: cluster 2026-03-09T15:59:46.725081+0000 mgr.y (mgr.14520) 250 : cluster [DBG] pgmap v347: 300 pgs: 8 unknown, 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:47 vm01 bash[28152]: cluster 2026-03-09T15:59:46.725081+0000 mgr.y (mgr.14520) 250 : cluster [DBG] pgmap v347: 300 pgs: 8 unknown, 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:47 vm01 bash[28152]: cluster 2026-03-09T15:59:47.207384+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:47 vm01 bash[28152]: cluster 2026-03-09T15:59:47.207384+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:47 vm01 bash[28152]: audit 2026-03-09T15:59:47.212431+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:47 vm01 bash[28152]: audit 2026-03-09T15:59:47.212431+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:47 vm01 bash[28152]: audit 2026-03-09T15:59:47.215616+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:47 vm01 bash[28152]: audit 2026-03-09T15:59:47.215616+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:47 vm01 bash[20728]: audit 2026-03-09T15:59:46.393817+0000 mgr.y (mgr.14520) 249 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:47 vm01 bash[20728]: audit 2026-03-09T15:59:46.393817+0000 mgr.y (mgr.14520) 249 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:47 vm01 bash[20728]: cluster 2026-03-09T15:59:46.725081+0000 mgr.y (mgr.14520) 250 : cluster [DBG] pgmap v347: 300 pgs: 8 unknown, 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:47 vm01 bash[20728]: cluster 2026-03-09T15:59:46.725081+0000 mgr.y (mgr.14520) 250 : cluster [DBG] pgmap v347: 300 pgs: 8 unknown, 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:47 vm01 bash[20728]: cluster 2026-03-09T15:59:47.207384+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:47 vm01 bash[20728]: cluster 2026-03-09T15:59:47.207384+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:47 vm01 bash[20728]: audit 2026-03-09T15:59:47.212431+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:47 vm01 bash[20728]: audit 2026-03-09T15:59:47.212431+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:47 vm01 bash[20728]: audit 2026-03-09T15:59:47.215616+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:47 vm01 bash[20728]: audit 2026-03-09T15:59:47.215616+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:47.260373+0000 mon.c (mon.2) 256 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:47.260373+0000 mon.c (mon.2) 256 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:47.260617+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:47.260617+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.204362+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.204362+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.204445+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.204445+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: cluster 2026-03-09T15:59:48.207122+0000 mon.a (mon.0) 2266 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: cluster 2026-03-09T15:59:48.207122+0000 mon.a (mon.0) 2266 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.213182+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.213182+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 
192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.213533+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.213533+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.213706+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.213706+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.213925+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:48 vm09 bash[22983]: audit 2026-03-09T15:59:48.213925+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:47.260373+0000 mon.c (mon.2) 256 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:47.260373+0000 mon.c (mon.2) 256 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:47.260617+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:47.260617+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.204362+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.204362+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.204445+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.204445+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: cluster 2026-03-09T15:59:48.207122+0000 mon.a (mon.0) 2266 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: cluster 2026-03-09T15:59:48.207122+0000 mon.a (mon.0) 2266 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.213182+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.213182+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.213533+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.213533+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.213706+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.213706+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.213925+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:48 vm01 bash[28152]: audit 2026-03-09T15:59:48.213925+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:47.260373+0000 mon.c (mon.2) 256 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:47.260373+0000 mon.c (mon.2) 256 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:47.260617+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:47.260617+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.204362+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.204362+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.204445+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.204445+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: cluster 2026-03-09T15:59:48.207122+0000 mon.a (mon.0) 2266 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: cluster 2026-03-09T15:59:48.207122+0000 mon.a (mon.0) 2266 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.213182+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.213182+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 192.168.123.101:0/568445949' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.213533+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.213533+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.213706+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.213706+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.213925+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:48 vm01 bash[20728]: audit 2026-03-09T15:59:48.213925+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: cluster 2026-03-09T15:59:48.725676+0000 mgr.y (mgr.14520) 251 : cluster [DBG] pgmap v350: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: cluster 2026-03-09T15:59:48.725676+0000 mgr.y (mgr.14520) 251 : cluster [DBG] pgmap v350: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: cluster 2026-03-09T15:59:49.204817+0000 mon.a (mon.0) 2269 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: cluster 2026-03-09T15:59:49.204817+0000 mon.a (mon.0) 2269 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.209370+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.209370+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.209496+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.209496+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: cluster 2026-03-09T15:59:49.213654+0000 mon.a (mon.0) 2272 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: cluster 2026-03-09T15:59:49.213654+0000 mon.a (mon.0) 2272 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.223973+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.223973+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 
192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.225072+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.225072+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.226048+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.226048+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.228107+0000 mon.a (mon.0) 2273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.228107+0000 mon.a (mon.0) 2273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.229206+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.229206+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.229897+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:49 vm09 bash[22983]: audit 2026-03-09T15:59:49.229897+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: cluster 2026-03-09T15:59:48.725676+0000 mgr.y (mgr.14520) 251 : cluster [DBG] pgmap v350: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: cluster 2026-03-09T15:59:48.725676+0000 mgr.y (mgr.14520) 251 : cluster [DBG] pgmap v350: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: cluster 2026-03-09T15:59:49.204817+0000 mon.a (mon.0) 2269 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: cluster 2026-03-09T15:59:49.204817+0000 mon.a (mon.0) 2269 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.209370+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.209370+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.209496+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.209496+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: cluster 2026-03-09T15:59:49.213654+0000 mon.a (mon.0) 2272 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: cluster 2026-03-09T15:59:49.213654+0000 mon.a (mon.0) 2272 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.223973+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.223973+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 
192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.225072+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.225072+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.226048+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.226048+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.228107+0000 mon.a (mon.0) 2273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.228107+0000 mon.a (mon.0) 2273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.229206+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.229206+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.229897+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:49 vm01 bash[28152]: audit 2026-03-09T15:59:49.229897+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: cluster 2026-03-09T15:59:48.725676+0000 mgr.y (mgr.14520) 251 : cluster [DBG] pgmap v350: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: cluster 2026-03-09T15:59:48.725676+0000 mgr.y (mgr.14520) 251 : cluster [DBG] pgmap v350: 292 pgs: 292 active+clean; 4.4 MiB data, 796 MiB used, 159 GiB / 160 GiB avail 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: cluster 2026-03-09T15:59:49.204817+0000 mon.a (mon.0) 2269 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: cluster 2026-03-09T15:59:49.204817+0000 mon.a (mon.0) 2269 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.209370+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.209370+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm01-59610-52"}]': finished 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.209496+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:49.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.209496+0000 mon.a (mon.0) 2271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-35"}]': finished 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: cluster 2026-03-09T15:59:49.213654+0000 mon.a (mon.0) 2272 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: cluster 2026-03-09T15:59:49.213654+0000 mon.a (mon.0) 2272 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.223973+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.223973+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 
192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.225072+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.225072+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.226048+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.226048+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.228107+0000 mon.a (mon.0) 2273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.228107+0000 mon.a (mon.0) 2273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.229206+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.229206+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.229897+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:49.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:49 vm01 bash[20728]: audit 2026-03-09T15:59:49.229897+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:50 vm09 bash[22983]: cluster 2026-03-09T15:59:49.241938+0000 mon.a (mon.0) 2276 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:50 vm09 bash[22983]: cluster 2026-03-09T15:59:49.241938+0000 mon.a (mon.0) 2276 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:50 vm01 bash[28152]: cluster 2026-03-09T15:59:49.241938+0000 mon.a (mon.0) 2276 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:50.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:50 vm01 bash[28152]: cluster 2026-03-09T15:59:49.241938+0000 mon.a (mon.0) 2276 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:50.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:50 vm01 bash[20728]: cluster 2026-03-09T15:59:49.241938+0000 mon.a (mon.0) 2276 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:50.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:50 vm01 bash[20728]: cluster 2026-03-09T15:59:49.241938+0000 mon.a (mon.0) 2276 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:51 vm01 bash[28152]: audit 2026-03-09T15:59:50.358337+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:51 vm01 bash[28152]: audit 2026-03-09T15:59:50.358337+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:51 vm01 bash[28152]: cluster 2026-03-09T15:59:50.361964+0000 mon.a (mon.0) 2278 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:51 vm01 bash[28152]: cluster 2026-03-09T15:59:50.361964+0000 mon.a (mon.0) 2278 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:51 vm01 bash[28152]: audit 2026-03-09T15:59:50.364588+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:51 vm01 bash[28152]: audit 2026-03-09T15:59:50.364588+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 
192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:51 vm01 bash[28152]: audit 2026-03-09T15:59:50.369871+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:51 vm01 bash[28152]: audit 2026-03-09T15:59:50.369871+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:51 vm01 bash[28152]: cluster 2026-03-09T15:59:50.726099+0000 mgr.y (mgr.14520) 252 : cluster [DBG] pgmap v353: 260 pgs: 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:51 vm01 bash[28152]: cluster 2026-03-09T15:59:50.726099+0000 mgr.y (mgr.14520) 252 : cluster [DBG] pgmap v353: 260 pgs: 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:51 vm01 bash[20728]: audit 2026-03-09T15:59:50.358337+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:51 vm01 bash[20728]: audit 2026-03-09T15:59:50.358337+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:51 vm01 bash[20728]: cluster 2026-03-09T15:59:50.361964+0000 mon.a (mon.0) 2278 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:51 vm01 bash[20728]: cluster 2026-03-09T15:59:50.361964+0000 mon.a (mon.0) 2278 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:51 vm01 bash[20728]: audit 2026-03-09T15:59:50.364588+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:51 vm01 bash[20728]: audit 2026-03-09T15:59:50.364588+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 
192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:51 vm01 bash[20728]: audit 2026-03-09T15:59:50.369871+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:51 vm01 bash[20728]: audit 2026-03-09T15:59:50.369871+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:51 vm01 bash[20728]: cluster 2026-03-09T15:59:50.726099+0000 mgr.y (mgr.14520) 252 : cluster [DBG] pgmap v353: 260 pgs: 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:51 vm01 bash[20728]: cluster 2026-03-09T15:59:50.726099+0000 mgr.y (mgr.14520) 252 : cluster [DBG] pgmap v353: 260 pgs: 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:51 vm09 bash[22983]: audit 2026-03-09T15:59:50.358337+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:51 vm09 bash[22983]: audit 2026-03-09T15:59:50.358337+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm01-59610-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:51 vm09 bash[22983]: cluster 2026-03-09T15:59:50.361964+0000 mon.a (mon.0) 2278 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T15:59:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:51 vm09 bash[22983]: cluster 2026-03-09T15:59:50.361964+0000 mon.a (mon.0) 2278 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T15:59:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:51 vm09 bash[22983]: audit 2026-03-09T15:59:50.364588+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:51 vm09 bash[22983]: audit 2026-03-09T15:59:50.364588+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 
192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:51 vm09 bash[22983]: audit 2026-03-09T15:59:50.369871+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:51 vm09 bash[22983]: audit 2026-03-09T15:59:50.369871+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:51 vm09 bash[22983]: cluster 2026-03-09T15:59:50.726099+0000 mgr.y (mgr.14520) 252 : cluster [DBG] pgmap v353: 260 pgs: 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:51 vm09 bash[22983]: cluster 2026-03-09T15:59:50.726099+0000 mgr.y (mgr.14520) 252 : cluster [DBG] pgmap v353: 260 pgs: 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: cluster 2026-03-09T15:59:51.388202+0000 mon.a (mon.0) 2280 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: cluster 2026-03-09T15:59:51.388202+0000 mon.a (mon.0) 2280 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:51.388830+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:51.388830+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:51.395108+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:51.395108+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:52.365666+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:52.365666+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:52.365738+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:52.365738+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: cluster 2026-03-09T15:59:52.388522+0000 mon.a (mon.0) 2284 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: cluster 2026-03-09T15:59:52.388522+0000 mon.a (mon.0) 2284 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:52.389240+0000 mon.c (mon.2) 260 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:52.389240+0000 mon.c (mon.2) 260 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:52.400849+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:52 vm01 bash[28152]: audit 2026-03-09T15:59:52.400849+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: cluster 2026-03-09T15:59:51.388202+0000 mon.a (mon.0) 2280 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: cluster 2026-03-09T15:59:51.388202+0000 mon.a (mon.0) 2280 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:51.388830+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:51.388830+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:51.395108+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:51.395108+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:52.365666+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:52.365666+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:52.365738+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:52.365738+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: cluster 2026-03-09T15:59:52.388522+0000 mon.a (mon.0) 2284 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: cluster 2026-03-09T15:59:52.388522+0000 mon.a (mon.0) 2284 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:52.389240+0000 mon.c (mon.2) 260 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:52.389240+0000 mon.c (mon.2) 260 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:52.400849+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:52 vm01 bash[20728]: audit 2026-03-09T15:59:52.400849+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: cluster 2026-03-09T15:59:51.388202+0000 mon.a (mon.0) 2280 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: cluster 2026-03-09T15:59:51.388202+0000 mon.a (mon.0) 2280 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:51.388830+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:51.388830+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:51.395108+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:51.395108+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:52.365666+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:52.365666+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm01-59610-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:52.365738+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:52.365738+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: cluster 2026-03-09T15:59:52.388522+0000 mon.a (mon.0) 2284 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: cluster 2026-03-09T15:59:52.388522+0000 mon.a (mon.0) 2284 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:52.389240+0000 mon.c (mon.2) 260 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:52.389240+0000 mon.c (mon.2) 260 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:52.400849+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:52 vm09 bash[22983]: audit 2026-03-09T15:59:52.400849+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T15:59:53.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 15:59:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:15:59:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:53 vm01 bash[28152]: cluster 2026-03-09T15:59:52.726546+0000 mgr.y (mgr.14520) 253 : cluster [DBG] pgmap v356: 300 pgs: 40 unknown, 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:53 vm01 bash[28152]: cluster 2026-03-09T15:59:52.726546+0000 mgr.y (mgr.14520) 253 : cluster [DBG] pgmap v356: 300 pgs: 40 unknown, 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:53 vm01 bash[28152]: audit 2026-03-09T15:59:53.369722+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:53 vm01 bash[28152]: audit 2026-03-09T15:59:53.369722+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:53 vm01 bash[28152]: cluster 2026-03-09T15:59:53.379359+0000 mon.a (mon.0) 2287 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:53 vm01 bash[28152]: cluster 2026-03-09T15:59:53.379359+0000 mon.a (mon.0) 2287 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:53 vm01 bash[28152]: audit 2026-03-09T15:59:53.403504+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:53 vm01 bash[28152]: audit 2026-03-09T15:59:53.403504+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:53 vm01 bash[28152]: audit 2026-03-09T15:59:53.404390+0000 mon.a (mon.0) 2288 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:53 vm01 bash[28152]: audit 2026-03-09T15:59:53.404390+0000 mon.a (mon.0) 2288 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:53 vm01 bash[20728]: cluster 2026-03-09T15:59:52.726546+0000 mgr.y (mgr.14520) 253 : cluster [DBG] pgmap v356: 300 pgs: 40 unknown, 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:53 vm01 bash[20728]: cluster 2026-03-09T15:59:52.726546+0000 mgr.y (mgr.14520) 253 : cluster [DBG] pgmap v356: 300 pgs: 40 unknown, 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:53 vm01 bash[20728]: audit 2026-03-09T15:59:53.369722+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:53 vm01 bash[20728]: audit 2026-03-09T15:59:53.369722+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:53 vm01 bash[20728]: cluster 2026-03-09T15:59:53.379359+0000 mon.a (mon.0) 2287 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:53 vm01 bash[20728]: cluster 2026-03-09T15:59:53.379359+0000 mon.a (mon.0) 2287 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:53 vm01 bash[20728]: audit 2026-03-09T15:59:53.403504+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:53 vm01 bash[20728]: audit 2026-03-09T15:59:53.403504+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:53 vm01 bash[20728]: audit 2026-03-09T15:59:53.404390+0000 mon.a (mon.0) 2288 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:53 vm01 bash[20728]: audit 2026-03-09T15:59:53.404390+0000 mon.a (mon.0) 2288 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:53 vm09 bash[22983]: cluster 2026-03-09T15:59:52.726546+0000 mgr.y (mgr.14520) 253 : cluster [DBG] pgmap v356: 300 pgs: 40 unknown, 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:53 vm09 bash[22983]: cluster 2026-03-09T15:59:52.726546+0000 mgr.y (mgr.14520) 253 : cluster [DBG] pgmap v356: 300 pgs: 40 unknown, 260 active+clean; 4.4 MiB data, 780 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T15:59:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:53 vm09 bash[22983]: audit 2026-03-09T15:59:53.369722+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:53 vm09 bash[22983]: audit 2026-03-09T15:59:53.369722+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T15:59:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:53 vm09 bash[22983]: cluster 2026-03-09T15:59:53.379359+0000 mon.a (mon.0) 2287 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T15:59:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:53 vm09 bash[22983]: cluster 2026-03-09T15:59:53.379359+0000 mon.a (mon.0) 2287 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T15:59:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:53 vm09 bash[22983]: audit 2026-03-09T15:59:53.403504+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:53 vm09 bash[22983]: audit 2026-03-09T15:59:53.403504+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:53 vm09 bash[22983]: audit 2026-03-09T15:59:53.404390+0000 mon.a (mon.0) 2288 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:53 vm09 bash[22983]: audit 2026-03-09T15:59:53.404390+0000 mon.a (mon.0) 2288 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: audit 2026-03-09T15:59:53.822947+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: audit 2026-03-09T15:59:53.822947+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: audit 2026-03-09T15:59:53.832467+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: audit 2026-03-09T15:59:53.832467+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: cluster 2026-03-09T15:59:53.834898+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: cluster 2026-03-09T15:59:53.834898+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: audit 2026-03-09T15:59:53.836546+0000 mon.c (mon.2) 262 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: audit 2026-03-09T15:59:53.836546+0000 mon.c (mon.2) 262 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: audit 2026-03-09T15:59:53.837096+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: audit 2026-03-09T15:59:53.837096+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: audit 2026-03-09T15:59:53.837185+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:54 vm09 bash[22983]: audit 2026-03-09T15:59:53.837185+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: audit 2026-03-09T15:59:53.822947+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: audit 2026-03-09T15:59:53.822947+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: audit 2026-03-09T15:59:53.832467+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: audit 2026-03-09T15:59:53.832467+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: cluster 2026-03-09T15:59:53.834898+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: cluster 2026-03-09T15:59:53.834898+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: audit 2026-03-09T15:59:53.836546+0000 mon.c (mon.2) 262 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: audit 2026-03-09T15:59:53.836546+0000 mon.c (mon.2) 262 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: audit 2026-03-09T15:59:53.837096+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: audit 2026-03-09T15:59:53.837096+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: audit 2026-03-09T15:59:53.837185+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:54 vm01 bash[28152]: audit 2026-03-09T15:59:53.837185+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: audit 2026-03-09T15:59:53.822947+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: audit 2026-03-09T15:59:53.822947+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: audit 2026-03-09T15:59:53.832467+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: audit 2026-03-09T15:59:53.832467+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: cluster 2026-03-09T15:59:53.834898+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: cluster 2026-03-09T15:59:53.834898+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: audit 2026-03-09T15:59:53.836546+0000 mon.c (mon.2) 262 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: audit 2026-03-09T15:59:53.836546+0000 mon.c (mon.2) 262 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: audit 2026-03-09T15:59:53.837096+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: audit 2026-03-09T15:59:53.837096+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: audit 2026-03-09T15:59:53.837185+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:54 vm01 bash[20728]: audit 2026-03-09T15:59:53.837185+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]: dispatch 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: cluster 2026-03-09T15:59:54.727240+0000 mgr.y (mgr.14520) 254 : cluster [DBG] pgmap v359: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: cluster 2026-03-09T15:59:54.727240+0000 mgr.y (mgr.14520) 254 : cluster [DBG] pgmap v359: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: cluster 2026-03-09T15:59:54.823141+0000 mon.a (mon.0) 2293 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: cluster 2026-03-09T15:59:54.823141+0000 mon.a (mon.0) 2293 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: cluster 2026-03-09T15:59:54.824000+0000 mon.a (mon.0) 2294 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: cluster 2026-03-09T15:59:54.824000+0000 mon.a (mon.0) 2294 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.851532+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.851532+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.851715+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]': finished 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.851715+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]': finished 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.857155+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.857155+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: cluster 2026-03-09T15:59:54.861374+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: cluster 2026-03-09T15:59:54.861374+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.865494+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.865494+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.916849+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.916849+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.917077+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:54.917077+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:55.855894+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:55.855894+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:55.856074+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:55.856074+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:55.860745+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: audit 2026-03-09T15:59:55.860745+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: cluster 2026-03-09T15:59:55.868738+0000 mon.a (mon.0) 2302 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T15:59:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:55 vm09 bash[22983]: cluster 2026-03-09T15:59:55.868738+0000 mon.a (mon.0) 2302 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: cluster 2026-03-09T15:59:54.727240+0000 mgr.y (mgr.14520) 254 : cluster [DBG] pgmap v359: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: cluster 2026-03-09T15:59:54.727240+0000 mgr.y (mgr.14520) 254 : cluster [DBG] pgmap v359: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: cluster 2026-03-09T15:59:54.823141+0000 mon.a (mon.0) 2293 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: cluster 2026-03-09T15:59:54.823141+0000 mon.a (mon.0) 2293 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: cluster 2026-03-09T15:59:54.824000+0000 mon.a (mon.0) 2294 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: cluster 2026-03-09T15:59:54.824000+0000 mon.a (mon.0) 2294 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.851532+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.851532+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.851715+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]': finished 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.851715+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]': finished 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.857155+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.857155+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: cluster 2026-03-09T15:59:54.861374+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: cluster 2026-03-09T15:59:54.861374+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.865494+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.865494+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.916849+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.916849+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.917077+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:54.917077+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:55.855894+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:55.855894+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:55.856074+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:55.856074+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:55.860745+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: audit 2026-03-09T15:59:55.860745+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: cluster 2026-03-09T15:59:55.868738+0000 mon.a (mon.0) 2302 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T15:59:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:55 vm01 bash[28152]: cluster 2026-03-09T15:59:55.868738+0000 mon.a (mon.0) 2302 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: cluster 2026-03-09T15:59:54.727240+0000 mgr.y (mgr.14520) 254 : cluster [DBG] pgmap v359: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: cluster 2026-03-09T15:59:54.727240+0000 mgr.y (mgr.14520) 254 : cluster [DBG] pgmap v359: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: cluster 2026-03-09T15:59:54.823141+0000 mon.a (mon.0) 2293 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: cluster 2026-03-09T15:59:54.823141+0000 mon.a (mon.0) 2293 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: cluster 2026-03-09T15:59:54.824000+0000 mon.a (mon.0) 2294 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: cluster 2026-03-09T15:59:54.824000+0000 mon.a (mon.0) 2294 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: 
audit 2026-03-09T15:59:54.851532+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:54.851532+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:54.851715+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]': finished 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:54.851715+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-37", "mode": "writeback"}]': finished 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:54.857155+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:54.857155+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.101:0/853810639' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: cluster 2026-03-09T15:59:54.861374+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: cluster 2026-03-09T15:59:54.861374+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:54.865494+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:54.865494+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]: dispatch 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:54.916849+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:54.916849+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:54.917077+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:54.917077+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:55.855894+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:55.855894+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm01-59610-53"}]': finished 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:55.856074+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:55.856074+0000 mon.a (mon.0) 2301 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:55.860745+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: audit 2026-03-09T15:59:55.860745+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: cluster 2026-03-09T15:59:55.868738+0000 mon.a (mon.0) 2302 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T15:59:56.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:55 vm01 bash[20728]: cluster 2026-03-09T15:59:55.868738+0000 mon.a (mon.0) 2302 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T15:59:56.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 15:59:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.869675+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.869675+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.896494+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.896494+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.897041+0000 mon.a (mon.0) 2304 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.897041+0000 mon.a (mon.0) 2304 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.898572+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.898572+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.898818+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.898818+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.899152+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 
192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.899152+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.899423+0000 mon.a (mon.0) 2306 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:55.899423+0000 mon.a (mon.0) 2306 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: cluster 2026-03-09T15:59:56.855806+0000 mon.a (mon.0) 2307 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: cluster 2026-03-09T15:59:56.855806+0000 mon.a (mon.0) 2307 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:56.859202+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:56.859202+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:56.859268+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:56.859268+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: cluster 2026-03-09T15:59:56.862392+0000 mon.a (mon.0) 2310 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: cluster 2026-03-09T15:59:56.862392+0000 mon.a (mon.0) 2310 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:56.865170+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:56.865170+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:56.870485+0000 mon.a (mon.0) 2311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:56 vm09 bash[22983]: audit 2026-03-09T15:59:56.870485+0000 mon.a (mon.0) 2311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.869675+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.869675+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.896494+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.896494+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 
192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.897041+0000 mon.a (mon.0) 2304 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.897041+0000 mon.a (mon.0) 2304 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.898572+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.898572+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.898818+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.898818+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.899152+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.899152+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.899423+0000 mon.a (mon.0) 2306 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:55.899423+0000 mon.a (mon.0) 2306 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: cluster 2026-03-09T15:59:56.855806+0000 mon.a (mon.0) 2307 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: cluster 2026-03-09T15:59:56.855806+0000 mon.a (mon.0) 2307 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:56.859202+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:56.859202+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:56.859268+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:56.859268+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: cluster 2026-03-09T15:59:56.862392+0000 mon.a (mon.0) 2310 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: cluster 2026-03-09T15:59:56.862392+0000 mon.a (mon.0) 2310 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:56.865170+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:56.865170+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 
192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:56.870485+0000 mon.a (mon.0) 2311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:56 vm01 bash[28152]: audit 2026-03-09T15:59:56.870485+0000 mon.a (mon.0) 2311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.869675+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.869675+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.896494+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.896494+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.897041+0000 mon.a (mon.0) 2304 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.897041+0000 mon.a (mon.0) 2304 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.898572+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.898572+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 
192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.898818+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.898818+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.899152+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.899152+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.899423+0000 mon.a (mon.0) 2306 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:55.899423+0000 mon.a (mon.0) 2306 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: cluster 2026-03-09T15:59:56.855806+0000 mon.a (mon.0) 2307 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:57.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: cluster 2026-03-09T15:59:56.855806+0000 mon.a (mon.0) 2307 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T15:59:57.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:56.859202+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:57.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:56.859202+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-37"}]': finished 2026-03-09T15:59:57.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:56.859268+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:57.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:56.859268+0000 mon.a (mon.0) 2309 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm01-59610-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T15:59:57.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: cluster 2026-03-09T15:59:56.862392+0000 mon.a (mon.0) 2310 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T15:59:57.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: cluster 2026-03-09T15:59:56.862392+0000 mon.a (mon.0) 2310 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T15:59:57.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:56.865170+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:56.865170+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:56.870485+0000 mon.a (mon.0) 2311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:57.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:56 vm01 bash[20728]: audit 2026-03-09T15:59:56.870485+0000 mon.a (mon.0) 2311 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:57 vm01 bash[28152]: audit 2026-03-09T15:59:56.394941+0000 mgr.y (mgr.14520) 255 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:57 vm01 bash[28152]: audit 2026-03-09T15:59:56.394941+0000 mgr.y (mgr.14520) 255 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:57 vm01 bash[28152]: cluster 2026-03-09T15:59:56.727524+0000 mgr.y (mgr.14520) 256 : cluster [DBG] pgmap v362: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:57 vm01 bash[28152]: cluster 2026-03-09T15:59:56.727524+0000 mgr.y (mgr.14520) 256 : cluster [DBG] pgmap v362: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:57 vm01 bash[28152]: cluster 2026-03-09T15:59:57.879821+0000 mon.a (mon.0) 2312 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:57 vm01 bash[28152]: cluster 2026-03-09T15:59:57.879821+0000 mon.a (mon.0) 2312 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:57 vm01 bash[20728]: audit 2026-03-09T15:59:56.394941+0000 mgr.y (mgr.14520) 255 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:57 vm01 bash[20728]: audit 2026-03-09T15:59:56.394941+0000 mgr.y (mgr.14520) 255 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:57 vm01 bash[20728]: cluster 2026-03-09T15:59:56.727524+0000 mgr.y (mgr.14520) 256 : cluster [DBG] pgmap v362: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:57 vm01 bash[20728]: cluster 2026-03-09T15:59:56.727524+0000 mgr.y (mgr.14520) 256 : cluster [DBG] pgmap v362: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:57 vm01 bash[20728]: cluster 2026-03-09T15:59:57.879821+0000 mon.a (mon.0) 2312 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T15:59:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:57 vm01 bash[20728]: cluster 2026-03-09T15:59:57.879821+0000 mon.a (mon.0) 2312 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T15:59:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:57 vm09 bash[22983]: audit 2026-03-09T15:59:56.394941+0000 mgr.y (mgr.14520) 255 : audit [DBG] from='client.14496 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:57 vm09 bash[22983]: audit 2026-03-09T15:59:56.394941+0000 mgr.y (mgr.14520) 255 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T15:59:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:57 vm09 bash[22983]: cluster 2026-03-09T15:59:56.727524+0000 mgr.y (mgr.14520) 256 : cluster [DBG] pgmap v362: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:57 vm09 bash[22983]: cluster 2026-03-09T15:59:56.727524+0000 mgr.y (mgr.14520) 256 : cluster [DBG] pgmap v362: 292 pgs: 292 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:59:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:57 vm09 bash[22983]: cluster 2026-03-09T15:59:57.879821+0000 mon.a (mon.0) 2312 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T15:59:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:57 vm09 bash[22983]: cluster 2026-03-09T15:59:57.879821+0000 mon.a (mon.0) 2312 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T16:00:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: cluster 2026-03-09T15:59:58.727925+0000 mgr.y (mgr.14520) 257 : cluster [DBG] pgmap v365: 260 pgs: 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: cluster 2026-03-09T15:59:58.727925+0000 mgr.y (mgr.14520) 257 : cluster [DBG] pgmap v365: 260 pgs: 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: audit 2026-03-09T15:59:58.867766+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: audit 2026-03-09T15:59:58.867766+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: cluster 2026-03-09T15:59:58.875751+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: cluster 2026-03-09T15:59:58.875751+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: audit 2026-03-09T15:59:58.891515+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: audit 2026-03-09T15:59:58.891515+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: audit 2026-03-09T15:59:58.900163+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: audit 2026-03-09T15:59:58.900163+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: audit 2026-03-09T15:59:59.150322+0000 mon.a (mon.0) 2316 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: audit 2026-03-09T15:59:59.150322+0000 mon.a (mon.0) 2316 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: audit 2026-03-09T15:59:59.151984+0000 mon.a (mon.0) 2317 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 15:59:59 vm01 bash[28152]: audit 2026-03-09T15:59:59.151984+0000 mon.a (mon.0) 2317 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: cluster 2026-03-09T15:59:58.727925+0000 mgr.y (mgr.14520) 257 : cluster [DBG] pgmap v365: 260 pgs: 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: cluster 2026-03-09T15:59:58.727925+0000 mgr.y (mgr.14520) 257 : cluster [DBG] pgmap v365: 260 pgs: 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: audit 2026-03-09T15:59:58.867766+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: audit 2026-03-09T15:59:58.867766+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: cluster 2026-03-09T15:59:58.875751+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: cluster 2026-03-09T15:59:58.875751+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: audit 2026-03-09T15:59:58.891515+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: audit 2026-03-09T15:59:58.891515+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: audit 2026-03-09T15:59:58.900163+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: audit 2026-03-09T15:59:58.900163+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: audit 2026-03-09T15:59:59.150322+0000 mon.a (mon.0) 2316 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: audit 2026-03-09T15:59:59.150322+0000 mon.a (mon.0) 2316 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: audit 2026-03-09T15:59:59.151984+0000 mon.a (mon.0) 2317 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:00.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 15:59:59 vm01 bash[20728]: audit 2026-03-09T15:59:59.151984+0000 mon.a (mon.0) 2317 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: cluster 2026-03-09T15:59:58.727925+0000 mgr.y (mgr.14520) 257 : cluster [DBG] pgmap v365: 260 pgs: 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: cluster 2026-03-09T15:59:58.727925+0000 mgr.y (mgr.14520) 257 : cluster [DBG] pgmap v365: 260 pgs: 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: audit 2026-03-09T15:59:58.867766+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: audit 2026-03-09T15:59:58.867766+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm01-59610-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: cluster 2026-03-09T15:59:58.875751+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: cluster 2026-03-09T15:59:58.875751+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: audit 2026-03-09T15:59:58.891515+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: audit 2026-03-09T15:59:58.891515+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: audit 2026-03-09T15:59:58.900163+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: audit 2026-03-09T15:59:58.900163+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: audit 2026-03-09T15:59:59.150322+0000 mon.a (mon.0) 2316 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: audit 2026-03-09T15:59:59.150322+0000 mon.a (mon.0) 2316 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: audit 2026-03-09T15:59:59.151984+0000 mon.a (mon.0) 2317 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 15:59:59 vm09 bash[22983]: audit 2026-03-09T15:59:59.151984+0000 mon.a (mon.0) 2317 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T15:59:59.873747+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T15:59:59.873747+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T15:59:59.886330+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T15:59:59.886330+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T15:59:59.889437+0000 mon.a (mon.0) 2319 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T15:59:59.889437+0000 mon.a (mon.0) 2319 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T15:59:59.895344+0000 mon.a (mon.0) 2320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T15:59:59.895344+0000 mon.a (mon.0) 2320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000107+0000 mon.a (mon.0) 2321 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000107+0000 mon.a (mon.0) 2321 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000122+0000 mon.a (mon.0) 2322 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000122+0000 mon.a (mon.0) 2322 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000126+0000 mon.a (mon.0) 2323 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000126+0000 mon.a (mon.0) 2323 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000130+0000 mon.a (mon.0) 2324 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000130+0000 mon.a (mon.0) 2324 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000134+0000 mon.a (mon.0) 2325 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:00:01.177 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000134+0000 mon.a (mon.0) 2325 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000137+0000 mon.a (mon.0) 2326 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.000137+0000 mon.a (mon.0) 2326 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T16:00:00.876903+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T16:00:00.876903+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T16:00:00.884445+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T16:00:00.884445+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.884892+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: cluster 2026-03-09T16:00:00.884892+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T16:00:00.885582+0000 mon.c (mon.2) 272 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T16:00:00.885582+0000 mon.c (mon.2) 272 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T16:00:00.886297+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T16:00:00.886297+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T16:00:00.886359+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:00 vm01 bash[28152]: audit 2026-03-09T16:00:00.886359+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T15:59:59.873747+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T15:59:59.873747+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T15:59:59.886330+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T15:59:59.886330+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T15:59:59.889437+0000 mon.a (mon.0) 2319 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T15:59:59.889437+0000 mon.a (mon.0) 2319 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T15:59:59.895344+0000 mon.a (mon.0) 2320 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T15:59:59.895344+0000 mon.a (mon.0) 2320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000107+0000 mon.a (mon.0) 2321 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000107+0000 mon.a (mon.0) 2321 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000122+0000 mon.a (mon.0) 2322 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000122+0000 mon.a (mon.0) 2322 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000126+0000 mon.a (mon.0) 2323 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000126+0000 mon.a (mon.0) 2323 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000130+0000 mon.a (mon.0) 2324 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000130+0000 mon.a (mon.0) 2324 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000134+0000 mon.a (mon.0) 2325 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000134+0000 mon.a (mon.0) 2325 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000137+0000 mon.a (mon.0) 2326 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.000137+0000 mon.a (mon.0) 2326 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T16:00:00.876903+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T16:00:00.876903+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T16:00:00.884445+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T16:00:00.884445+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.884892+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: cluster 2026-03-09T16:00:00.884892+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T16:00:00.885582+0000 mon.c (mon.2) 272 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T16:00:00.885582+0000 mon.c (mon.2) 272 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T16:00:00.886297+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T16:00:00.886297+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T16:00:00.886359+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:01.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:00 vm01 bash[20728]: audit 2026-03-09T16:00:00.886359+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T15:59:59.873747+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T15:59:59.873747+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T15:59:59.886330+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T15:59:59.886330+0000 mon.c (mon.2) 270 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T15:59:59.889437+0000 mon.a (mon.0) 2319 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T15:59:59.889437+0000 mon.a (mon.0) 2319 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T15:59:59.895344+0000 mon.a (mon.0) 2320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T15:59:59.895344+0000 mon.a (mon.0) 2320 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000107+0000 mon.a (mon.0) 2321 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000107+0000 mon.a (mon.0) 2321 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000122+0000 mon.a (mon.0) 2322 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000122+0000 mon.a (mon.0) 2322 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000126+0000 mon.a (mon.0) 2323 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000126+0000 mon.a (mon.0) 2323 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000130+0000 mon.a (mon.0) 2324 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000130+0000 mon.a (mon.0) 2324 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000134+0000 mon.a (mon.0) 2325 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000134+0000 mon.a (mon.0) 2325 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000137+0000 mon.a (mon.0) 2326 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.000137+0000 mon.a (mon.0) 2326 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T16:00:00.876903+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T16:00:00.876903+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T16:00:00.884445+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T16:00:00.884445+0000 mon.c (mon.2) 271 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.884892+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: cluster 2026-03-09T16:00:00.884892+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T16:00:00.885582+0000 mon.c (mon.2) 272 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T16:00:00.885582+0000 mon.c (mon.2) 272 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T16:00:00.886297+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T16:00:00.886297+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T16:00:00.886359+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:00 vm09 bash[22983]: audit 2026-03-09T16:00:00.886359+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:02 vm09 bash[22983]: cluster 2026-03-09T16:00:00.728259+0000 mgr.y (mgr.14520) 258 : cluster [DBG] pgmap v368: 300 pgs: 20 creating+peering, 20 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:02 vm09 bash[22983]: cluster 2026-03-09T16:00:00.728259+0000 mgr.y (mgr.14520) 258 : cluster [DBG] pgmap v368: 300 pgs: 20 creating+peering, 20 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:02 vm09 bash[22983]: cluster 2026-03-09T16:00:00.903597+0000 mon.a (mon.0) 2331 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:02 vm09 bash[22983]: cluster 2026-03-09T16:00:00.903597+0000 mon.a (mon.0) 2331 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:02.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:02 vm01 bash[28152]: cluster 2026-03-09T16:00:00.728259+0000 mgr.y (mgr.14520) 258 : cluster [DBG] pgmap v368: 300 pgs: 20 creating+peering, 20 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:02 vm01 bash[28152]: cluster 2026-03-09T16:00:00.728259+0000 mgr.y (mgr.14520) 258 : cluster [DBG] pgmap v368: 300 pgs: 20 creating+peering, 20 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:02 vm01 bash[28152]: cluster 2026-03-09T16:00:00.903597+0000 mon.a (mon.0) 2331 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:02 vm01 bash[28152]: cluster 2026-03-09T16:00:00.903597+0000 mon.a (mon.0) 2331 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:02 vm01 bash[20728]: cluster 2026-03-09T16:00:00.728259+0000 mgr.y (mgr.14520) 258 : cluster [DBG] pgmap v368: 300 pgs: 20 creating+peering, 20 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:02 vm01 bash[20728]: cluster 2026-03-09T16:00:00.728259+0000 mgr.y (mgr.14520) 258 : cluster [DBG] pgmap v368: 300 pgs: 20 creating+peering, 20 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:02.677 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:02 vm01 bash[20728]: cluster 2026-03-09T16:00:00.903597+0000 mon.a (mon.0) 2331 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:02 vm01 bash[20728]: cluster 2026-03-09T16:00:00.903597+0000 mon.a (mon.0) 2331 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:03.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:00:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:00:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.159409+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.159409+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.159484+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.159484+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: cluster 2026-03-09T16:00:02.180875+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: cluster 2026-03-09T16:00:02.180875+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.181527+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.181527+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.182291+0000 mon.c (mon.2) 274 : audit [INF] from='client.? 
192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.182291+0000 mon.c (mon.2) 274 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.182655+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.182655+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.182857+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: audit 2026-03-09T16:00:02.182857+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: cluster 2026-03-09T16:00:02.728549+0000 mgr.y (mgr.14520) 259 : cluster [DBG] pgmap v371: 292 pgs: 17 creating+peering, 15 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: cluster 2026-03-09T16:00:02.728549+0000 mgr.y (mgr.14520) 259 : cluster [DBG] pgmap v371: 292 pgs: 17 creating+peering, 15 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: cluster 2026-03-09T16:00:03.159690+0000 mon.a (mon.0) 2337 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:03 vm09 bash[22983]: cluster 2026-03-09T16:00:03.159690+0000 mon.a (mon.0) 2337 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.159409+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.159409+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.159484+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.159484+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: cluster 2026-03-09T16:00:02.180875+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: cluster 2026-03-09T16:00:02.180875+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.181527+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.181527+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.182291+0000 mon.c (mon.2) 274 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.182291+0000 mon.c (mon.2) 274 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.182655+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.182655+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.182857+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: audit 2026-03-09T16:00:02.182857+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: cluster 2026-03-09T16:00:02.728549+0000 mgr.y (mgr.14520) 259 : cluster [DBG] pgmap v371: 292 pgs: 17 creating+peering, 15 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: cluster 2026-03-09T16:00:02.728549+0000 mgr.y (mgr.14520) 259 : cluster [DBG] pgmap v371: 292 pgs: 17 creating+peering, 15 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: cluster 2026-03-09T16:00:03.159690+0000 mon.a (mon.0) 2337 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:03 vm01 bash[28152]: cluster 2026-03-09T16:00:03.159690+0000 mon.a (mon.0) 2337 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.159409+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.159409+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.159484+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.159484+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: cluster 2026-03-09T16:00:02.180875+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: cluster 2026-03-09T16:00:02.180875+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.181527+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.181527+0000 mon.c (mon.2) 273 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.182291+0000 mon.c (mon.2) 274 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.182291+0000 mon.c (mon.2) 274 : audit [INF] from='client.? 192.168.123.101:0/2576523928' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.182655+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.182655+0000 mon.a (mon.0) 2335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.182857+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: audit 2026-03-09T16:00:02.182857+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]: dispatch 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: cluster 2026-03-09T16:00:02.728549+0000 mgr.y (mgr.14520) 259 : cluster [DBG] pgmap v371: 292 pgs: 17 creating+peering, 15 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: cluster 2026-03-09T16:00:02.728549+0000 mgr.y (mgr.14520) 259 : cluster [DBG] pgmap v371: 292 pgs: 17 creating+peering, 15 unknown, 260 active+clean; 4.4 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: cluster 2026-03-09T16:00:03.159690+0000 mon.a (mon.0) 2337 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:03 vm01 bash[20728]: cluster 2026-03-09T16:00:03.159690+0000 mon.a (mon.0) 2337 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.250787+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]': finished 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.250787+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]': finished 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.250844+0000 mon.a (mon.0) 2339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.250844+0000 mon.a (mon.0) 2339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: cluster 2026-03-09T16:00:03.255620+0000 mon.a (mon.0) 2340 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: cluster 2026-03-09T16:00:03.255620+0000 mon.a (mon.0) 2340 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.279037+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.279037+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 
192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.296940+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.296940+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.297471+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.297471+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.300021+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.300021+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.300848+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.300848+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.301374+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.301374+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.395001+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.395001+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.395431+0000 mon.a (mon.0) 2344 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:04 vm09 bash[22983]: audit 2026-03-09T16:00:03.395431+0000 mon.a (mon.0) 2344 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.250787+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]': finished 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.250787+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]': finished 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.250844+0000 mon.a (mon.0) 2339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.250844+0000 mon.a (mon.0) 2339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: cluster 2026-03-09T16:00:03.255620+0000 mon.a (mon.0) 2340 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: cluster 2026-03-09T16:00:03.255620+0000 mon.a (mon.0) 2340 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.279037+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 
192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.279037+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.296940+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.296940+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.297471+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.297471+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.300021+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.300021+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.300848+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.300848+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.301374+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.301374+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.395001+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.395001+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.395431+0000 mon.a (mon.0) 2344 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:04 vm01 bash[28152]: audit 2026-03-09T16:00:03.395431+0000 mon.a (mon.0) 2344 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.250787+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]': finished 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.250787+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-39", "mode": "writeback"}]': finished 2026-03-09T16:00:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.250844+0000 mon.a (mon.0) 2339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.250844+0000 mon.a (mon.0) 2339 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm01-59610-54"}]': finished 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: cluster 2026-03-09T16:00:03.255620+0000 mon.a (mon.0) 2340 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: cluster 2026-03-09T16:00:03.255620+0000 mon.a (mon.0) 2340 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.279037+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.279037+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.296940+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.296940+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.297471+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.297471+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.300021+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.300021+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.300848+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.300848+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.301374+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.301374+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.395001+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.395001+0000 mon.c (mon.2) 275 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.395431+0000 mon.a (mon.0) 2344 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:04.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:04 vm01 bash[20728]: audit 2026-03-09T16:00:03.395431+0000 mon.a (mon.0) 2344 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.298853+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.298853+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.299717+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.299717+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.299855+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.299855+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.306669+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.306669+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: cluster 2026-03-09T16:00:04.306879+0000 mon.a (mon.0) 2347 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: cluster 2026-03-09T16:00:04.306879+0000 mon.a (mon.0) 2347 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.307719+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.307719+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.308788+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: audit 2026-03-09T16:00:04.308788+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: cluster 2026-03-09T16:00:04.728898+0000 mgr.y (mgr.14520) 260 : cluster [DBG] pgmap v374: 292 pgs: 292 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 127 op/s 2026-03-09T16:00:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:05 vm09 bash[22983]: cluster 2026-03-09T16:00:04.728898+0000 mgr.y (mgr.14520) 260 : cluster [DBG] pgmap v374: 292 pgs: 292 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 127 op/s 2026-03-09T16:00:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.298853+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.298853+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.299717+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.299717+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.299855+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.299855+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.306669+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.306669+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: cluster 2026-03-09T16:00:04.306879+0000 mon.a (mon.0) 2347 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: cluster 2026-03-09T16:00:04.306879+0000 mon.a (mon.0) 2347 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.307719+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.307719+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.308788+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: audit 2026-03-09T16:00:04.308788+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: cluster 2026-03-09T16:00:04.728898+0000 mgr.y (mgr.14520) 260 : cluster [DBG] pgmap v374: 292 pgs: 292 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 127 op/s 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:05 vm01 bash[28152]: cluster 2026-03-09T16:00:04.728898+0000 mgr.y (mgr.14520) 260 : cluster [DBG] pgmap v374: 292 pgs: 292 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 127 op/s 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.298853+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 
192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.298853+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.299717+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.299717+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm01-59610-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.299855+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.299855+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.306669+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.306669+0000 mon.c (mon.2) 276 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: cluster 2026-03-09T16:00:04.306879+0000 mon.a (mon.0) 2347 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: cluster 2026-03-09T16:00:04.306879+0000 mon.a (mon.0) 2347 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.307719+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.307719+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.308788+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: audit 2026-03-09T16:00:04.308788+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: cluster 2026-03-09T16:00:04.728898+0000 mgr.y (mgr.14520) 260 : cluster [DBG] pgmap v374: 292 pgs: 292 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 127 op/s 2026-03-09T16:00:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:05 vm01 bash[20728]: cluster 2026-03-09T16:00:04.728898+0000 mgr.y (mgr.14520) 260 : cluster [DBG] pgmap v374: 292 pgs: 292 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 127 op/s 2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [==========] Running 77 tests from 4 test suites. 2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [----------] Global test environment set-up. 
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: seed 59821
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.Dirty
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTierPP.Dirty (410 ms)
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.FlushWriteRaces
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTierPP.FlushWriteRaces (12068 ms)
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.HitSetNone
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTierPP.HitSetNone (23 ms)
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP (12502 ms total)
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp:
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Overlay
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Overlay (7227 ms)
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Promote
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Promote (8106 ms)
2026-03-09T16:00:06.336 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnap
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnap (11253 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnapScrub
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: my_snaps [3]
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: my_snaps [4,3]
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: my_snaps [5,4,3]
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: my_snaps [6,5,4,3]
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: promoting some heads
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: promoting from clones for snap 6
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: promoting from clones for snap 5
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: promoting from clones for snap 4
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: promoting from clones for snap 3
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: waiting for scrubs...
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: done waiting
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapScrub (47551 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnapTrimRace
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapTrimRace (10136 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Whiteout
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Whiteout (7213 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate (7922 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Evict
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Evict (8003 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap (9997 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap2
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap2 (9144 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ListSnap
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ListSnap (10192 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace (12897 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlush
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlush (7871 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Flush
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Flush (8107 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushSnap
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushSnap (13326 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushTryFlushRaces
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushTryFlushRaces (7517 ms)
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlushReadRace
2026-03-09T16:00:06.337 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlushReadRace (8457 ms)
2026-03-09T16:00:06.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:00:06 vm09 bash[48403]: 
debug there is no tcmu-runner data available 2026-03-09T16:00:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:06 vm09 bash[22983]: cluster 2026-03-09T16:00:05.301367+0000 mon.a (mon.0) 2350 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:06 vm09 bash[22983]: cluster 2026-03-09T16:00:05.301367+0000 mon.a (mon.0) 2350 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:06 vm09 bash[22983]: audit 2026-03-09T16:00:05.308778+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:06 vm09 bash[22983]: audit 2026-03-09T16:00:05.308778+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:06 vm09 bash[22983]: cluster 2026-03-09T16:00:05.319519+0000 mon.a (mon.0) 2352 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T16:00:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:06 vm09 bash[22983]: cluster 2026-03-09T16:00:05.319519+0000 mon.a (mon.0) 2352 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T16:00:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:06 vm01 bash[20728]: cluster 2026-03-09T16:00:05.301367+0000 mon.a (mon.0) 2350 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:06 vm01 bash[20728]: cluster 2026-03-09T16:00:05.301367+0000 mon.a (mon.0) 2350 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:06 vm01 bash[20728]: audit 2026-03-09T16:00:05.308778+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:06 vm01 bash[20728]: audit 2026-03-09T16:00:05.308778+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:06 vm01 bash[20728]: cluster 2026-03-09T16:00:05.319519+0000 mon.a (mon.0) 2352 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T16:00:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:06 vm01 bash[20728]: cluster 2026-03-09T16:00:05.319519+0000 mon.a (mon.0) 2352 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T16:00:06.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:06 vm01 bash[28152]: cluster 2026-03-09T16:00:05.301367+0000 mon.a (mon.0) 2350 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:06.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:06 vm01 bash[28152]: cluster 2026-03-09T16:00:05.301367+0000 mon.a (mon.0) 2350 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:06.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:06 vm01 bash[28152]: audit 2026-03-09T16:00:05.308778+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:06.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:06 vm01 bash[28152]: audit 2026-03-09T16:00:05.308778+0000 mon.a (mon.0) 2351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-39"}]': finished 2026-03-09T16:00:06.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:06 vm01 bash[28152]: cluster 2026-03-09T16:00:05.319519+0000 mon.a (mon.0) 2352 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T16:00:06.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:06 vm01 bash[28152]: cluster 2026-03-09T16:00:05.319519+0000 mon.a (mon.0) 2352 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T16:00:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:07 vm09 bash[22983]: audit 2026-03-09T16:00:06.319028+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:07 vm09 bash[22983]: audit 2026-03-09T16:00:06.319028+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:07 vm09 bash[22983]: cluster 2026-03-09T16:00:06.322824+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T16:00:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:07 vm09 bash[22983]: cluster 2026-03-09T16:00:06.322824+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T16:00:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:07 vm09 bash[22983]: audit 2026-03-09T16:00:06.399101+0000 mgr.y (mgr.14520) 261 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:07 vm09 bash[22983]: audit 2026-03-09T16:00:06.399101+0000 mgr.y (mgr.14520) 261 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:07 vm09 bash[22983]: cluster 2026-03-09T16:00:06.729281+0000 mgr.y (mgr.14520) 262 : cluster [DBG] pgmap v377: 268 pgs: 8 unknown, 260 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:00:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:07 vm09 bash[22983]: cluster 2026-03-09T16:00:06.729281+0000 mgr.y (mgr.14520) 262 : cluster [DBG] pgmap v377: 268 pgs: 8 unknown, 260 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:00:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:07 vm01 bash[28152]: audit 2026-03-09T16:00:06.319028+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:07 vm01 bash[28152]: audit 2026-03-09T16:00:06.319028+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:07 vm01 bash[28152]: cluster 2026-03-09T16:00:06.322824+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:07 vm01 bash[28152]: cluster 2026-03-09T16:00:06.322824+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:07 vm01 bash[28152]: audit 2026-03-09T16:00:06.399101+0000 mgr.y (mgr.14520) 261 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:07 vm01 bash[28152]: audit 2026-03-09T16:00:06.399101+0000 mgr.y (mgr.14520) 261 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:07 vm01 bash[28152]: cluster 2026-03-09T16:00:06.729281+0000 mgr.y (mgr.14520) 262 : cluster [DBG] pgmap v377: 268 pgs: 8 unknown, 260 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:07 vm01 bash[28152]: cluster 2026-03-09T16:00:06.729281+0000 mgr.y (mgr.14520) 262 : cluster [DBG] pgmap v377: 268 pgs: 8 unknown, 260 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:07 vm01 bash[20728]: audit 2026-03-09T16:00:06.319028+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:07 vm01 bash[20728]: audit 2026-03-09T16:00:06.319028+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm01-59610-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:07 vm01 bash[20728]: cluster 2026-03-09T16:00:06.322824+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:07 vm01 bash[20728]: cluster 2026-03-09T16:00:06.322824+0000 mon.a (mon.0) 2354 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:07 vm01 bash[20728]: audit 2026-03-09T16:00:06.399101+0000 mgr.y (mgr.14520) 261 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:07 vm01 bash[20728]: audit 2026-03-09T16:00:06.399101+0000 mgr.y (mgr.14520) 261 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:07 vm01 bash[20728]: cluster 2026-03-09T16:00:06.729281+0000 mgr.y (mgr.14520) 262 : cluster [DBG] pgmap v377: 268 pgs: 8 unknown, 260 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:00:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:07 vm01 bash[20728]: cluster 2026-03-09T16:00:06.729281+0000 mgr.y (mgr.14520) 262 : cluster [DBG] pgmap v377: 268 pgs: 8 unknown, 260 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:00:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:08 vm01 bash[28152]: cluster 2026-03-09T16:00:07.319121+0000 mon.a (mon.0) 2355 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:08 vm01 bash[28152]: cluster 2026-03-09T16:00:07.319121+0000 mon.a (mon.0) 2355 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:08 vm01 bash[28152]: cluster 2026-03-09T16:00:07.423490+0000 mon.a (mon.0) 2356 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:08 vm01 bash[28152]: cluster 2026-03-09T16:00:07.423490+0000 mon.a (mon.0) 2356 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:08 vm01 bash[28152]: audit 2026-03-09T16:00:07.424717+0000 mon.c (mon.2) 277 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:08 vm01 bash[28152]: audit 2026-03-09T16:00:07.424717+0000 mon.c (mon.2) 277 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:08 vm01 bash[28152]: audit 2026-03-09T16:00:07.428631+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:08 vm01 bash[28152]: audit 2026-03-09T16:00:07.428631+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:08 vm01 bash[20728]: cluster 2026-03-09T16:00:07.319121+0000 mon.a (mon.0) 2355 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:08 vm01 bash[20728]: cluster 2026-03-09T16:00:07.319121+0000 mon.a (mon.0) 2355 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:08 vm01 bash[20728]: cluster 2026-03-09T16:00:07.423490+0000 mon.a (mon.0) 2356 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:08 vm01 bash[20728]: cluster 2026-03-09T16:00:07.423490+0000 mon.a (mon.0) 2356 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:08 vm01 bash[20728]: audit 2026-03-09T16:00:07.424717+0000 mon.c (mon.2) 277 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:08 vm01 bash[20728]: audit 2026-03-09T16:00:07.424717+0000 mon.c (mon.2) 277 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:08 vm01 bash[20728]: audit 2026-03-09T16:00:07.428631+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:08 vm01 bash[20728]: audit 2026-03-09T16:00:07.428631+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:08 vm09 bash[22983]: cluster 2026-03-09T16:00:07.319121+0000 mon.a (mon.0) 2355 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:08 vm09 bash[22983]: cluster 2026-03-09T16:00:07.319121+0000 mon.a (mon.0) 2355 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:08 vm09 bash[22983]: cluster 2026-03-09T16:00:07.423490+0000 mon.a (mon.0) 2356 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T16:00:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:08 vm09 bash[22983]: cluster 2026-03-09T16:00:07.423490+0000 mon.a (mon.0) 2356 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T16:00:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:08 vm09 bash[22983]: audit 2026-03-09T16:00:07.424717+0000 mon.c (mon.2) 277 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:08 vm09 bash[22983]: audit 2026-03-09T16:00:07.424717+0000 mon.c (mon.2) 277 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:08 vm09 bash[22983]: audit 2026-03-09T16:00:07.428631+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:08 vm09 bash[22983]: audit 2026-03-09T16:00:07.428631+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:08.362006+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:08.362006+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: cluster 2026-03-09T16:00:08.376581+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: cluster 2026-03-09T16:00:08.376581+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:08.377885+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:08.377885+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:08.381295+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:08.381295+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:08.386300+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:08.386300+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:08.386522+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:08.386522+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: cluster 2026-03-09T16:00:08.729718+0000 mgr.y (mgr.14520) 263 : cluster [DBG] pgmap v380: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: cluster 2026-03-09T16:00:08.729718+0000 mgr.y (mgr.14520) 263 : cluster [DBG] pgmap v380: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.367097+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.367097+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.367266+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.367266+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.370861+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.370861+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: cluster 2026-03-09T16:00:09.373830+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: cluster 2026-03-09T16:00:09.373830+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.375359+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.375359+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.376483+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.376483+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.378359+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:09 vm01 bash[28152]: audit 2026-03-09T16:00:09.378359+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:08.362006+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:08.362006+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: cluster 2026-03-09T16:00:08.376581+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: cluster 2026-03-09T16:00:08.376581+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:08.377885+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:08.377885+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:08.381295+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:08.381295+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:08.386300+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:08.386300+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:08.386522+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:08.386522+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: cluster 2026-03-09T16:00:08.729718+0000 mgr.y (mgr.14520) 263 : cluster [DBG] pgmap v380: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: cluster 2026-03-09T16:00:08.729718+0000 mgr.y (mgr.14520) 263 : cluster [DBG] pgmap v380: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.367097+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.367097+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.367266+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.367266+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.370861+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.370861+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: cluster 2026-03-09T16:00:09.373830+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: cluster 2026-03-09T16:00:09.373830+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.375359+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.375359+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.376483+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.376483+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.378359+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:09.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:09 vm01 bash[20728]: audit 2026-03-09T16:00:09.378359+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:08.362006+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:08.362006+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: cluster 2026-03-09T16:00:08.376581+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: cluster 2026-03-09T16:00:08.376581+0000 mon.a (mon.0) 2359 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:08.377885+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:08.377885+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:08.381295+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:08.381295+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:08.386300+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:08.386300+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:08.386522+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:08.386522+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: cluster 2026-03-09T16:00:08.729718+0000 mgr.y (mgr.14520) 263 : cluster [DBG] pgmap v380: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: cluster 2026-03-09T16:00:08.729718+0000 mgr.y (mgr.14520) 263 : cluster [DBG] pgmap v380: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 774 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.367097+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.367097+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.367266+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.367266+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.370861+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 
192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.370861+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.101:0/2435495541' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: cluster 2026-03-09T16:00:09.373830+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: cluster 2026-03-09T16:00:09.373830+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.375359+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.375359+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.376483+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.376483+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.378359+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:09.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:09 vm09 bash[22983]: audit 2026-03-09T16:00:09.378359+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.370116+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.370116+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.370183+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.370183+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: cluster 2026-03-09T16:00:10.373231+0000 mon.a (mon.0) 2369 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: cluster 2026-03-09T16:00:10.373231+0000 mon.a (mon.0) 2369 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.386571+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.386571+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.388283+0000 mon.a (mon.0) 2370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.388283+0000 mon.a (mon.0) 2370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.390820+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.390820+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.391453+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.391453+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? 
192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.391654+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: audit 2026-03-09T16:00:10.391654+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: cluster 2026-03-09T16:00:10.730035+0000 mgr.y (mgr.14520) 264 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:11 vm09 bash[22983]: cluster 2026-03-09T16:00:10.730035+0000 mgr.y (mgr.14520) 264 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.370116+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.370116+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.370183+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.370183+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: cluster 2026-03-09T16:00:10.373231+0000 mon.a (mon.0) 2369 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: cluster 2026-03-09T16:00:10.373231+0000 mon.a (mon.0) 2369 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.386571+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.386571+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.388283+0000 mon.a (mon.0) 2370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.388283+0000 mon.a (mon.0) 2370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.390820+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.390820+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.391453+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.391453+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.391654+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: audit 2026-03-09T16:00:10.391654+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? 
192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: cluster 2026-03-09T16:00:10.730035+0000 mgr.y (mgr.14520) 264 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:11 vm01 bash[28152]: cluster 2026-03-09T16:00:10.730035+0000 mgr.y (mgr.14520) 264 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.370116+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.370116+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm01-59610-55"}]': finished 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.370183+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.370183+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: cluster 2026-03-09T16:00:10.373231+0000 mon.a (mon.0) 2369 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: cluster 2026-03-09T16:00:10.373231+0000 mon.a (mon.0) 2369 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.386571+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.386571+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.388283+0000 mon.a (mon.0) 2370 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.388283+0000 mon.a (mon.0) 2370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.390820+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.390820+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.391453+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.391453+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.391654+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: audit 2026-03-09T16:00:10.391654+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: cluster 2026-03-09T16:00:10.730035+0000 mgr.y (mgr.14520) 264 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:11 vm01 bash[20728]: cluster 2026-03-09T16:00:10.730035+0000 mgr.y (mgr.14520) 264 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: audit 2026-03-09T16:00:11.377890+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: audit 2026-03-09T16:00:11.377890+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: audit 2026-03-09T16:00:11.377936+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: audit 2026-03-09T16:00:11.377936+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: cluster 2026-03-09T16:00:11.383346+0000 mon.a (mon.0) 2376 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: cluster 2026-03-09T16:00:11.383346+0000 mon.a (mon.0) 2376 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: audit 2026-03-09T16:00:11.386918+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: audit 2026-03-09T16:00:11.386918+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: audit 2026-03-09T16:00:11.400548+0000 mon.c (mon.2) 281 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: audit 2026-03-09T16:00:11.400548+0000 mon.c (mon.2) 281 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: audit 2026-03-09T16:00:11.400803+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:12 vm01 bash[20728]: audit 2026-03-09T16:00:11.400803+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: audit 2026-03-09T16:00:11.377890+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: audit 2026-03-09T16:00:11.377890+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: audit 2026-03-09T16:00:11.377936+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: audit 2026-03-09T16:00:11.377936+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: cluster 2026-03-09T16:00:11.383346+0000 mon.a (mon.0) 2376 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: cluster 2026-03-09T16:00:11.383346+0000 mon.a (mon.0) 2376 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: audit 2026-03-09T16:00:11.386918+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: audit 2026-03-09T16:00:11.386918+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: audit 2026-03-09T16:00:11.400548+0000 mon.c (mon.2) 281 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: audit 2026-03-09T16:00:11.400548+0000 mon.c (mon.2) 281 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: audit 2026-03-09T16:00:11.400803+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:12.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:12 vm01 bash[28152]: audit 2026-03-09T16:00:11.400803+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: audit 2026-03-09T16:00:11.377890+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: audit 2026-03-09T16:00:11.377890+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: audit 2026-03-09T16:00:11.377936+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: audit 2026-03-09T16:00:11.377936+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm01-59610-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: cluster 2026-03-09T16:00:11.383346+0000 mon.a (mon.0) 2376 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: cluster 2026-03-09T16:00:11.383346+0000 mon.a (mon.0) 2376 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: audit 2026-03-09T16:00:11.386918+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? 
192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: audit 2026-03-09T16:00:11.386918+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: audit 2026-03-09T16:00:11.400548+0000 mon.c (mon.2) 281 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: audit 2026-03-09T16:00:11.400548+0000 mon.c (mon.2) 281 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: audit 2026-03-09T16:00:11.400803+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:12.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:12 vm09 bash[22983]: audit 2026-03-09T16:00:11.400803+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:00:13.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:00:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:00:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:13 vm01 bash[20728]: audit 2026-03-09T16:00:12.381367+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:13 vm01 bash[20728]: audit 2026-03-09T16:00:12.381367+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:13 vm01 bash[20728]: cluster 2026-03-09T16:00:12.388330+0000 mon.a (mon.0) 2380 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:13 vm01 bash[20728]: cluster 2026-03-09T16:00:12.388330+0000 mon.a (mon.0) 2380 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:13 vm01 bash[20728]: cluster 2026-03-09T16:00:12.730462+0000 mgr.y (mgr.14520) 265 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:13 vm01 bash[20728]: cluster 2026-03-09T16:00:12.730462+0000 mgr.y (mgr.14520) 265 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:13 vm01 bash[20728]: audit 2026-03-09T16:00:13.385112+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:13 vm01 bash[20728]: audit 2026-03-09T16:00:13.385112+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:13 vm01 bash[28152]: audit 2026-03-09T16:00:12.381367+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:13 vm01 bash[28152]: audit 2026-03-09T16:00:12.381367+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:13 vm01 bash[28152]: cluster 2026-03-09T16:00:12.388330+0000 mon.a (mon.0) 2380 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:13 vm01 bash[28152]: cluster 2026-03-09T16:00:12.388330+0000 mon.a (mon.0) 2380 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:13 vm01 bash[28152]: cluster 2026-03-09T16:00:12.730462+0000 mgr.y (mgr.14520) 265 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:13 vm01 bash[28152]: cluster 2026-03-09T16:00:12.730462+0000 mgr.y (mgr.14520) 265 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:13 vm01 bash[28152]: audit 2026-03-09T16:00:13.385112+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:13.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:13 vm01 bash[28152]: audit 2026-03-09T16:00:13.385112+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:13 vm09 bash[22983]: audit 2026-03-09T16:00:12.381367+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:00:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:13 vm09 bash[22983]: audit 2026-03-09T16:00:12.381367+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:00:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:13 vm09 bash[22983]: cluster 2026-03-09T16:00:12.388330+0000 mon.a (mon.0) 2380 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T16:00:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:13 vm09 bash[22983]: cluster 2026-03-09T16:00:12.388330+0000 mon.a (mon.0) 2380 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T16:00:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:13 vm09 bash[22983]: cluster 2026-03-09T16:00:12.730462+0000 mgr.y (mgr.14520) 265 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:13 vm09 bash[22983]: cluster 2026-03-09T16:00:12.730462+0000 mgr.y (mgr.14520) 265 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 782 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:13 vm09 bash[22983]: audit 2026-03-09T16:00:13.385112+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:13 vm09 bash[22983]: audit 2026-03-09T16:00:13.385112+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm01-59610-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: cluster 2026-03-09T16:00:13.388877+0000 mon.a (mon.0) 2382 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: cluster 2026-03-09T16:00:13.388877+0000 mon.a (mon.0) 2382 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:13.431450+0000 mon.c (mon.2) 282 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:13.431450+0000 mon.c (mon.2) 282 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:13.431689+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:13.431689+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:13.431982+0000 mon.c (mon.2) 283 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:13.431982+0000 mon.c (mon.2) 283 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:13.432197+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:13.432197+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: cluster 2026-03-09T16:00:13.824021+0000 mon.a (mon.0) 2385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: cluster 2026-03-09T16:00:13.824021+0000 mon.a (mon.0) 2385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:14.158047+0000 mon.a (mon.0) 2386 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:14.158047+0000 mon.a (mon.0) 2386 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:14.388037+0000 mon.a (mon.0) 2387 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]': finished 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: audit 2026-03-09T16:00:14.388037+0000 mon.a (mon.0) 2387 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]': finished 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: cluster 2026-03-09T16:00:14.392436+0000 mon.a (mon.0) 2388 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:14 vm01 bash[20728]: cluster 2026-03-09T16:00:14.392436+0000 mon.a (mon.0) 2388 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: cluster 2026-03-09T16:00:13.388877+0000 mon.a (mon.0) 2382 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: cluster 2026-03-09T16:00:13.388877+0000 mon.a (mon.0) 2382 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:13.431450+0000 mon.c (mon.2) 282 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:13.431450+0000 mon.c (mon.2) 282 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:13.431689+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:13.431689+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:13.431982+0000 mon.c (mon.2) 283 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:13.431982+0000 mon.c (mon.2) 283 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:13.432197+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:13.432197+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: cluster 2026-03-09T16:00:13.824021+0000 mon.a (mon.0) 2385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: cluster 2026-03-09T16:00:13.824021+0000 mon.a (mon.0) 2385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:14.158047+0000 mon.a (mon.0) 2386 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:14.158047+0000 mon.a (mon.0) 2386 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:14.388037+0000 mon.a (mon.0) 2387 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]': finished 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: audit 2026-03-09T16:00:14.388037+0000 mon.a (mon.0) 2387 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]': finished 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: cluster 2026-03-09T16:00:14.392436+0000 mon.a (mon.0) 2388 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T16:00:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:14 vm01 bash[28152]: cluster 2026-03-09T16:00:14.392436+0000 mon.a (mon.0) 2388 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: cluster 2026-03-09T16:00:13.388877+0000 mon.a (mon.0) 2382 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: cluster 2026-03-09T16:00:13.388877+0000 mon.a (mon.0) 2382 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:13.431450+0000 mon.c (mon.2) 282 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:13.431450+0000 mon.c (mon.2) 282 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:13.431689+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:13.431689+0000 mon.a (mon.0) 2383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:13.431982+0000 mon.c (mon.2) 283 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:13.431982+0000 mon.c (mon.2) 283 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:13.432197+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:13.432197+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]: dispatch 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: cluster 2026-03-09T16:00:13.824021+0000 mon.a (mon.0) 2385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: cluster 2026-03-09T16:00:13.824021+0000 mon.a (mon.0) 2385 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:14.158047+0000 mon.a (mon.0) 2386 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:14.158047+0000 mon.a (mon.0) 2386 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:14.388037+0000 mon.a (mon.0) 2387 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]': finished 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: audit 2026-03-09T16:00:14.388037+0000 mon.a (mon.0) 2387 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-41"}]': finished 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: cluster 2026-03-09T16:00:14.392436+0000 mon.a (mon.0) 2388 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T16:00:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:14 vm09 bash[22983]: cluster 2026-03-09T16:00:14.392436+0000 mon.a (mon.0) 2388 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T16:00:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:15 vm01 bash[28152]: cluster 2026-03-09T16:00:14.730970+0000 mgr.y (mgr.14520) 266 : cluster [DBG] pgmap v389: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:15 vm01 bash[28152]: cluster 2026-03-09T16:00:14.730970+0000 mgr.y (mgr.14520) 266 : cluster [DBG] pgmap v389: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:15 vm01 bash[28152]: cluster 2026-03-09T16:00:15.399188+0000 mon.a (mon.0) 2389 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T16:00:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:15 vm01 bash[28152]: cluster 2026-03-09T16:00:15.399188+0000 mon.a (mon.0) 2389 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T16:00:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:15 vm01 bash[28152]: audit 2026-03-09T16:00:15.414170+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:15 vm01 bash[28152]: audit 2026-03-09T16:00:15.414170+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? 
192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:15 vm01 bash[20728]: cluster 2026-03-09T16:00:14.730970+0000 mgr.y (mgr.14520) 266 : cluster [DBG] pgmap v389: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:15 vm01 bash[20728]: cluster 2026-03-09T16:00:14.730970+0000 mgr.y (mgr.14520) 266 : cluster [DBG] pgmap v389: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:15 vm01 bash[20728]: cluster 2026-03-09T16:00:15.399188+0000 mon.a (mon.0) 2389 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T16:00:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:15 vm01 bash[20728]: cluster 2026-03-09T16:00:15.399188+0000 mon.a (mon.0) 2389 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T16:00:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:15 vm01 bash[20728]: audit 2026-03-09T16:00:15.414170+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:15 vm01 bash[20728]: audit 2026-03-09T16:00:15.414170+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:15 vm09 bash[22983]: cluster 2026-03-09T16:00:14.730970+0000 mgr.y (mgr.14520) 266 : cluster [DBG] pgmap v389: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:15 vm09 bash[22983]: cluster 2026-03-09T16:00:14.730970+0000 mgr.y (mgr.14520) 266 : cluster [DBG] pgmap v389: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:15 vm09 bash[22983]: cluster 2026-03-09T16:00:15.399188+0000 mon.a (mon.0) 2389 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T16:00:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:15 vm09 bash[22983]: cluster 2026-03-09T16:00:15.399188+0000 mon.a (mon.0) 2389 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T16:00:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:15 vm09 bash[22983]: audit 2026-03-09T16:00:15.414170+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:15 vm09 bash[22983]: audit 2026-03-09T16:00:15.414170+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? 
192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:16.883 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:00:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: audit 2026-03-09T16:00:16.395336+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: audit 2026-03-09T16:00:16.395336+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: audit 2026-03-09T16:00:16.408156+0000 mgr.y (mgr.14520) 267 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: audit 2026-03-09T16:00:16.408156+0000 mgr.y (mgr.14520) 267 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: cluster 2026-03-09T16:00:16.408804+0000 mon.a (mon.0) 2392 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: cluster 2026-03-09T16:00:16.408804+0000 mon.a (mon.0) 2392 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: audit 2026-03-09T16:00:16.415042+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: audit 2026-03-09T16:00:16.415042+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: audit 2026-03-09T16:00:16.418656+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: audit 2026-03-09T16:00:16.418656+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 
192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: audit 2026-03-09T16:00:16.424078+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: audit 2026-03-09T16:00:16.424078+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: cluster 2026-03-09T16:00:16.731377+0000 mgr.y (mgr.14520) 268 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:17 vm01 bash[20728]: cluster 2026-03-09T16:00:16.731377+0000 mgr.y (mgr.14520) 268 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: audit 2026-03-09T16:00:16.395336+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: audit 2026-03-09T16:00:16.395336+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: audit 2026-03-09T16:00:16.408156+0000 mgr.y (mgr.14520) 267 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: audit 2026-03-09T16:00:16.408156+0000 mgr.y (mgr.14520) 267 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: cluster 2026-03-09T16:00:16.408804+0000 mon.a (mon.0) 2392 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: cluster 2026-03-09T16:00:16.408804+0000 mon.a (mon.0) 2392 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: audit 2026-03-09T16:00:16.415042+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: audit 2026-03-09T16:00:16.415042+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: audit 2026-03-09T16:00:16.418656+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: audit 2026-03-09T16:00:16.418656+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: audit 2026-03-09T16:00:16.424078+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: audit 2026-03-09T16:00:16.424078+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: cluster 2026-03-09T16:00:16.731377+0000 mgr.y (mgr.14520) 268 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:17.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:17 vm01 bash[28152]: cluster 2026-03-09T16:00:16.731377+0000 mgr.y (mgr.14520) 268 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: audit 2026-03-09T16:00:16.395336+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: audit 2026-03-09T16:00:16.395336+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 
192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: audit 2026-03-09T16:00:16.408156+0000 mgr.y (mgr.14520) 267 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: audit 2026-03-09T16:00:16.408156+0000 mgr.y (mgr.14520) 267 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: cluster 2026-03-09T16:00:16.408804+0000 mon.a (mon.0) 2392 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: cluster 2026-03-09T16:00:16.408804+0000 mon.a (mon.0) 2392 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: audit 2026-03-09T16:00:16.415042+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: audit 2026-03-09T16:00:16.415042+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: audit 2026-03-09T16:00:16.418656+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: audit 2026-03-09T16:00:16.418656+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]: dispatch 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: audit 2026-03-09T16:00:16.424078+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: audit 2026-03-09T16:00:16.424078+0000 mon.a (mon.0) 2394 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: cluster 2026-03-09T16:00:16.731377+0000 mgr.y (mgr.14520) 268 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:17 vm09 bash[22983]: cluster 2026-03-09T16:00:16.731377+0000 mgr.y (mgr.14520) 268 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 783 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:18.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.404971+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.404971+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.405076+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.405076+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: cluster 2026-03-09T16:00:17.414419+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: cluster 2026-03-09T16:00:17.414419+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.420687+0000 mon.c (mon.2) 285 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.420687+0000 mon.c (mon.2) 285 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.421020+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.421020+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.421664+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.421664+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.436730+0000 mon.c (mon.2) 287 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.436730+0000 mon.c (mon.2) 287 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.436989+0000 mon.a (mon.0) 2399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.436989+0000 mon.a (mon.0) 2399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.439835+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.439835+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.440053+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.440053+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.441936+0000 mon.c (mon.2) 289 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.441936+0000 mon.c (mon.2) 289 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.442220+0000 mon.a (mon.0) 2401 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:18 vm01 bash[20728]: audit 2026-03-09T16:00:17.442220+0000 mon.a (mon.0) 2401 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.404971+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.404971+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.405076+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.405076+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: cluster 2026-03-09T16:00:17.414419+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: cluster 2026-03-09T16:00:17.414419+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.420687+0000 mon.c (mon.2) 285 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.420687+0000 mon.c (mon.2) 285 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.421020+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.421020+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.421664+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.421664+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.436730+0000 mon.c (mon.2) 287 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.436730+0000 mon.c (mon.2) 287 : audit [INF] from='client.? 
192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.436989+0000 mon.a (mon.0) 2399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.436989+0000 mon.a (mon.0) 2399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.439835+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.439835+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.440053+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.440053+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.441936+0000 mon.c (mon.2) 289 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.441936+0000 mon.c (mon.2) 289 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.442220+0000 mon.a (mon.0) 2401 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:18 vm01 bash[28152]: audit 2026-03-09T16:00:17.442220+0000 mon.a (mon.0) 2401 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.404971+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.404971+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? 192.168.123.101:0/4126032748' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm01-59610-56"}]': finished 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.405076+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.405076+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: cluster 2026-03-09T16:00:17.414419+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: cluster 2026-03-09T16:00:17.414419+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.420687+0000 mon.c (mon.2) 285 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.420687+0000 mon.c (mon.2) 285 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.421020+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.421020+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.421664+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.421664+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.436730+0000 mon.c (mon.2) 287 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.436730+0000 mon.c (mon.2) 287 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.436989+0000 mon.a (mon.0) 2399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.436989+0000 mon.a (mon.0) 2399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.439835+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.439835+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.440053+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.440053+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.441936+0000 mon.c (mon.2) 289 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.441936+0000 mon.c (mon.2) 289 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.442220+0000 mon.a (mon.0) 2401 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:18.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:18 vm09 bash[22983]: audit 2026-03-09T16:00:17.442220+0000 mon.a (mon.0) 2401 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.408091+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.408091+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.408200+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.408200+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: cluster 2026-03-09T16:00:18.411512+0000 mon.a (mon.0) 2404 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: cluster 2026-03-09T16:00:18.411512+0000 mon.a (mon.0) 2404 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.420825+0000 mon.c (mon.2) 290 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.420825+0000 mon.c (mon.2) 290 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.420939+0000 mon.c (mon.2) 291 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.420939+0000 mon.c (mon.2) 291 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.427851+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.427851+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.428135+0000 mon.a (mon.0) 2406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: audit 2026-03-09T16:00:18.428135+0000 mon.a (mon.0) 2406 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: cluster 2026-03-09T16:00:18.731782+0000 mgr.y (mgr.14520) 269 : cluster [DBG] pgmap v395: 292 pgs: 24 unknown, 268 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: cluster 2026-03-09T16:00:18.731782+0000 mgr.y (mgr.14520) 269 : cluster [DBG] pgmap v395: 292 pgs: 24 unknown, 268 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: cluster 2026-03-09T16:00:18.824758+0000 mon.a (mon.0) 2407 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:19.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:19 vm09 bash[22983]: cluster 2026-03-09T16:00:18.824758+0000 mon.a (mon.0) 2407 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:19.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.408091+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:19.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.408091+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:19.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.408200+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:19.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.408200+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:19.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: cluster 2026-03-09T16:00:18.411512+0000 mon.a (mon.0) 2404 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T16:00:19.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: cluster 2026-03-09T16:00:18.411512+0000 mon.a (mon.0) 2404 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.420825+0000 mon.c (mon.2) 290 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.420825+0000 mon.c (mon.2) 290 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.420939+0000 mon.c (mon.2) 291 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.420939+0000 mon.c (mon.2) 291 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.427851+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.427851+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.428135+0000 mon.a (mon.0) 2406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: audit 2026-03-09T16:00:18.428135+0000 mon.a (mon.0) 2406 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: cluster 2026-03-09T16:00:18.731782+0000 mgr.y (mgr.14520) 269 : cluster [DBG] pgmap v395: 292 pgs: 24 unknown, 268 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: cluster 2026-03-09T16:00:18.731782+0000 mgr.y (mgr.14520) 269 : cluster [DBG] pgmap v395: 292 pgs: 24 unknown, 268 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: cluster 2026-03-09T16:00:18.824758+0000 mon.a (mon.0) 2407 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:19 vm01 bash[28152]: cluster 2026-03-09T16:00:18.824758+0000 mon.a (mon.0) 2407 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.408091+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.408091+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.408200+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.408200+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm01-59610-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: cluster 2026-03-09T16:00:18.411512+0000 mon.a (mon.0) 2404 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: cluster 2026-03-09T16:00:18.411512+0000 mon.a (mon.0) 2404 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.420825+0000 mon.c (mon.2) 290 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.420825+0000 mon.c (mon.2) 290 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.420939+0000 mon.c (mon.2) 291 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.420939+0000 mon.c (mon.2) 291 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.427851+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.427851+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.428135+0000 mon.a (mon.0) 2406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: audit 2026-03-09T16:00:18.428135+0000 mon.a (mon.0) 2406 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: cluster 2026-03-09T16:00:18.731782+0000 mgr.y (mgr.14520) 269 : cluster [DBG] pgmap v395: 292 pgs: 24 unknown, 268 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: cluster 2026-03-09T16:00:18.731782+0000 mgr.y (mgr.14520) 269 : cluster [DBG] pgmap v395: 292 pgs: 24 unknown, 268 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: cluster 2026-03-09T16:00:18.824758+0000 mon.a (mon.0) 2407 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:19 vm01 bash[20728]: cluster 2026-03-09T16:00:18.824758+0000 mon.a (mon.0) 2407 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:20 vm09 bash[22983]: audit 2026-03-09T16:00:19.429645+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T16:00:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:20 vm09 bash[22983]: audit 2026-03-09T16:00:19.429645+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T16:00:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:20 vm09 bash[22983]: cluster 2026-03-09T16:00:19.448678+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T16:00:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:20 vm09 bash[22983]: cluster 2026-03-09T16:00:19.448678+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T16:00:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:20 vm09 bash[22983]: audit 2026-03-09T16:00:19.452841+0000 mon.c (mon.2) 292 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:20 vm09 bash[22983]: audit 2026-03-09T16:00:19.452841+0000 mon.c (mon.2) 292 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:20 vm09 bash[22983]: audit 2026-03-09T16:00:19.453104+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:20 vm09 bash[22983]: audit 2026-03-09T16:00:19.453104+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:20.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:20 vm01 bash[28152]: audit 2026-03-09T16:00:19.429645+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T16:00:20.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:20 vm01 bash[28152]: audit 2026-03-09T16:00:19.429645+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T16:00:20.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:20 vm01 bash[28152]: cluster 2026-03-09T16:00:19.448678+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T16:00:20.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:20 vm01 bash[28152]: cluster 2026-03-09T16:00:19.448678+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T16:00:20.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:20 vm01 bash[28152]: audit 2026-03-09T16:00:19.452841+0000 mon.c (mon.2) 292 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:20.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:20 vm01 bash[28152]: audit 2026-03-09T16:00:19.452841+0000 mon.c (mon.2) 292 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:20.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:20 vm01 bash[28152]: audit 2026-03-09T16:00:19.453104+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:20.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:20 vm01 bash[28152]: audit 2026-03-09T16:00:19.453104+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:20.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:20 vm01 bash[20728]: audit 2026-03-09T16:00:19.429645+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T16:00:20.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:20 vm01 bash[20728]: audit 2026-03-09T16:00:19.429645+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T16:00:20.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:20 vm01 bash[20728]: cluster 2026-03-09T16:00:19.448678+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T16:00:20.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:20 vm01 bash[20728]: cluster 2026-03-09T16:00:19.448678+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T16:00:20.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:20 vm01 bash[20728]: audit 2026-03-09T16:00:19.452841+0000 mon.c (mon.2) 292 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:20.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:20 vm01 bash[20728]: audit 2026-03-09T16:00:19.452841+0000 mon.c (mon.2) 292 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:20.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:20 vm01 bash[20728]: audit 2026-03-09T16:00:19.453104+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:20.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:20 vm01 bash[20728]: audit 2026-03-09T16:00:19.453104+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:21.864 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetRead 2026-03-09T16:00:21.864 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: hmm, no HitSet yet 2026-03-09T16:00:21.864 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: ok, hit_set contains 267:602f83fe:::foo:head 2026-03-09T16:00:21.864 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetRead (9064 ms) 2026-03-09T16:00:21.864 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetWrite 2026-03-09T16:00:21.864 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg_num = 32 2026-03-09T16:00:21.864 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 0 ls 1773072022,0 2026-03-09T16:00:21.864 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 1 ls 1773072022,0 2026-03-09T16:00:21.864 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 2 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 3 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 4 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 5 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 6 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 7 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 8 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: 
api_tier_pp: pg 9 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 10 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 11 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 12 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 13 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 14 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 15 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 16 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 17 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 18 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 19 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 20 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 21 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 22 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 23 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 24 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 25 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 26 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 27 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 28 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 29 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 30 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg 31 ls 1773072022,0 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: pg_num = 32 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:6cac518f:::0:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:02547ec2:::1:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:f905c69b:::2:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:cfc208b3:::3:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:d83876eb:::4:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:b29083e3:::5:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:c4fdafeb:::6:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:5c6b0b28:::7:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:bd63b0f1:::8:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:e960b815:::9:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: 
api_tier_pp: checking for 269:52ea6a34:::10:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:89d3ae78:::11:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:de5d7c5f:::12:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:566253c9:::13:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:62a1935d:::14:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:863748b0:::15:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:3958e169:::16:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:4d4dabf9:::17:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:8391935d:::18:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:28883081:::19:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:69259c59:::20:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:4bdb80b7:::21:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:a11c5d71:::22:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:271af37b:::23:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:95b121be:::24:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:58d1031b:::25:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:0a050783:::26:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:c709704c:::27:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:cbe56eaf:::28:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:86b4b162:::29:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:70d89383:::30:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:dd450c7c:::31:head 2026-03-09T16:00:21.865 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:6d5729b1:::32:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:c388f3fb:::33:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:56cfea31:::34:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:9dbc1bf7:::35:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:40b74ccd:::36:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:4d5aaf42:::37:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:920f362c:::38:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:6cc53222:::39:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:9cad833f:::40:head 2026-03-09T16:00:21.866 
INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:1ea84d41:::41:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:c4480ef6:::42:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:a694361e:::43:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:d1bd33e9:::44:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:ddc2cd5d:::45:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:2b782207:::46:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:7b187fca:::47:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:90ecdf6f:::48:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:a5ed95fe:::49:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:ea0eaa55:::50:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:f33ef17b:::51:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:a0d1b2f6:::52:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:60c5229e:::53:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:edcbc575:::54:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:102cf253:::55:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:efb7fb0b:::56:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:50d0a326:::57:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:d4dc5daf:::58:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:3a130462:::59:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:ec87ed71:::60:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:d5bc9454:::61:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:3ddfe313:::62:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:7c2816b9:::63:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:47e00e4d:::64:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:c6410c18:::65:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:b48ed237:::66:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:cd63ad31:::67:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:b179e92b:::68:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:0d9f741a:::69:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:6d3352ae:::70:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:c6d5c19e:::71:head 
2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:bc4729c3:::72:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:77e930b9:::73:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:0abeecfd:::74:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:b7c37e15:::75:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:b6378398:::76:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:02bd68de:::77:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:cc795d2d:::78:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:630d4fea:::79:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:e0d29ef5:::80:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:fd6f13d2:::81:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:606461d5:::82:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:eadbdc43:::83:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:8761d0bb:::84:head 2026-03-09T16:00:21.866 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:9ef0186f:::85:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:e0d41294:::86:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:961de695:::87:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:1423148f:::88:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:633a8fa2:::89:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:a8653809:::90:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:3dac8b33:::91:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:35aad435:::92:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:f6dcc343:::93:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:dbbdad87:::94:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:1cb48ce0:::95:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:03cd461c:::96:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:17a4ea99:::97:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:9993c9a7:::98:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:6394211c:::99:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:94c7ae57:::100:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:6fdee5bb:::101:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 
269:9a477fd1:::102:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:eb850916:::103:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:affc56b9:::104:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:b42dc814:::105:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:f319f8f0:::106:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:9a40b9de:::107:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:8b524f28:::108:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:e3de589f:::109:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:90f90a5b:::110:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:a7b4f1d7:::111:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:af51766e:::112:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:b6f90bd1:::113:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:e0261208:::114:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:c9569ef7:::115:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:61bebe50:::116:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:fe93412b:::117:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:d3d38bee:::118:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:3100ba0c:::119:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:d0560ada:::120:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:f0ea8b35:::121:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:766f231a:::122:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:a07a2582:::123:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:bd7c6b3a:::124:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:fb2ddaff:::125:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:4408e1fe:::126:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:ee1df7a7:::127:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:c3002909:::128:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:4f48ffa9:::129:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:edf38733:::130:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:c08425c0:::131:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:5f902d98:::132:head 2026-03-09T16:00:21.867 
INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:41ea2c93:::133:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:813cee13:::134:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:0131818d:::135:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:26ba5a85:::136:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:381b8a5a:::137:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:28797e47:::138:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:bfca7f22:::139:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:36807075:::140:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:80b03975:::141:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:5c15709b:::142:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:f39ea15e:::143:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:ea992956:::144:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:48887b1c:::145:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:9f24a9dd:::146:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:987f100b:::147:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:d2dd3581:::148:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:7fed1808:::149:head 2026-03-09T16:00:21.867 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:c80b70e9:::150:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:85ed90f9:::151:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:36428b24:::152:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:d044c34a:::153:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:7c18bf58:::154:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:d1c21232:::155:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:a7a3c575:::156:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:87da0633:::157:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:d5ac3822:::158:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:3f20522d:::159:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:6ca26563:::160:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:532ce135:::161:head 2026-03-09T16:00:21.868 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:c78863e6:::162:head 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: 
audit 2026-03-09T16:00:20.548063+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: audit 2026-03-09T16:00:20.548063+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: audit 2026-03-09T16:00:20.548130+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: audit 2026-03-09T16:00:20.548130+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: cluster 2026-03-09T16:00:20.550441+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: cluster 2026-03-09T16:00:20.550441+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: audit 2026-03-09T16:00:20.564702+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: audit 2026-03-09T16:00:20.564702+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: audit 2026-03-09T16:00:20.565122+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: audit 2026-03-09T16:00:20.565122+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: cluster 2026-03-09T16:00:20.732148+0000 mgr.y (mgr.14520) 270 : cluster [DBG] pgmap v398: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:21 vm09 bash[22983]: cluster 2026-03-09T16:00:20.732148+0000 mgr.y (mgr.14520) 270 : cluster [DBG] pgmap v398: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: audit 2026-03-09T16:00:20.548063+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: audit 2026-03-09T16:00:20.548063+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: audit 2026-03-09T16:00:20.548130+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: audit 2026-03-09T16:00:20.548130+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: cluster 2026-03-09T16:00:20.550441+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T16:00:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: cluster 2026-03-09T16:00:20.550441+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T16:00:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: audit 2026-03-09T16:00:20.564702+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: audit 2026-03-09T16:00:20.564702+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: audit 2026-03-09T16:00:20.565122+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: audit 2026-03-09T16:00:20.565122+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: cluster 2026-03-09T16:00:20.732148+0000 mgr.y (mgr.14520) 270 : cluster [DBG] pgmap v398: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:21 vm01 bash[28152]: cluster 2026-03-09T16:00:20.732148+0000 mgr.y (mgr.14520) 270 : cluster [DBG] pgmap v398: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: audit 2026-03-09T16:00:20.548063+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: audit 2026-03-09T16:00:20.548063+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm01-59610-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: audit 2026-03-09T16:00:20.548130+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: audit 2026-03-09T16:00:20.548130+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: cluster 2026-03-09T16:00:20.550441+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: cluster 2026-03-09T16:00:20.550441+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: audit 2026-03-09T16:00:20.564702+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: audit 2026-03-09T16:00:20.564702+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: audit 2026-03-09T16:00:20.565122+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: audit 2026-03-09T16:00:20.565122+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: cluster 2026-03-09T16:00:20.732148+0000 mgr.y (mgr.14520) 270 : cluster [DBG] pgmap v398: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:21 vm01 bash[20728]: cluster 2026-03-09T16:00:20.732148+0000 mgr.y (mgr.14520) 270 : cluster [DBG] pgmap v398: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.564327+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.564327+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: cluster 2026-03-09T16:00:21.577868+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: cluster 2026-03-09T16:00:21.577868+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.868123+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.868123+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.937894+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.937894+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.938450+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.938450+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.939119+0000 mon.c (mon.2) 296 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.939119+0000 mon.c (mon.2) 296 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.939555+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:22 vm01 bash[28152]: audit 2026-03-09T16:00:21.939555+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.564327+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.564327+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: cluster 2026-03-09T16:00:21.577868+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: cluster 2026-03-09T16:00:21.577868+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.868123+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.868123+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.937894+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.937894+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.938450+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.938450+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.939119+0000 mon.c (mon.2) 296 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.939119+0000 mon.c (mon.2) 296 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.939555+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:22.879 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:22 vm01 bash[20728]: audit 2026-03-09T16:00:21.939555+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.564327+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.564327+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: cluster 2026-03-09T16:00:21.577868+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: cluster 2026-03-09T16:00:21.577868+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.868123+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.868123+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm01-59821-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.937894+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.937894+0000 mon.c (mon.2) 295 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.938450+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.938450+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.939119+0000 mon.c (mon.2) 296 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.939119+0000 mon.c (mon.2) 296 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.939555+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:22.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:22 vm09 bash[22983]: audit 2026-03-09T16:00:21.939555+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]: dispatch 2026-03-09T16:00:23.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:00:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:00:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:00:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:23 vm09 bash[22983]: audit 2026-03-09T16:00:22.577651+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]': finished 2026-03-09T16:00:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:23 vm09 bash[22983]: audit 2026-03-09T16:00:22.577651+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]': finished 2026-03-09T16:00:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:23 vm09 bash[22983]: cluster 2026-03-09T16:00:22.580296+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T16:00:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:23 vm09 bash[22983]: cluster 2026-03-09T16:00:22.580296+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T16:00:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:23 vm09 bash[22983]: audit 2026-03-09T16:00:22.588414+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:23 vm09 bash[22983]: audit 2026-03-09T16:00:22.588414+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 
192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:23 vm09 bash[22983]: audit 2026-03-09T16:00:22.588903+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:23 vm09 bash[22983]: audit 2026-03-09T16:00:22.588903+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:23 vm09 bash[22983]: cluster 2026-03-09T16:00:22.732540+0000 mgr.y (mgr.14520) 271 : cluster [DBG] pgmap v401: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:23.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:23 vm09 bash[22983]: cluster 2026-03-09T16:00:22.732540+0000 mgr.y (mgr.14520) 271 : cluster [DBG] pgmap v401: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:23.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:23 vm01 bash[28152]: audit 2026-03-09T16:00:22.577651+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]': finished 2026-03-09T16:00:23.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:23 vm01 bash[28152]: audit 2026-03-09T16:00:22.577651+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]': finished 2026-03-09T16:00:23.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:23 vm01 bash[28152]: cluster 2026-03-09T16:00:22.580296+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T16:00:23.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:23 vm01 bash[28152]: cluster 2026-03-09T16:00:22.580296+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T16:00:23.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:23 vm01 bash[28152]: audit 2026-03-09T16:00:22.588414+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:23 vm01 bash[28152]: audit 2026-03-09T16:00:22.588414+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:23 vm01 bash[28152]: audit 2026-03-09T16:00:22.588903+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:23 vm01 bash[28152]: audit 2026-03-09T16:00:22.588903+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:23 vm01 bash[28152]: cluster 2026-03-09T16:00:22.732540+0000 mgr.y (mgr.14520) 271 : cluster [DBG] pgmap v401: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:23.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:23 vm01 bash[28152]: cluster 2026-03-09T16:00:22.732540+0000 mgr.y (mgr.14520) 271 : cluster [DBG] pgmap v401: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:23.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:23 vm01 bash[20728]: audit 2026-03-09T16:00:22.577651+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]': finished 2026-03-09T16:00:23.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:23 vm01 bash[20728]: audit 2026-03-09T16:00:22.577651+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-43"}]': finished 2026-03-09T16:00:23.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:23 vm01 bash[20728]: cluster 2026-03-09T16:00:22.580296+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T16:00:23.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:23 vm01 bash[20728]: cluster 2026-03-09T16:00:22.580296+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T16:00:23.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:23 vm01 bash[20728]: audit 2026-03-09T16:00:22.588414+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:23 vm01 bash[20728]: audit 2026-03-09T16:00:22.588414+0000 mon.c (mon.2) 297 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:23 vm01 bash[20728]: audit 2026-03-09T16:00:22.588903+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:23 vm01 bash[20728]: audit 2026-03-09T16:00:22.588903+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:23.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:23 vm01 bash[20728]: cluster 2026-03-09T16:00:22.732540+0000 mgr.y (mgr.14520) 271 : cluster [DBG] pgmap v401: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:23.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:23 vm01 bash[20728]: cluster 2026-03-09T16:00:22.732540+0000 mgr.y (mgr.14520) 271 : cluster [DBG] pgmap v401: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:24 vm09 bash[22983]: audit 2026-03-09T16:00:23.596870+0000 mon.a (mon.0) 2422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:24 vm09 bash[22983]: audit 2026-03-09T16:00:23.596870+0000 mon.a (mon.0) 2422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:24 vm09 bash[22983]: audit 2026-03-09T16:00:23.609470+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:24 vm09 bash[22983]: audit 2026-03-09T16:00:23.609470+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:24 vm09 bash[22983]: cluster 2026-03-09T16:00:23.615705+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T16:00:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:24 vm09 bash[22983]: cluster 2026-03-09T16:00:23.615705+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T16:00:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:24 vm09 bash[22983]: audit 2026-03-09T16:00:23.617818+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:24 vm09 bash[22983]: audit 2026-03-09T16:00:23.617818+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:24 vm09 bash[22983]: cluster 2026-03-09T16:00:23.825419+0000 mon.a (mon.0) 2425 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:24.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:24 vm09 bash[22983]: cluster 2026-03-09T16:00:23.825419+0000 mon.a (mon.0) 2425 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:24 vm01 bash[28152]: audit 2026-03-09T16:00:23.596870+0000 mon.a (mon.0) 2422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:24 vm01 bash[28152]: audit 2026-03-09T16:00:23.596870+0000 mon.a (mon.0) 2422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:24 vm01 bash[28152]: audit 2026-03-09T16:00:23.609470+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:24 vm01 bash[28152]: audit 2026-03-09T16:00:23.609470+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:24 vm01 bash[28152]: cluster 2026-03-09T16:00:23.615705+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:24 vm01 bash[28152]: cluster 2026-03-09T16:00:23.615705+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:24 vm01 bash[28152]: audit 2026-03-09T16:00:23.617818+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:24 vm01 bash[28152]: audit 2026-03-09T16:00:23.617818+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:24 vm01 bash[28152]: cluster 2026-03-09T16:00:23.825419+0000 mon.a (mon.0) 2425 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:24 vm01 bash[28152]: cluster 2026-03-09T16:00:23.825419+0000 mon.a (mon.0) 2425 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:24 vm01 bash[20728]: audit 2026-03-09T16:00:23.596870+0000 mon.a (mon.0) 2422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:24 vm01 bash[20728]: audit 2026-03-09T16:00:23.596870+0000 mon.a (mon.0) 2422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:24.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:24 vm01 bash[20728]: audit 2026-03-09T16:00:23.609470+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:24 vm01 bash[20728]: audit 2026-03-09T16:00:23.609470+0000 mon.c (mon.2) 298 : audit [INF] from='client.? 192.168.123.101:0/3776342044' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:24 vm01 bash[20728]: cluster 2026-03-09T16:00:23.615705+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T16:00:24.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:24 vm01 bash[20728]: cluster 2026-03-09T16:00:23.615705+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T16:00:24.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:24 vm01 bash[20728]: audit 2026-03-09T16:00:23.617818+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:24 vm01 bash[20728]: audit 2026-03-09T16:00:23.617818+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]: dispatch 2026-03-09T16:00:24.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:24 vm01 bash[20728]: cluster 2026-03-09T16:00:23.825419+0000 mon.a (mon.0) 2425 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:24.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:24 vm01 bash[20728]: cluster 2026-03-09T16:00:23.825419+0000 mon.a (mon.0) 2425 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:25.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.600969+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:25.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.600969+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: cluster 2026-03-09T16:00:24.605667+0000 mon.a (mon.0) 2427 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: cluster 2026-03-09T16:00:24.605667+0000 mon.a (mon.0) 2427 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.623446+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.623446+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.648654+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.648654+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.654625+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 
192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.654625+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.655817+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.655817+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.656631+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.656631+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.658593+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.658593+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.659872+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.659872+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.660667+0000 mon.a (mon.0) 2431 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:24.660667+0000 mon.a (mon.0) 2431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: cluster 2026-03-09T16:00:24.732822+0000 mgr.y (mgr.14520) 272 : cluster [DBG] pgmap v404: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: cluster 2026-03-09T16:00:24.732822+0000 mgr.y (mgr.14520) 272 : cluster [DBG] pgmap v404: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:25.604832+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:25.604832+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:25.604977+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:25.604977+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:25.608014+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:25.608014+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 
192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:25.620300+0000 mon.c (mon.2) 300 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: audit 2026-03-09T16:00:25.620300+0000 mon.c (mon.2) 300 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: cluster 2026-03-09T16:00:25.620765+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:25 vm01 bash[28152]: cluster 2026-03-09T16:00:25.620765+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.600969+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.600969+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: cluster 2026-03-09T16:00:24.605667+0000 mon.a (mon.0) 2427 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: cluster 2026-03-09T16:00:24.605667+0000 mon.a (mon.0) 2427 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.623446+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.623446+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.648654+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.648654+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.654625+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.654625+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.655817+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.655817+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.656631+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.656631+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.658593+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.658593+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.659872+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.659872+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.660667+0000 mon.a (mon.0) 2431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:24.660667+0000 mon.a (mon.0) 2431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: cluster 2026-03-09T16:00:24.732822+0000 mgr.y (mgr.14520) 272 : cluster [DBG] pgmap v404: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: cluster 2026-03-09T16:00:24.732822+0000 mgr.y (mgr.14520) 272 : cluster [DBG] pgmap v404: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:25.604832+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:25.604832+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:25.604977+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:25.604977+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:25.608014+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 
192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:25.608014+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:25.620300+0000 mon.c (mon.2) 300 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: audit 2026-03-09T16:00:25.620300+0000 mon.c (mon.2) 300 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: cluster 2026-03-09T16:00:25.620765+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T16:00:25.928 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:25 vm01 bash[20728]: cluster 2026-03-09T16:00:25.620765+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.600969+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.600969+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm01-59610-57"}]': finished 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: cluster 2026-03-09T16:00:24.605667+0000 mon.a (mon.0) 2427 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: cluster 2026-03-09T16:00:24.605667+0000 mon.a (mon.0) 2427 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.623446+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.623446+0000 mon.c (mon.2) 299 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.648654+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.648654+0000 mon.a (mon.0) 2428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.654625+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.654625+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.655817+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.655817+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.656631+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.656631+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.658593+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.658593+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.659872+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.659872+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.660667+0000 mon.a (mon.0) 2431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:24.660667+0000 mon.a (mon.0) 2431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: cluster 2026-03-09T16:00:24.732822+0000 mgr.y (mgr.14520) 272 : cluster [DBG] pgmap v404: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: cluster 2026-03-09T16:00:24.732822+0000 mgr.y (mgr.14520) 272 : cluster [DBG] pgmap v404: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:25.604832+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:25.604832+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:25.604977+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:25.604977+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm01-59610-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:25.608014+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:25.608014+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:25.620300+0000 mon.c (mon.2) 300 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: audit 2026-03-09T16:00:25.620300+0000 mon.c (mon.2) 300 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: cluster 2026-03-09T16:00:25.620765+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T16:00:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:25 vm09 bash[22983]: cluster 2026-03-09T16:00:25.620765+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T16:00:26.883 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:00:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: audit 2026-03-09T16:00:25.628508+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: audit 2026-03-09T16:00:25.628508+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: audit 2026-03-09T16:00:25.628694+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: audit 2026-03-09T16:00:25.628694+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: audit 2026-03-09T16:00:26.608517+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: audit 2026-03-09T16:00:26.608517+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: audit 2026-03-09T16:00:26.611644+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: audit 2026-03-09T16:00:26.611644+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: cluster 2026-03-09T16:00:26.619292+0000 mon.a (mon.0) 2438 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: cluster 2026-03-09T16:00:26.619292+0000 mon.a (mon.0) 2438 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: audit 2026-03-09T16:00:26.619694+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:26.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:26 vm09 bash[22983]: audit 2026-03-09T16:00:26.619694+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: audit 2026-03-09T16:00:25.628508+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: audit 2026-03-09T16:00:25.628508+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: audit 2026-03-09T16:00:25.628694+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: audit 2026-03-09T16:00:25.628694+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: audit 2026-03-09T16:00:26.608517+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: audit 2026-03-09T16:00:26.608517+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: audit 2026-03-09T16:00:26.611644+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: audit 2026-03-09T16:00:26.611644+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: cluster 2026-03-09T16:00:26.619292+0000 mon.a (mon.0) 2438 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: cluster 2026-03-09T16:00:26.619292+0000 mon.a (mon.0) 2438 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: audit 2026-03-09T16:00:26.619694+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:26 vm01 bash[28152]: audit 2026-03-09T16:00:26.619694+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:26.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: audit 2026-03-09T16:00:25.628508+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: audit 2026-03-09T16:00:25.628508+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:26.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: audit 2026-03-09T16:00:25.628694+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:26.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: audit 2026-03-09T16:00:25.628694+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:26.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: audit 2026-03-09T16:00:26.608517+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:26.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: audit 2026-03-09T16:00:26.608517+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:26.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: audit 2026-03-09T16:00:26.611644+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:26.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: audit 2026-03-09T16:00:26.611644+0000 mon.c (mon.2) 301 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:26.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: cluster 2026-03-09T16:00:26.619292+0000 mon.a (mon.0) 2438 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T16:00:26.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: cluster 2026-03-09T16:00:26.619292+0000 mon.a (mon.0) 2438 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T16:00:26.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: audit 2026-03-09T16:00:26.619694+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:26.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:26 vm01 bash[20728]: audit 2026-03-09T16:00:26.619694+0000 mon.a (mon.0) 2439 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:00:27.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: audit 2026-03-09T16:00:26.411638+0000 mgr.y (mgr.14520) 273 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:27.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: audit 2026-03-09T16:00:26.411638+0000 mgr.y (mgr.14520) 273 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:27.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: cluster 2026-03-09T16:00:26.733117+0000 mgr.y (mgr.14520) 274 : cluster [DBG] pgmap v407: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:27.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: cluster 2026-03-09T16:00:26.733117+0000 mgr.y (mgr.14520) 274 : cluster [DBG] pgmap v407: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:27.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: audit 2026-03-09T16:00:27.611985+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:27.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: audit 2026-03-09T16:00:27.611985+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:27.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: audit 2026-03-09T16:00:27.612129+0000 mon.a (mon.0) 2441 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: audit 2026-03-09T16:00:27.612129+0000 mon.a (mon.0) 2441 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: audit 2026-03-09T16:00:27.615322+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: audit 2026-03-09T16:00:27.615322+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: cluster 2026-03-09T16:00:27.618778+0000 mon.a (mon.0) 2442 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: cluster 2026-03-09T16:00:27.618778+0000 mon.a (mon.0) 2442 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: audit 2026-03-09T16:00:27.629428+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:27 vm01 bash[28152]: audit 2026-03-09T16:00:27.629428+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: audit 2026-03-09T16:00:26.411638+0000 mgr.y (mgr.14520) 273 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: audit 2026-03-09T16:00:26.411638+0000 mgr.y (mgr.14520) 273 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: cluster 2026-03-09T16:00:26.733117+0000 mgr.y (mgr.14520) 274 : cluster [DBG] pgmap v407: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: cluster 2026-03-09T16:00:26.733117+0000 mgr.y (mgr.14520) 274 : cluster [DBG] pgmap v407: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: audit 2026-03-09T16:00:27.611985+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: audit 2026-03-09T16:00:27.611985+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: audit 2026-03-09T16:00:27.612129+0000 mon.a (mon.0) 2441 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: audit 2026-03-09T16:00:27.612129+0000 mon.a (mon.0) 2441 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: audit 2026-03-09T16:00:27.615322+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: audit 2026-03-09T16:00:27.615322+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: cluster 2026-03-09T16:00:27.618778+0000 mon.a (mon.0) 2442 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: cluster 2026-03-09T16:00:27.618778+0000 mon.a (mon.0) 2442 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: audit 2026-03-09T16:00:27.629428+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:27.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:27 vm01 bash[20728]: audit 2026-03-09T16:00:27.629428+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: audit 2026-03-09T16:00:26.411638+0000 mgr.y (mgr.14520) 273 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: audit 2026-03-09T16:00:26.411638+0000 mgr.y (mgr.14520) 273 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: cluster 2026-03-09T16:00:26.733117+0000 mgr.y (mgr.14520) 274 : cluster [DBG] pgmap v407: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: cluster 2026-03-09T16:00:26.733117+0000 mgr.y (mgr.14520) 274 : cluster [DBG] pgmap v407: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: audit 2026-03-09T16:00:27.611985+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: audit 2026-03-09T16:00:27.611985+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm01-59610-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: audit 2026-03-09T16:00:27.612129+0000 mon.a (mon.0) 2441 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: audit 2026-03-09T16:00:27.612129+0000 mon.a (mon.0) 2441 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: audit 2026-03-09T16:00:27.615322+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: audit 2026-03-09T16:00:27.615322+0000 mon.c (mon.2) 302 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: cluster 2026-03-09T16:00:27.618778+0000 mon.a (mon.0) 2442 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: cluster 2026-03-09T16:00:27.618778+0000 mon.a (mon.0) 2442 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: audit 2026-03-09T16:00:27.629428+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:27 vm09 bash[22983]: audit 2026-03-09T16:00:27.629428+0000 mon.a (mon.0) 2443 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:00:29.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: audit 2026-03-09T16:00:28.667126+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:00:29.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: audit 2026-03-09T16:00:28.667126+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:00:29.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: cluster 2026-03-09T16:00:28.675805+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T16:00:29.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: cluster 2026-03-09T16:00:28.675805+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T16:00:29.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: audit 2026-03-09T16:00:28.676670+0000 mon.c (mon.2) 303 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:29.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: audit 2026-03-09T16:00:28.676670+0000 mon.c (mon.2) 303 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:29.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: audit 2026-03-09T16:00:28.680164+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:29.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: audit 2026-03-09T16:00:28.680164+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:29.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: cluster 2026-03-09T16:00:28.733511+0000 mgr.y (mgr.14520) 275 : cluster [DBG] pgmap v410: 300 pgs: 1 creating+activating, 30 unknown, 269 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-09T16:00:29.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: cluster 2026-03-09T16:00:28.733511+0000 mgr.y (mgr.14520) 275 : cluster [DBG] pgmap v410: 300 pgs: 1 creating+activating, 30 unknown, 269 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: audit 2026-03-09T16:00:29.164105+0000 mon.a (mon.0) 2447 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:29 vm01 bash[28152]: audit 2026-03-09T16:00:29.164105+0000 mon.a (mon.0) 2447 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: audit 2026-03-09T16:00:28.667126+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: audit 2026-03-09T16:00:28.667126+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: cluster 2026-03-09T16:00:28.675805+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: cluster 2026-03-09T16:00:28.675805+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: audit 2026-03-09T16:00:28.676670+0000 mon.c (mon.2) 303 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: audit 2026-03-09T16:00:28.676670+0000 mon.c (mon.2) 303 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: audit 2026-03-09T16:00:28.680164+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: audit 2026-03-09T16:00:28.680164+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: cluster 2026-03-09T16:00:28.733511+0000 mgr.y (mgr.14520) 275 : cluster [DBG] pgmap v410: 300 pgs: 1 creating+activating, 30 unknown, 269 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: cluster 2026-03-09T16:00:28.733511+0000 mgr.y (mgr.14520) 275 : cluster [DBG] pgmap v410: 300 pgs: 1 creating+activating, 30 unknown, 269 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: audit 2026-03-09T16:00:29.164105+0000 mon.a (mon.0) 2447 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:29.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:29 vm01 bash[20728]: audit 2026-03-09T16:00:29.164105+0000 mon.a (mon.0) 2447 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: audit 2026-03-09T16:00:28.667126+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: audit 2026-03-09T16:00:28.667126+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: cluster 2026-03-09T16:00:28.675805+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: cluster 2026-03-09T16:00:28.675805+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: audit 2026-03-09T16:00:28.676670+0000 mon.c (mon.2) 303 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: audit 2026-03-09T16:00:28.676670+0000 mon.c (mon.2) 303 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: audit 2026-03-09T16:00:28.680164+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: audit 2026-03-09T16:00:28.680164+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: cluster 2026-03-09T16:00:28.733511+0000 mgr.y (mgr.14520) 275 : cluster [DBG] pgmap v410: 300 pgs: 1 creating+activating, 30 unknown, 269 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: cluster 2026-03-09T16:00:28.733511+0000 mgr.y (mgr.14520) 275 : cluster [DBG] pgmap v410: 300 pgs: 1 creating+activating, 30 unknown, 269 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: audit 2026-03-09T16:00:29.164105+0000 mon.a (mon.0) 2447 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:29 vm09 bash[22983]: audit 2026-03-09T16:00:29.164105+0000 mon.a (mon.0) 2447 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:30.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: cluster 2026-03-09T16:00:29.666989+0000 mon.a (mon.0) 2448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:30.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: cluster 2026-03-09T16:00:29.666989+0000 mon.a (mon.0) 2448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:30.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: audit 2026-03-09T16:00:29.675142+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: audit 2026-03-09T16:00:29.675142+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: cluster 2026-03-09T16:00:29.678997+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: cluster 2026-03-09T16:00:29.678997+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: audit 2026-03-09T16:00:29.680365+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: audit 2026-03-09T16:00:29.680365+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 
192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: audit 2026-03-09T16:00:29.686727+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: audit 2026-03-09T16:00:29.686727+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: audit 2026-03-09T16:00:29.699976+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: audit 2026-03-09T16:00:29.699976+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: audit 2026-03-09T16:00:29.700107+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:30 vm01 bash[28152]: audit 2026-03-09T16:00:29.700107+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: cluster 2026-03-09T16:00:29.666989+0000 mon.a (mon.0) 2448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: cluster 2026-03-09T16:00:29.666989+0000 mon.a (mon.0) 2448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: audit 2026-03-09T16:00:29.675142+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: audit 2026-03-09T16:00:29.675142+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: cluster 2026-03-09T16:00:29.678997+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: cluster 2026-03-09T16:00:29.678997+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: audit 2026-03-09T16:00:29.680365+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: audit 2026-03-09T16:00:29.680365+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: audit 2026-03-09T16:00:29.686727+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: audit 2026-03-09T16:00:29.686727+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: audit 2026-03-09T16:00:29.699976+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: audit 2026-03-09T16:00:29.699976+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: audit 2026-03-09T16:00:29.700107+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:30.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:30 vm01 bash[20728]: audit 2026-03-09T16:00:29.700107+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: cluster 2026-03-09T16:00:29.666989+0000 mon.a (mon.0) 2448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: cluster 2026-03-09T16:00:29.666989+0000 mon.a (mon.0) 2448 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: audit 2026-03-09T16:00:29.675142+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: audit 2026-03-09T16:00:29.675142+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: cluster 2026-03-09T16:00:29.678997+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: cluster 2026-03-09T16:00:29.678997+0000 mon.a (mon.0) 2450 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: audit 2026-03-09T16:00:29.680365+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: audit 2026-03-09T16:00:29.680365+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: audit 2026-03-09T16:00:29.686727+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: audit 2026-03-09T16:00:29.686727+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: audit 2026-03-09T16:00:29.699976+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: audit 2026-03-09T16:00:29.699976+0000 mon.c (mon.2) 304 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: audit 2026-03-09T16:00:29.700107+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:31.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:30 vm09 bash[22983]: audit 2026-03-09T16:00:29.700107+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: audit 2026-03-09T16:00:30.686200+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: audit 2026-03-09T16:00:30.686200+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: audit 2026-03-09T16:00:30.686259+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: audit 2026-03-09T16:00:30.686259+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: audit 2026-03-09T16:00:30.686813+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: audit 2026-03-09T16:00:30.686813+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 
192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: cluster 2026-03-09T16:00:30.689237+0000 mon.a (mon.0) 2455 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: cluster 2026-03-09T16:00:30.689237+0000 mon.a (mon.0) 2455 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: audit 2026-03-09T16:00:30.692295+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: audit 2026-03-09T16:00:30.692295+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: cluster 2026-03-09T16:00:30.733805+0000 mgr.y (mgr.14520) 276 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:32.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:31 vm09 bash[22983]: cluster 2026-03-09T16:00:30.733805+0000 mgr.y (mgr.14520) 276 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:32.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: audit 2026-03-09T16:00:30.686200+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:32.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: audit 2026-03-09T16:00:30.686200+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: audit 2026-03-09T16:00:30.686259+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: audit 2026-03-09T16:00:30.686259+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: audit 2026-03-09T16:00:30.686813+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: audit 2026-03-09T16:00:30.686813+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 
192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: cluster 2026-03-09T16:00:30.689237+0000 mon.a (mon.0) 2455 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: cluster 2026-03-09T16:00:30.689237+0000 mon.a (mon.0) 2455 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: audit 2026-03-09T16:00:30.692295+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: audit 2026-03-09T16:00:30.692295+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: cluster 2026-03-09T16:00:30.733805+0000 mgr.y (mgr.14520) 276 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:31 vm01 bash[28152]: cluster 2026-03-09T16:00:30.733805+0000 mgr.y (mgr.14520) 276 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: audit 2026-03-09T16:00:30.686200+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: audit 2026-03-09T16:00:30.686200+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: audit 2026-03-09T16:00:30.686259+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: audit 2026-03-09T16:00:30.686259+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: audit 2026-03-09T16:00:30.686813+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: audit 2026-03-09T16:00:30.686813+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 
192.168.123.101:0/4222832824' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: cluster 2026-03-09T16:00:30.689237+0000 mon.a (mon.0) 2455 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: cluster 2026-03-09T16:00:30.689237+0000 mon.a (mon.0) 2455 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: audit 2026-03-09T16:00:30.692295+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: audit 2026-03-09T16:00:30.692295+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]: dispatch 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: cluster 2026-03-09T16:00:30.733805+0000 mgr.y (mgr.14520) 276 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:32.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:31 vm01 bash[20728]: cluster 2026-03-09T16:00:30.733805+0000 mgr.y (mgr.14520) 276 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.693634+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.693634+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: cluster 2026-03-09T16:00:31.712022+0000 mon.a (mon.0) 2458 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: cluster 2026-03-09T16:00:31.712022+0000 mon.a (mon.0) 2458 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.724124+0000 mon.c (mon.2) 305 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.724124+0000 mon.c (mon.2) 305 : audit [INF] from='client.? 
192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.725213+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.725213+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.725736+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.725736+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.725893+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.725893+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.726358+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.726358+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.726567+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:33.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:32 vm09 bash[22983]: audit 2026-03-09T16:00:31.726567+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.693634+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.693634+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: cluster 2026-03-09T16:00:31.712022+0000 mon.a (mon.0) 2458 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: cluster 2026-03-09T16:00:31.712022+0000 mon.a (mon.0) 2458 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.724124+0000 mon.c (mon.2) 305 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.724124+0000 mon.c (mon.2) 305 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.725213+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.725213+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.725736+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.725736+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.725893+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.725893+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.726358+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.726358+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.726567+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:32 vm01 bash[28152]: audit 2026-03-09T16:00:31.726567+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:00:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:00:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.693634+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.693634+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm01-59610-58"}]': finished 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: cluster 2026-03-09T16:00:31.712022+0000 mon.a (mon.0) 2458 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: cluster 2026-03-09T16:00:31.712022+0000 mon.a (mon.0) 2458 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.724124+0000 mon.c (mon.2) 305 : audit [INF] from='client.? 
192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.724124+0000 mon.c (mon.2) 305 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.725213+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.725213+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.725736+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.725736+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.725893+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.725893+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.726358+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.726358+0000 mon.c (mon.2) 307 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.726567+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:32 vm01 bash[20728]: audit 2026-03-09T16:00:31.726567+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:32.725613+0000 mon.a (mon.0) 2462 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:32.725613+0000 mon.a (mon.0) 2462 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:32.730321+0000 mon.c (mon.2) 308 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:32.730321+0000 mon.c (mon.2) 308 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: cluster 2026-03-09T16:00:32.731110+0000 mon.a (mon.0) 2463 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: cluster 2026-03-09T16:00:32.731110+0000 mon.a (mon.0) 2463 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: cluster 2026-03-09T16:00:32.734121+0000 mgr.y (mgr.14520) 277 : cluster [DBG] pgmap v416: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: cluster 2026-03-09T16:00:32.734121+0000 mgr.y (mgr.14520) 277 : cluster [DBG] pgmap v416: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:32.734148+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:32.734148+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:33.260637+0000 mon.a (mon.0) 2465 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:33.260637+0000 mon.a (mon.0) 2465 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:33.584582+0000 mon.a (mon.0) 2466 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:33.584582+0000 mon.a (mon.0) 2466 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:33.585235+0000 mon.a (mon.0) 2467 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:33.585235+0000 mon.a (mon.0) 2467 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:33.622609+0000 mon.a (mon.0) 2468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:33 vm09 bash[22983]: audit 2026-03-09T16:00:33.622609+0000 mon.a (mon.0) 2468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:32.725613+0000 mon.a (mon.0) 2462 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:32.725613+0000 mon.a (mon.0) 2462 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:32.730321+0000 mon.c (mon.2) 308 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:32.730321+0000 mon.c (mon.2) 308 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: cluster 2026-03-09T16:00:32.731110+0000 mon.a (mon.0) 2463 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: cluster 2026-03-09T16:00:32.731110+0000 mon.a (mon.0) 2463 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: cluster 2026-03-09T16:00:32.734121+0000 mgr.y (mgr.14520) 277 : cluster [DBG] pgmap v416: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: cluster 2026-03-09T16:00:32.734121+0000 mgr.y (mgr.14520) 277 : cluster [DBG] pgmap v416: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:32.734148+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:32.734148+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:33.260637+0000 mon.a (mon.0) 2465 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:33.260637+0000 mon.a (mon.0) 2465 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:33.584582+0000 mon.a (mon.0) 2466 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:33.584582+0000 mon.a (mon.0) 2466 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:33.585235+0000 mon.a (mon.0) 2467 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:33.585235+0000 mon.a (mon.0) 2467 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:33.622609+0000 mon.a (mon.0) 2468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:33 vm01 bash[28152]: audit 2026-03-09T16:00:33.622609+0000 mon.a (mon.0) 2468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:32.725613+0000 mon.a (mon.0) 2462 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:34.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:32.725613+0000 mon.a (mon.0) 2462 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:32.730321+0000 mon.c (mon.2) 308 : audit [INF] from='client.? 
192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:32.730321+0000 mon.c (mon.2) 308 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: cluster 2026-03-09T16:00:32.731110+0000 mon.a (mon.0) 2463 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: cluster 2026-03-09T16:00:32.731110+0000 mon.a (mon.0) 2463 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: cluster 2026-03-09T16:00:32.734121+0000 mgr.y (mgr.14520) 277 : cluster [DBG] pgmap v416: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: cluster 2026-03-09T16:00:32.734121+0000 mgr.y (mgr.14520) 277 : cluster [DBG] pgmap v416: 292 pgs: 292 active+clean; 8.3 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:32.734148+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:32.734148+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:33.260637+0000 mon.a (mon.0) 2465 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:33.260637+0000 mon.a (mon.0) 2465 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:33.584582+0000 mon.a (mon.0) 2466 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:33.584582+0000 mon.a (mon.0) 2466 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:33.585235+0000 mon.a (mon.0) 2467 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:33.585235+0000 mon.a (mon.0) 2467 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:33.622609+0000 mon.a (mon.0) 2468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:34.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:33 vm01 bash[20728]: audit 2026-03-09T16:00:33.622609+0000 mon.a (mon.0) 2468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:00:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:34 vm09 bash[22983]: cluster 2026-03-09T16:00:33.750354+0000 mon.a (mon.0) 2469 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T16:00:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:34 vm09 bash[22983]: cluster 2026-03-09T16:00:33.750354+0000 mon.a (mon.0) 2469 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T16:00:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:34 vm09 bash[22983]: audit 2026-03-09T16:00:34.739774+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:34 vm09 bash[22983]: audit 2026-03-09T16:00:34.739774+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:34 vm09 bash[22983]: cluster 2026-03-09T16:00:34.743773+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T16:00:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:34 vm09 bash[22983]: cluster 2026-03-09T16:00:34.743773+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T16:00:35.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:34 vm01 bash[28152]: cluster 2026-03-09T16:00:33.750354+0000 mon.a (mon.0) 2469 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T16:00:35.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:34 vm01 bash[28152]: cluster 2026-03-09T16:00:33.750354+0000 mon.a (mon.0) 2469 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T16:00:35.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:34 vm01 bash[28152]: audit 2026-03-09T16:00:34.739774+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:35.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:34 vm01 bash[28152]: audit 2026-03-09T16:00:34.739774+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:35.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:34 vm01 bash[28152]: cluster 2026-03-09T16:00:34.743773+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T16:00:35.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:34 vm01 bash[28152]: cluster 2026-03-09T16:00:34.743773+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T16:00:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:34 vm01 bash[20728]: cluster 2026-03-09T16:00:33.750354+0000 mon.a (mon.0) 2469 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T16:00:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:34 vm01 bash[20728]: cluster 2026-03-09T16:00:33.750354+0000 mon.a (mon.0) 2469 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-09T16:00:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:34 vm01 bash[20728]: audit 2026-03-09T16:00:34.739774+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:34 vm01 bash[20728]: audit 2026-03-09T16:00:34.739774+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm01-59610-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:34 vm01 bash[20728]: cluster 2026-03-09T16:00:34.743773+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T16:00:35.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:34 vm01 bash[20728]: cluster 2026-03-09T16:00:34.743773+0000 mon.a (mon.0) 2471 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T16:00:36.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:35 vm09 bash[22983]: cluster 2026-03-09T16:00:34.734930+0000 mgr.y (mgr.14520) 278 : cluster [DBG] pgmap v418: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:36.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:35 vm09 bash[22983]: cluster 2026-03-09T16:00:34.734930+0000 mgr.y (mgr.14520) 278 : cluster [DBG] pgmap v418: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:36.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:35 vm09 bash[22983]: cluster 2026-03-09T16:00:35.747299+0000 mon.a (mon.0) 2472 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T16:00:36.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:35 vm09 bash[22983]: cluster 2026-03-09T16:00:35.747299+0000 mon.a (mon.0) 2472 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T16:00:36.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:35 vm01 bash[28152]: cluster 2026-03-09T16:00:34.734930+0000 mgr.y (mgr.14520) 278 : cluster [DBG] pgmap v418: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:36.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:35 vm01 bash[28152]: cluster 2026-03-09T16:00:34.734930+0000 mgr.y (mgr.14520) 278 : cluster [DBG] pgmap v418: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:36.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:35 vm01 bash[28152]: cluster 2026-03-09T16:00:35.747299+0000 mon.a (mon.0) 2472 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T16:00:36.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:35 vm01 bash[28152]: cluster 2026-03-09T16:00:35.747299+0000 mon.a (mon.0) 2472 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T16:00:36.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:35 vm01 bash[20728]: cluster 2026-03-09T16:00:34.734930+0000 mgr.y (mgr.14520) 278 : cluster [DBG] pgmap v418: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:36.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:35 vm01 bash[20728]: cluster 2026-03-09T16:00:34.734930+0000 mgr.y (mgr.14520) 278 : cluster [DBG] pgmap v418: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:36.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:35 vm01 bash[20728]: cluster 2026-03-09T16:00:35.747299+0000 mon.a (mon.0) 2472 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T16:00:36.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
09 16:00:35 vm01 bash[20728]: cluster 2026-03-09T16:00:35.747299+0000 mon.a (mon.0) 2472 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T16:00:36.883 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:00:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: audit 2026-03-09T16:00:36.422287+0000 mgr.y (mgr.14520) 279 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: audit 2026-03-09T16:00:36.422287+0000 mgr.y (mgr.14520) 279 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: cluster 2026-03-09T16:00:36.735291+0000 mgr.y (mgr.14520) 280 : cluster [DBG] pgmap v421: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: cluster 2026-03-09T16:00:36.735291+0000 mgr.y (mgr.14520) 280 : cluster [DBG] pgmap v421: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: cluster 2026-03-09T16:00:36.747951+0000 mon.a (mon.0) 2473 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: cluster 2026-03-09T16:00:36.747951+0000 mon.a (mon.0) 2473 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: audit 2026-03-09T16:00:36.750923+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: audit 2026-03-09T16:00:36.750923+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: audit 2026-03-09T16:00:36.751177+0000 mon.a (mon.0) 2474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: audit 2026-03-09T16:00:36.751177+0000 mon.a (mon.0) 2474 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: cluster 2026-03-09T16:00:36.765903+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:38.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:37 vm09 bash[22983]: cluster 2026-03-09T16:00:36.765903+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: audit 2026-03-09T16:00:36.422287+0000 mgr.y (mgr.14520) 279 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: audit 2026-03-09T16:00:36.422287+0000 mgr.y (mgr.14520) 279 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: cluster 2026-03-09T16:00:36.735291+0000 mgr.y (mgr.14520) 280 : cluster [DBG] pgmap v421: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: cluster 2026-03-09T16:00:36.735291+0000 mgr.y (mgr.14520) 280 : cluster [DBG] pgmap v421: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: cluster 2026-03-09T16:00:36.747951+0000 mon.a (mon.0) 2473 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: cluster 2026-03-09T16:00:36.747951+0000 mon.a (mon.0) 2473 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: audit 2026-03-09T16:00:36.750923+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: audit 2026-03-09T16:00:36.750923+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: audit 2026-03-09T16:00:36.751177+0000 mon.a (mon.0) 2474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: audit 2026-03-09T16:00:36.751177+0000 mon.a (mon.0) 2474 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: cluster 2026-03-09T16:00:36.765903+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:38.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:37 vm01 bash[28152]: cluster 2026-03-09T16:00:36.765903+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: audit 2026-03-09T16:00:36.422287+0000 mgr.y (mgr.14520) 279 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: audit 2026-03-09T16:00:36.422287+0000 mgr.y (mgr.14520) 279 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: cluster 2026-03-09T16:00:36.735291+0000 mgr.y (mgr.14520) 280 : cluster [DBG] pgmap v421: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: cluster 2026-03-09T16:00:36.735291+0000 mgr.y (mgr.14520) 280 : cluster [DBG] pgmap v421: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: cluster 2026-03-09T16:00:36.747951+0000 mon.a (mon.0) 2473 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: cluster 2026-03-09T16:00:36.747951+0000 mon.a (mon.0) 2473 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: audit 2026-03-09T16:00:36.750923+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: audit 2026-03-09T16:00:36.750923+0000 mon.c (mon.2) 309 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: audit 2026-03-09T16:00:36.751177+0000 mon.a (mon.0) 2474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: audit 2026-03-09T16:00:36.751177+0000 mon.a (mon.0) 2474 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: cluster 2026-03-09T16:00:36.765903+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:38.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:37 vm01 bash[20728]: cluster 2026-03-09T16:00:36.765903+0000 mon.a (mon.0) 2475 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:7462ddf6:::.RoundTripAppendPP (3017 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RacingRemovePP 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.RacingRemovePP (3067 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP (2123 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP2 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP2 (3215 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolEIOFlag 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: setting pool EIO 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: max_success 100, min_failed 101 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolEIOFlag (3990 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiReads 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiReads (2747 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio (116811 ms total) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [----------] 4 tests from LibRadosAioPP 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.ReadIntoBufferlist 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioPP.ReadIntoBufferlist (3009 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.XattrsRoundTripPP 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioPP.XattrsRoundTripPP (9061 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.RmXattrPP 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RmXattrPP (15076 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.RemoveTestPP 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RemoveTestPP (3113 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: 
[----------] 4 tests from LibRadosAioPP (30259 ms total) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosIoPP.XattrListPP 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosIoPP.XattrListPP (3015 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP (3015 ms total) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleWritePP 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleWritePP (13802 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.WaitForSafePP 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.WaitForSafePP (6980 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP (7140 ms) 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP2 2026-03-09T16:00:38.835 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP2 (7033 ms) 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP3 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP3 (3095 ms) 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripSparseReadPP 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripSparseReadPP (7063 ms) 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripAppendPP 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripAppendPP (6655 ms) 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsCompletePP 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsCompletePP (7395 ms) 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsSafePP 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsSafePP (7112 ms) 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ReturnValuePP 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ReturnValuePP (7044 ms) 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushPP 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushPP (7226 ms) 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushAsyncPP 2026-03-09T16:00:38.836 
INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushAsyncPP (7065 ms) 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP 2026-03-09T16:00:38.836 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP (7123 ms) 2026-03-09T16:00:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:38 vm09 bash[22983]: audit 2026-03-09T16:00:37.748394+0000 mon.a (mon.0) 2476 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:38 vm09 bash[22983]: audit 2026-03-09T16:00:37.748394+0000 mon.a (mon.0) 2476 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:38 vm09 bash[22983]: audit 2026-03-09T16:00:37.753962+0000 mon.c (mon.2) 310 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:38 vm09 bash[22983]: audit 2026-03-09T16:00:37.753962+0000 mon.c (mon.2) 310 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:38 vm09 bash[22983]: cluster 2026-03-09T16:00:37.755096+0000 mon.a (mon.0) 2477 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T16:00:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:38 vm09 bash[22983]: cluster 2026-03-09T16:00:37.755096+0000 mon.a (mon.0) 2477 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T16:00:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:38 vm09 bash[22983]: audit 2026-03-09T16:00:37.755755+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:38 vm09 bash[22983]: audit 2026-03-09T16:00:37.755755+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:38 vm01 bash[28152]: audit 2026-03-09T16:00:37.748394+0000 mon.a (mon.0) 2476 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:38 vm01 bash[28152]: audit 2026-03-09T16:00:37.748394+0000 mon.a (mon.0) 2476 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:38 vm01 bash[28152]: audit 2026-03-09T16:00:37.753962+0000 mon.c (mon.2) 310 : audit [INF] from='client.? 
192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:38 vm01 bash[28152]: audit 2026-03-09T16:00:37.753962+0000 mon.c (mon.2) 310 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:38 vm01 bash[28152]: cluster 2026-03-09T16:00:37.755096+0000 mon.a (mon.0) 2477 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:38 vm01 bash[28152]: cluster 2026-03-09T16:00:37.755096+0000 mon.a (mon.0) 2477 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:38 vm01 bash[28152]: audit 2026-03-09T16:00:37.755755+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:38 vm01 bash[28152]: audit 2026-03-09T16:00:37.755755+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:38 vm01 bash[20728]: audit 2026-03-09T16:00:37.748394+0000 mon.a (mon.0) 2476 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:38 vm01 bash[20728]: audit 2026-03-09T16:00:37.748394+0000 mon.a (mon.0) 2476 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:38 vm01 bash[20728]: audit 2026-03-09T16:00:37.753962+0000 mon.c (mon.2) 310 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:38 vm01 bash[20728]: audit 2026-03-09T16:00:37.753962+0000 mon.c (mon.2) 310 : audit [INF] from='client.? 192.168.123.101:0/2986974663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:38 vm01 bash[20728]: cluster 2026-03-09T16:00:37.755096+0000 mon.a (mon.0) 2477 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:38 vm01 bash[20728]: cluster 2026-03-09T16:00:37.755096+0000 mon.a (mon.0) 2477 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:38 vm01 bash[20728]: audit 2026-03-09T16:00:37.755755+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:39.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:38 vm01 bash[20728]: audit 2026-03-09T16:00:37.755755+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]: dispatch 2026-03-09T16:00:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:39 vm09 bash[22983]: cluster 2026-03-09T16:00:38.735720+0000 mgr.y (mgr.14520) 281 : cluster [DBG] pgmap v424: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:39 vm09 bash[22983]: cluster 2026-03-09T16:00:38.735720+0000 mgr.y (mgr.14520) 281 : cluster [DBG] pgmap v424: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:39 vm09 bash[22983]: audit 2026-03-09T16:00:38.826095+0000 mon.a (mon.0) 2479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:39 vm09 bash[22983]: audit 2026-03-09T16:00:38.826095+0000 mon.a (mon.0) 2479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:39 vm09 bash[22983]: cluster 2026-03-09T16:00:38.829179+0000 mon.a (mon.0) 2480 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T16:00:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:39 vm09 bash[22983]: cluster 2026-03-09T16:00:38.829179+0000 mon.a (mon.0) 2480 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T16:00:40.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:39 vm01 bash[28152]: cluster 2026-03-09T16:00:38.735720+0000 mgr.y (mgr.14520) 281 : cluster [DBG] pgmap v424: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:40.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:39 vm01 bash[28152]: cluster 2026-03-09T16:00:38.735720+0000 mgr.y (mgr.14520) 281 : cluster [DBG] pgmap v424: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:40.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:39 vm01 bash[28152]: audit 2026-03-09T16:00:38.826095+0000 mon.a (mon.0) 2479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:40.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:39 vm01 bash[28152]: audit 2026-03-09T16:00:38.826095+0000 mon.a (mon.0) 2479 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:40.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:39 vm01 bash[28152]: cluster 2026-03-09T16:00:38.829179+0000 mon.a (mon.0) 2480 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T16:00:40.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:39 vm01 bash[28152]: cluster 2026-03-09T16:00:38.829179+0000 mon.a (mon.0) 2480 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T16:00:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:39 vm01 bash[20728]: cluster 2026-03-09T16:00:38.735720+0000 mgr.y (mgr.14520) 281 : cluster [DBG] pgmap v424: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:39 vm01 bash[20728]: cluster 2026-03-09T16:00:38.735720+0000 mgr.y (mgr.14520) 281 : cluster [DBG] pgmap v424: 292 pgs: 292 active+clean; 8.3 MiB data, 761 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:39 vm01 bash[20728]: audit 2026-03-09T16:00:38.826095+0000 mon.a (mon.0) 2479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:39 vm01 bash[20728]: audit 2026-03-09T16:00:38.826095+0000 mon.a (mon.0) 2479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm01-59610-59"}]': finished 2026-03-09T16:00:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:39 vm01 bash[20728]: cluster 2026-03-09T16:00:38.829179+0000 mon.a (mon.0) 2480 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T16:00:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:39 vm01 bash[20728]: cluster 2026-03-09T16:00:38.829179+0000 mon.a (mon.0) 2480 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T16:00:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:40 vm09 bash[22983]: cluster 2026-03-09T16:00:39.868854+0000 mon.a (mon.0) 2481 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T16:00:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:40 vm09 bash[22983]: cluster 2026-03-09T16:00:39.868854+0000 mon.a (mon.0) 2481 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T16:00:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:40 vm09 bash[22983]: audit 2026-03-09T16:00:39.883337+0000 mon.c (mon.2) 311 : audit [INF] from='client.? 192.168.123.101:0/3980883359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:40 vm09 bash[22983]: audit 2026-03-09T16:00:39.883337+0000 mon.c (mon.2) 311 : audit [INF] from='client.? 192.168.123.101:0/3980883359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:40 vm09 bash[22983]: audit 2026-03-09T16:00:39.884048+0000 mon.a (mon.0) 2482 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:40 vm09 bash[22983]: audit 2026-03-09T16:00:39.884048+0000 mon.a (mon.0) 2482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:41.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:40 vm01 bash[28152]: cluster 2026-03-09T16:00:39.868854+0000 mon.a (mon.0) 2481 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T16:00:41.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:40 vm01 bash[28152]: cluster 2026-03-09T16:00:39.868854+0000 mon.a (mon.0) 2481 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T16:00:41.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:40 vm01 bash[28152]: audit 2026-03-09T16:00:39.883337+0000 mon.c (mon.2) 311 : audit [INF] from='client.? 192.168.123.101:0/3980883359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:41.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:40 vm01 bash[28152]: audit 2026-03-09T16:00:39.883337+0000 mon.c (mon.2) 311 : audit [INF] from='client.? 192.168.123.101:0/3980883359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:41.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:40 vm01 bash[28152]: audit 2026-03-09T16:00:39.884048+0000 mon.a (mon.0) 2482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:41.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:40 vm01 bash[28152]: audit 2026-03-09T16:00:39.884048+0000 mon.a (mon.0) 2482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:40 vm01 bash[20728]: cluster 2026-03-09T16:00:39.868854+0000 mon.a (mon.0) 2481 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T16:00:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:40 vm01 bash[20728]: cluster 2026-03-09T16:00:39.868854+0000 mon.a (mon.0) 2481 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in 2026-03-09T16:00:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:40 vm01 bash[20728]: audit 2026-03-09T16:00:39.883337+0000 mon.c (mon.2) 311 : audit [INF] from='client.? 192.168.123.101:0/3980883359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:40 vm01 bash[20728]: audit 2026-03-09T16:00:39.883337+0000 mon.c (mon.2) 311 : audit [INF] from='client.? 
192.168.123.101:0/3980883359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:40 vm01 bash[20728]: audit 2026-03-09T16:00:39.884048+0000 mon.a (mon.0) 2482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:40 vm01 bash[20728]: audit 2026-03-09T16:00:39.884048+0000 mon.a (mon.0) 2482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:41 vm01 bash[28152]: cluster 2026-03-09T16:00:40.736101+0000 mgr.y (mgr.14520) 282 : cluster [DBG] pgmap v427: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:41 vm01 bash[28152]: cluster 2026-03-09T16:00:40.736101+0000 mgr.y (mgr.14520) 282 : cluster [DBG] pgmap v427: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:41 vm01 bash[28152]: audit 2026-03-09T16:00:40.864084+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:41 vm01 bash[28152]: audit 2026-03-09T16:00:40.864084+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:41 vm01 bash[28152]: cluster 2026-03-09T16:00:40.867685+0000 mon.a (mon.0) 2484 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T16:00:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:41 vm01 bash[28152]: cluster 2026-03-09T16:00:40.867685+0000 mon.a (mon.0) 2484 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T16:00:42.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:41 vm01 bash[20728]: cluster 2026-03-09T16:00:40.736101+0000 mgr.y (mgr.14520) 282 : cluster [DBG] pgmap v427: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:41 vm01 bash[20728]: cluster 2026-03-09T16:00:40.736101+0000 mgr.y (mgr.14520) 282 : cluster [DBG] pgmap v427: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:41 vm01 bash[20728]: audit 2026-03-09T16:00:40.864084+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:41 vm01 bash[20728]: audit 2026-03-09T16:00:40.864084+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:41 vm01 bash[20728]: cluster 2026-03-09T16:00:40.867685+0000 mon.a (mon.0) 2484 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T16:00:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:41 vm01 bash[20728]: cluster 2026-03-09T16:00:40.867685+0000 mon.a (mon.0) 2484 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T16:00:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:41 vm09 bash[22983]: cluster 2026-03-09T16:00:40.736101+0000 mgr.y (mgr.14520) 282 : cluster [DBG] pgmap v427: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:41 vm09 bash[22983]: cluster 2026-03-09T16:00:40.736101+0000 mgr.y (mgr.14520) 282 : cluster [DBG] pgmap v427: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:41 vm09 bash[22983]: audit 2026-03-09T16:00:40.864084+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:41 vm09 bash[22983]: audit 2026-03-09T16:00:40.864084+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm01-59610-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:41 vm09 bash[22983]: cluster 2026-03-09T16:00:40.867685+0000 mon.a (mon.0) 2484 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T16:00:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:41 vm09 bash[22983]: cluster 2026-03-09T16:00:40.867685+0000 mon.a (mon.0) 2484 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: cluster 2026-03-09T16:00:41.890121+0000 mon.a (mon.0) 2485 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: cluster 2026-03-09T16:00:41.890121+0000 mon.a (mon.0) 2485 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.916014+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 
192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.916014+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.916678+0000 mon.a (mon.0) 2486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.916678+0000 mon.a (mon.0) 2486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.921195+0000 mon.c (mon.2) 313 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.921195+0000 mon.c (mon.2) 313 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.921430+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.921430+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.922624+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.922624+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.922866+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:41.922866+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.770300+0000 mon.a (mon.0) 2489 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.770300+0000 mon.a (mon.0) 2489 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.770367+0000 mon.a (mon.0) 2490 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.770367+0000 mon.a (mon.0) 2490 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.770407+0000 mon.a (mon.0) 2491 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.15", "id": [5, 1]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.770407+0000 mon.a (mon.0) 2491 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.15", "id": [5, 1]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.773258+0000 mon.a (mon.0) 2492 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.773258+0000 mon.a (mon.0) 2492 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.778710+0000 mon.a (mon.0) 2493 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]: dispatch 2026-03-09T16:00:43.177 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.778710+0000 mon.a (mon.0) 2493 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.841774+0000 mon.c (mon.2) 315 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.841774+0000 mon.c (mon.2) 315 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.841979+0000 mon.a (mon.0) 2494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.841979+0000 mon.a (mon.0) 2494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.842379+0000 mon.c (mon.2) 316 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.842379+0000 mon.c (mon.2) 316 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.842555+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:42 vm01 bash[28152]: audit 2026-03-09T16:00:42.842555+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:00:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:00:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: cluster 2026-03-09T16:00:41.890121+0000 mon.a (mon.0) 2485 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: cluster 2026-03-09T16:00:41.890121+0000 mon.a (mon.0) 2485 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.916014+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.916014+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.916678+0000 mon.a (mon.0) 2486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.916678+0000 mon.a (mon.0) 2486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.921195+0000 mon.c (mon.2) 313 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.921195+0000 mon.c (mon.2) 313 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.921430+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.921430+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.922624+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 
192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.922624+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.922866+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:41.922866+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.770300+0000 mon.a (mon.0) 2489 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.770300+0000 mon.a (mon.0) 2489 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.770367+0000 mon.a (mon.0) 2490 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.770367+0000 mon.a (mon.0) 2490 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.770407+0000 mon.a (mon.0) 2491 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.15", "id": [5, 1]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.770407+0000 mon.a (mon.0) 2491 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.15", "id": [5, 1]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.773258+0000 mon.a (mon.0) 2492 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.773258+0000 mon.a (mon.0) 2492 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.778710+0000 mon.a (mon.0) 2493 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.778710+0000 mon.a (mon.0) 2493 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.841774+0000 mon.c (mon.2) 315 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.841774+0000 mon.c (mon.2) 315 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.841979+0000 mon.a (mon.0) 2494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.841979+0000 mon.a (mon.0) 2494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.842379+0000 mon.c (mon.2) 316 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.842379+0000 mon.c (mon.2) 316 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.842555+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:43.178 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:42 vm01 bash[20728]: audit 2026-03-09T16:00:42.842555+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: cluster 2026-03-09T16:00:41.890121+0000 mon.a (mon.0) 2485 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: cluster 2026-03-09T16:00:41.890121+0000 mon.a (mon.0) 2485 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.916014+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.916014+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.916678+0000 mon.a (mon.0) 2486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.916678+0000 mon.a (mon.0) 2486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.921195+0000 mon.c (mon.2) 313 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.921195+0000 mon.c (mon.2) 313 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.921430+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.921430+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.922624+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.922624+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.922866+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:41.922866+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.770300+0000 mon.a (mon.0) 2489 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.770300+0000 mon.a (mon.0) 2489 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.770367+0000 mon.a (mon.0) 2490 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.770367+0000 mon.a (mon.0) 2490 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.770407+0000 mon.a (mon.0) 2491 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.15", "id": [5, 1]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.770407+0000 mon.a (mon.0) 2491 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", 
"format": "json", "pgid": "39.15", "id": [5, 1]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.773258+0000 mon.a (mon.0) 2492 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.773258+0000 mon.a (mon.0) 2492 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.778710+0000 mon.a (mon.0) 2493 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.778710+0000 mon.a (mon.0) 2493 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.841774+0000 mon.c (mon.2) 315 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.841774+0000 mon.c (mon.2) 315 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.841979+0000 mon.a (mon.0) 2494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.841979+0000 mon.a (mon.0) 2494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.842379+0000 mon.c (mon.2) 316 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.842379+0000 mon.c (mon.2) 316 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.842555+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:42 vm09 bash[22983]: audit 2026-03-09T16:00:42.842555+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]: dispatch 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: cluster 2026-03-09T16:00:42.736487+0000 mgr.y (mgr.14520) 283 : cluster [DBG] pgmap v430: 292 pgs: 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: cluster 2026-03-09T16:00:42.736487+0000 mgr.y (mgr.14520) 283 : cluster [DBG] pgmap v430: 292 pgs: 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: cluster 2026-03-09T16:00:42.903505+0000 mon.a (mon.0) 2496 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: cluster 2026-03-09T16:00:42.903505+0000 mon.a (mon.0) 2496 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911288+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911288+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911318+0000 mon.a (mon.0) 2498 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911318+0000 mon.a (mon.0) 2498 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911334+0000 mon.a (mon.0) 2499 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911334+0000 mon.a (mon.0) 2499 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911347+0000 mon.a (mon.0) 2500 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.15", "id": [5, 1]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911347+0000 mon.a (mon.0) 2500 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.15", "id": [5, 1]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911359+0000 mon.a (mon.0) 2501 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911359+0000 mon.a (mon.0) 2501 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911372+0000 mon.a (mon.0) 2502 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911372+0000 mon.a (mon.0) 2502 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911394+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.911394+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]': finished 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.921467+0000 mon.c (mon.2) 317 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.921467+0000 mon.c (mon.2) 317 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: cluster 2026-03-09T16:00:42.959958+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: cluster 2026-03-09T16:00:42.959958+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.966103+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:43 vm09 bash[22983]: audit 2026-03-09T16:00:42.966103+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: cluster 2026-03-09T16:00:42.736487+0000 mgr.y (mgr.14520) 283 : cluster [DBG] pgmap v430: 292 pgs: 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: cluster 2026-03-09T16:00:42.736487+0000 mgr.y (mgr.14520) 283 : cluster [DBG] pgmap v430: 292 pgs: 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: cluster 2026-03-09T16:00:42.903505+0000 mon.a (mon.0) 2496 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: cluster 2026-03-09T16:00:42.903505+0000 mon.a (mon.0) 2496 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911288+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911288+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911318+0000 mon.a (mon.0) 2498 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]': finished 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911318+0000 mon.a (mon.0) 2498 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]': finished 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911334+0000 mon.a (mon.0) 2499 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911334+0000 mon.a (mon.0) 2499 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911347+0000 mon.a (mon.0) 2500 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.15", "id": [5, 1]}]': finished 2026-03-09T16:00:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911347+0000 mon.a (mon.0) 2500 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.15", "id": [5, 1]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911359+0000 mon.a (mon.0) 2501 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911359+0000 mon.a (mon.0) 2501 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911372+0000 mon.a (mon.0) 2502 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911372+0000 mon.a (mon.0) 2502 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911394+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.911394+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.921467+0000 mon.c (mon.2) 317 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.921467+0000 mon.c (mon.2) 317 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: cluster 2026-03-09T16:00:42.959958+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: cluster 2026-03-09T16:00:42.959958+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.966103+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:43 vm01 bash[28152]: audit 2026-03-09T16:00:42.966103+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: cluster 2026-03-09T16:00:42.736487+0000 mgr.y (mgr.14520) 283 : cluster [DBG] pgmap v430: 292 pgs: 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: cluster 2026-03-09T16:00:42.736487+0000 mgr.y (mgr.14520) 283 : cluster [DBG] pgmap v430: 292 pgs: 292 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: cluster 2026-03-09T16:00:42.903505+0000 mon.a (mon.0) 2496 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: cluster 2026-03-09T16:00:42.903505+0000 mon.a (mon.0) 2496 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911288+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911288+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm01-59610-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911318+0000 mon.a (mon.0) 2498 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911318+0000 mon.a (mon.0) 2498 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.b", "id": [3, 7]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911334+0000 mon.a (mon.0) 2499 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911334+0000 mon.a (mon.0) 2499 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911347+0000 mon.a (mon.0) 2500 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.15", "id": [5, 1]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911347+0000 mon.a (mon.0) 2500 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.15", "id": [5, 1]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911359+0000 mon.a (mon.0) 2501 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911359+0000 mon.a (mon.0) 2501 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.17", "id": [3, 1]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911372+0000 mon.a (mon.0) 2502 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911372+0000 mon.a (mon.0) 2502 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1c", "id": [6, 2]}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911394+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.911394+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-45"}]': finished 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.921467+0000 mon.c (mon.2) 317 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.921467+0000 mon.c (mon.2) 317 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: cluster 2026-03-09T16:00:42.959958+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: cluster 2026-03-09T16:00:42.959958+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.966103+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:43 vm01 bash[20728]: audit 2026-03-09T16:00:42.966103+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: cluster 2026-03-09T16:00:43.939632+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: cluster 2026-03-09T16:00:43.939632+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: audit 2026-03-09T16:00:44.170296+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: audit 2026-03-09T16:00:44.170296+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: audit 2026-03-09T16:00:44.918997+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: audit 2026-03-09T16:00:44.918997+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: cluster 2026-03-09T16:00:44.927605+0000 mon.a (mon.0) 2509 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: cluster 2026-03-09T16:00:44.927605+0000 mon.a (mon.0) 2509 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: audit 2026-03-09T16:00:44.929299+0000 mon.c (mon.2) 318 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: audit 2026-03-09T16:00:44.929299+0000 mon.c (mon.2) 318 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: audit 2026-03-09T16:00:44.937823+0000 mon.a (mon.0) 2510 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:44 vm09 bash[22983]: audit 2026-03-09T16:00:44.937823+0000 mon.a (mon.0) 2510 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: cluster 2026-03-09T16:00:43.939632+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T16:00:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: cluster 2026-03-09T16:00:43.939632+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T16:00:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: audit 2026-03-09T16:00:44.170296+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: audit 2026-03-09T16:00:44.170296+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: audit 2026-03-09T16:00:44.918997+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: audit 2026-03-09T16:00:44.918997+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: cluster 2026-03-09T16:00:44.927605+0000 mon.a (mon.0) 2509 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T16:00:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: cluster 2026-03-09T16:00:44.927605+0000 mon.a (mon.0) 2509 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T16:00:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: audit 2026-03-09T16:00:44.929299+0000 mon.c (mon.2) 318 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: audit 2026-03-09T16:00:44.929299+0000 mon.c (mon.2) 318 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: audit 2026-03-09T16:00:44.937823+0000 mon.a (mon.0) 2510 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:44 vm01 bash[28152]: audit 2026-03-09T16:00:44.937823+0000 mon.a (mon.0) 2510 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: cluster 2026-03-09T16:00:43.939632+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: cluster 2026-03-09T16:00:43.939632+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: audit 2026-03-09T16:00:44.170296+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: audit 2026-03-09T16:00:44.170296+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: audit 2026-03-09T16:00:44.918997+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: audit 2026-03-09T16:00:44.918997+0000 mon.a (mon.0) 2508 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm01-59610-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: cluster 2026-03-09T16:00:44.927605+0000 mon.a (mon.0) 2509 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: cluster 2026-03-09T16:00:44.927605+0000 mon.a (mon.0) 2509 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: audit 2026-03-09T16:00:44.929299+0000 mon.c (mon.2) 318 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: audit 2026-03-09T16:00:44.929299+0000 mon.c (mon.2) 318 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: audit 2026-03-09T16:00:44.937823+0000 mon.a (mon.0) 2510 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:44 vm01 bash[20728]: audit 2026-03-09T16:00:44.937823+0000 mon.a (mon.0) 2510 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:00:46.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:45 vm09 bash[22983]: cluster 2026-03-09T16:00:44.737046+0000 mgr.y (mgr.14520) 284 : cluster [DBG] pgmap v433: 260 pgs: 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:46.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:45 vm09 bash[22983]: cluster 2026-03-09T16:00:44.737046+0000 mgr.y (mgr.14520) 284 : cluster [DBG] pgmap v433: 260 pgs: 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:46.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:45 vm09 bash[22983]: cluster 2026-03-09T16:00:44.950700+0000 mon.a (mon.0) 2511 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T16:00:46.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:45 vm09 bash[22983]: cluster 2026-03-09T16:00:44.950700+0000 mon.a (mon.0) 2511 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T16:00:46.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:45 vm09 bash[22983]: audit 2026-03-09T16:00:45.921870+0000 mon.a (mon.0) 2512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:46.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:45 vm09 bash[22983]: audit 2026-03-09T16:00:45.921870+0000 mon.a (mon.0) 2512 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:46.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:45 vm09 bash[22983]: cluster 2026-03-09T16:00:45.935830+0000 mon.a (mon.0) 2513 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T16:00:46.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:45 vm09 bash[22983]: cluster 2026-03-09T16:00:45.935830+0000 mon.a (mon.0) 2513 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T16:00:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:45 vm01 bash[28152]: cluster 2026-03-09T16:00:44.737046+0000 mgr.y (mgr.14520) 284 : cluster [DBG] pgmap v433: 260 pgs: 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:45 vm01 bash[28152]: cluster 2026-03-09T16:00:44.737046+0000 mgr.y (mgr.14520) 284 : cluster [DBG] pgmap v433: 260 pgs: 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:45 vm01 bash[28152]: cluster 2026-03-09T16:00:44.950700+0000 mon.a (mon.0) 2511 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T16:00:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:45 vm01 bash[28152]: cluster 2026-03-09T16:00:44.950700+0000 mon.a (mon.0) 2511 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T16:00:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:45 vm01 bash[28152]: audit 2026-03-09T16:00:45.921870+0000 mon.a (mon.0) 2512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:45 vm01 bash[28152]: audit 2026-03-09T16:00:45.921870+0000 mon.a (mon.0) 2512 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:45 vm01 bash[28152]: cluster 2026-03-09T16:00:45.935830+0000 mon.a (mon.0) 2513 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T16:00:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:45 vm01 bash[28152]: cluster 2026-03-09T16:00:45.935830+0000 mon.a (mon.0) 2513 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T16:00:46.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:45 vm01 bash[20728]: cluster 2026-03-09T16:00:44.737046+0000 mgr.y (mgr.14520) 284 : cluster [DBG] pgmap v433: 260 pgs: 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:46.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:45 vm01 bash[20728]: cluster 2026-03-09T16:00:44.737046+0000 mgr.y (mgr.14520) 284 : cluster [DBG] pgmap v433: 260 pgs: 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:46.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:45 vm01 bash[20728]: cluster 2026-03-09T16:00:44.950700+0000 mon.a (mon.0) 2511 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T16:00:46.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:45 vm01 bash[20728]: cluster 2026-03-09T16:00:44.950700+0000 mon.a (mon.0) 2511 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T16:00:46.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:45 vm01 bash[20728]: audit 2026-03-09T16:00:45.921870+0000 mon.a (mon.0) 2512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:46.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:45 vm01 bash[20728]: audit 2026-03-09T16:00:45.921870+0000 mon.a (mon.0) 2512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:00:46.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:45 vm01 bash[20728]: cluster 2026-03-09T16:00:45.935830+0000 mon.a (mon.0) 2513 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T16:00:46.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:45 vm01 bash[20728]: cluster 2026-03-09T16:00:45.935830+0000 mon.a (mon.0) 2513 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T16:00:46.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:00:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:00:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:47 vm09 bash[22983]: audit 2026-03-09T16:00:45.997237+0000 mon.c (mon.2) 319 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:47 vm09 bash[22983]: audit 2026-03-09T16:00:45.997237+0000 mon.c (mon.2) 319 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:47 vm09 bash[22983]: audit 2026-03-09T16:00:45.997525+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:47 vm09 bash[22983]: audit 2026-03-09T16:00:45.997525+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:47 vm01 bash[28152]: audit 2026-03-09T16:00:45.997237+0000 mon.c (mon.2) 319 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:47 vm01 bash[28152]: audit 2026-03-09T16:00:45.997237+0000 mon.c (mon.2) 319 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:47 vm01 bash[28152]: audit 2026-03-09T16:00:45.997525+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:47 vm01 bash[28152]: audit 2026-03-09T16:00:45.997525+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:47 vm01 bash[20728]: audit 2026-03-09T16:00:45.997237+0000 mon.c (mon.2) 319 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:47 vm01 bash[20728]: audit 2026-03-09T16:00:45.997237+0000 mon.c (mon.2) 319 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:47 vm01 bash[20728]: audit 2026-03-09T16:00:45.997525+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:47 vm01 bash[20728]: audit 2026-03-09T16:00:45.997525+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:46.432926+0000 mgr.y (mgr.14520) 285 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:46.432926+0000 mgr.y (mgr.14520) 285 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: cluster 2026-03-09T16:00:46.737484+0000 mgr.y (mgr.14520) 286 : cluster [DBG] pgmap v436: 300 pgs: 40 unknown, 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: cluster 2026-03-09T16:00:46.737484+0000 mgr.y (mgr.14520) 286 : cluster [DBG] pgmap v436: 300 pgs: 40 unknown, 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:47.038642+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:47.038642+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: cluster 2026-03-09T16:00:47.041477+0000 mon.a (mon.0) 2516 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: cluster 2026-03-09T16:00:47.041477+0000 mon.a (mon.0) 2516 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:47.045322+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:47.045322+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:47.045491+0000 mon.c (mon.2) 321 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:47.045491+0000 mon.c (mon.2) 321 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:47.060003+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:47.060003+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:47.060239+0000 mon.a (mon.0) 2518 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:48 vm09 bash[22983]: audit 2026-03-09T16:00:47.060239+0000 mon.a (mon.0) 2518 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:46.432926+0000 mgr.y (mgr.14520) 285 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:46.432926+0000 mgr.y (mgr.14520) 285 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: cluster 2026-03-09T16:00:46.737484+0000 mgr.y (mgr.14520) 286 : cluster [DBG] pgmap v436: 300 pgs: 40 unknown, 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: cluster 2026-03-09T16:00:46.737484+0000 mgr.y (mgr.14520) 286 : cluster [DBG] pgmap v436: 300 pgs: 40 unknown, 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:47.038642+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:47.038642+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: cluster 2026-03-09T16:00:47.041477+0000 mon.a (mon.0) 2516 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: cluster 2026-03-09T16:00:47.041477+0000 mon.a (mon.0) 2516 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:47.045322+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:47.045322+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:47.045491+0000 mon.c (mon.2) 321 : audit [INF] from='client.? 
192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:47.045491+0000 mon.c (mon.2) 321 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:47.060003+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:47.060003+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:47.060239+0000 mon.a (mon.0) 2518 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:48 vm01 bash[28152]: audit 2026-03-09T16:00:47.060239+0000 mon.a (mon.0) 2518 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:46.432926+0000 mgr.y (mgr.14520) 285 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:46.432926+0000 mgr.y (mgr.14520) 285 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: cluster 2026-03-09T16:00:46.737484+0000 mgr.y (mgr.14520) 286 : cluster [DBG] pgmap v436: 300 pgs: 40 unknown, 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: cluster 2026-03-09T16:00:46.737484+0000 mgr.y (mgr.14520) 286 : cluster [DBG] pgmap v436: 300 pgs: 40 unknown, 3 activating, 2 peering, 255 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:47.038642+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:47.038642+0000 mon.a (mon.0) 2515 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: cluster 2026-03-09T16:00:47.041477+0000 mon.a (mon.0) 2516 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: cluster 2026-03-09T16:00:47.041477+0000 mon.a (mon.0) 2516 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:47.045322+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:47.045322+0000 mon.c (mon.2) 320 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:47.045491+0000 mon.c (mon.2) 321 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:47.045491+0000 mon.c (mon.2) 321 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:47.060003+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:47.060003+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:47.060239+0000 mon.a (mon.0) 2518 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:48 vm01 bash[20728]: audit 2026-03-09T16:00:47.060239+0000 mon.a (mon.0) 2518 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.073034+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.073034+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.073404+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.073404+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.082099+0000 mon.c (mon.2) 322 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.082099+0000 mon.c (mon.2) 322 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: cluster 2026-03-09T16:00:48.082211+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: cluster 2026-03-09T16:00:48.082211+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.082343+0000 mon.c (mon.2) 323 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.082343+0000 mon.c (mon.2) 323 : audit [INF] from='client.? 
192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.083400+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.083400+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.083520+0000 mon.a (mon.0) 2523 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: audit 2026-03-09T16:00:48.083520+0000 mon.a (mon.0) 2523 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: cluster 2026-03-09T16:00:48.738011+0000 mgr.y (mgr.14520) 287 : cluster [DBG] pgmap v439: 292 pgs: 23 unknown, 3 activating, 266 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: cluster 2026-03-09T16:00:48.738011+0000 mgr.y (mgr.14520) 287 : cluster [DBG] pgmap v439: 292 pgs: 23 unknown, 3 activating, 266 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: cluster 2026-03-09T16:00:48.839688+0000 mon.a (mon.0) 2524 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:49 vm09 bash[22983]: cluster 2026-03-09T16:00:48.839688+0000 mon.a (mon.0) 2524 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.073034+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.073034+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.073404+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.073404+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.082099+0000 mon.c (mon.2) 322 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.082099+0000 mon.c (mon.2) 322 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: cluster 2026-03-09T16:00:48.082211+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: cluster 2026-03-09T16:00:48.082211+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.082343+0000 mon.c (mon.2) 323 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.082343+0000 mon.c (mon.2) 323 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.083400+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.083400+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.083520+0000 mon.a (mon.0) 2523 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: audit 2026-03-09T16:00:48.083520+0000 mon.a (mon.0) 2523 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: cluster 2026-03-09T16:00:48.738011+0000 mgr.y (mgr.14520) 287 : cluster [DBG] pgmap v439: 292 pgs: 23 unknown, 3 activating, 266 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: cluster 2026-03-09T16:00:48.738011+0000 mgr.y (mgr.14520) 287 : cluster [DBG] pgmap v439: 292 pgs: 23 unknown, 3 activating, 266 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: cluster 2026-03-09T16:00:48.839688+0000 mon.a (mon.0) 2524 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:49 vm01 bash[28152]: cluster 2026-03-09T16:00:48.839688+0000 mon.a (mon.0) 2524 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.073034+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.073034+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.073404+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.073404+0000 mon.a (mon.0) 2520 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.082099+0000 mon.c (mon.2) 322 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.082099+0000 mon.c (mon.2) 322 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: cluster 2026-03-09T16:00:48.082211+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: cluster 2026-03-09T16:00:48.082211+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.082343+0000 mon.c (mon.2) 323 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.082343+0000 mon.c (mon.2) 323 : audit [INF] from='client.? 192.168.123.101:0/2336991733' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.083400+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.083400+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]: dispatch 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.083520+0000 mon.a (mon.0) 2523 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: audit 2026-03-09T16:00:48.083520+0000 mon.a (mon.0) 2523 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]: dispatch 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: cluster 2026-03-09T16:00:48.738011+0000 mgr.y (mgr.14520) 287 : cluster [DBG] pgmap v439: 292 pgs: 23 unknown, 3 activating, 266 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: cluster 2026-03-09T16:00:48.738011+0000 mgr.y (mgr.14520) 287 : cluster [DBG] pgmap v439: 292 pgs: 23 unknown, 3 activating, 266 active+clean; 8.3 MiB data, 762 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: cluster 2026-03-09T16:00:48.839688+0000 mon.a (mon.0) 2524 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:49 vm01 bash[20728]: cluster 2026-03-09T16:00:48.839688+0000 mon.a (mon.0) 2524 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: cluster 2026-03-09T16:00:49.114484+0000 mon.a (mon.0) 2525 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: cluster 2026-03-09T16:00:49.114484+0000 mon.a (mon.0) 2525 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.120457+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]': finished 2026-03-09T16:00:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.120457+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]': finished 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.120529+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.120529+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.132441+0000 mon.c (mon.2) 324 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.132441+0000 mon.c (mon.2) 324 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: cluster 2026-03-09T16:00:49.144997+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: cluster 2026-03-09T16:00:49.144997+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.146532+0000 mon.a (mon.0) 2529 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.146532+0000 mon.a (mon.0) 2529 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.153044+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.153044+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.154602+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.154602+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.155281+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.155281+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.157995+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.157995+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.158727+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.158727+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.159398+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:50 vm01 bash[28152]: audit 2026-03-09T16:00:49.159398+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: cluster 2026-03-09T16:00:49.114484+0000 mon.a (mon.0) 2525 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: cluster 2026-03-09T16:00:49.114484+0000 mon.a (mon.0) 2525 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.120457+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]': finished 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.120457+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]': finished 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.120529+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.120529+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.132441+0000 mon.c (mon.2) 324 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.132441+0000 mon.c (mon.2) 324 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: cluster 2026-03-09T16:00:49.144997+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: cluster 2026-03-09T16:00:49.144997+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.146532+0000 mon.a (mon.0) 2529 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.146532+0000 mon.a (mon.0) 2529 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.153044+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.153044+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.154602+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.154602+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.155281+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 
192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.155281+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.157995+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.157995+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.158727+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.158727+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.159398+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:50 vm01 bash[20728]: audit 2026-03-09T16:00:49.159398+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: cluster 2026-03-09T16:00:49.114484+0000 mon.a (mon.0) 2525 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: cluster 2026-03-09T16:00:49.114484+0000 mon.a (mon.0) 2525 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.120457+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]': finished 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.120457+0000 mon.a (mon.0) 2526 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-47", "mode": "writeback"}]': finished 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.120529+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.120529+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm01-59610-61"}]': finished 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.132441+0000 mon.c (mon.2) 324 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.132441+0000 mon.c (mon.2) 324 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: cluster 2026-03-09T16:00:49.144997+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: cluster 2026-03-09T16:00:49.144997+0000 mon.a (mon.0) 2528 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.146532+0000 mon.a (mon.0) 2529 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.146532+0000 mon.a (mon.0) 2529 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.153044+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.153044+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 
192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.154602+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.154602+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.155281+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.155281+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.157995+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.157995+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.158727+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.158727+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.159398+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:50.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:50 vm09 bash[22983]: audit 2026-03-09T16:00:49.159398+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.124759+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.124759+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.124878+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.124878+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.127491+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.127491+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.130862+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.130862+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: cluster 2026-03-09T16:00:50.139152+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: cluster 2026-03-09T16:00:50.139152+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.140856+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.140856+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.141315+0000 mon.a (mon.0) 2537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:50.141315+0000 mon.a (mon.0) 2537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: cluster 2026-03-09T16:00:50.738454+0000 mgr.y (mgr.14520) 288 : cluster [DBG] pgmap v442: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: cluster 2026-03-09T16:00:50.738454+0000 mgr.y (mgr.14520) 288 : cluster [DBG] pgmap v442: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:51.129876+0000 mon.a (mon.0) 2538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:51 vm01 bash[28152]: audit 2026-03-09T16:00:51.129876+0000 mon.a (mon.0) 2538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.124759+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.124759+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.124878+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.124878+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.127491+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.127491+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.130862+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.130862+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: cluster 2026-03-09T16:00:50.139152+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: cluster 2026-03-09T16:00:50.139152+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.140856+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.140856+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.141315+0000 mon.a (mon.0) 2537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:50.141315+0000 mon.a (mon.0) 2537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: cluster 2026-03-09T16:00:50.738454+0000 mgr.y (mgr.14520) 288 : cluster [DBG] pgmap v442: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: cluster 2026-03-09T16:00:50.738454+0000 mgr.y (mgr.14520) 288 : cluster [DBG] pgmap v442: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:51.129876+0000 mon.a (mon.0) 2538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:51 vm01 bash[20728]: audit 2026-03-09T16:00:51.129876+0000 mon.a (mon.0) 2538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.124759+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.124759+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.124878+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.124878+0000 mon.a (mon.0) 2534 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm01-59610-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.127491+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.127491+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.130862+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.130862+0000 mon.c (mon.2) 325 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: cluster 2026-03-09T16:00:50.139152+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: cluster 2026-03-09T16:00:50.139152+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.140856+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.140856+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.141315+0000 mon.a (mon.0) 2537 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:50.141315+0000 mon.a (mon.0) 2537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: cluster 2026-03-09T16:00:50.738454+0000 mgr.y (mgr.14520) 288 : cluster [DBG] pgmap v442: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: cluster 2026-03-09T16:00:50.738454+0000 mgr.y (mgr.14520) 288 : cluster [DBG] pgmap v442: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:51.129876+0000 mon.a (mon.0) 2538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:51 vm09 bash[22983]: audit 2026-03-09T16:00:51.129876+0000 mon.a (mon.0) 2538 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:00:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: audit 2026-03-09T16:00:51.136247+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: audit 2026-03-09T16:00:51.136247+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: cluster 2026-03-09T16:00:51.144377+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T16:00:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: cluster 2026-03-09T16:00:51.144377+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T16:00:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: cluster 2026-03-09T16:00:51.145802+0000 mon.a (mon.0) 2540 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T16:00:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: cluster 2026-03-09T16:00:51.145802+0000 mon.a (mon.0) 2540 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T16:00:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: audit 2026-03-09T16:00:51.146006+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: audit 2026-03-09T16:00:51.146006+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: cluster 2026-03-09T16:00:52.130165+0000 mon.a (mon.0) 2542 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: cluster 2026-03-09T16:00:52.130165+0000 mon.a (mon.0) 2542 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: audit 2026-03-09T16:00:52.146920+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: audit 2026-03-09T16:00:52.146920+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: audit 2026-03-09T16:00:52.147191+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: audit 2026-03-09T16:00:52.147191+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: audit 2026-03-09T16:00:52.157766+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: audit 2026-03-09T16:00:52.157766+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: cluster 2026-03-09T16:00:52.158965+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:52 vm01 bash[28152]: cluster 2026-03-09T16:00:52.158965+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: audit 2026-03-09T16:00:51.136247+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: audit 2026-03-09T16:00:51.136247+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: cluster 2026-03-09T16:00:51.144377+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: cluster 2026-03-09T16:00:51.144377+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: cluster 2026-03-09T16:00:51.145802+0000 mon.a (mon.0) 2540 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: cluster 2026-03-09T16:00:51.145802+0000 mon.a (mon.0) 2540 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: audit 2026-03-09T16:00:51.146006+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: audit 2026-03-09T16:00:51.146006+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: cluster 2026-03-09T16:00:52.130165+0000 mon.a (mon.0) 2542 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: cluster 2026-03-09T16:00:52.130165+0000 mon.a (mon.0) 2542 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: audit 2026-03-09T16:00:52.146920+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: audit 2026-03-09T16:00:52.146920+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: audit 2026-03-09T16:00:52.147191+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: audit 2026-03-09T16:00:52.147191+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: audit 2026-03-09T16:00:52.157766+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: audit 2026-03-09T16:00:52.157766+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: cluster 2026-03-09T16:00:52.158965+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T16:00:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:52 vm01 bash[20728]: cluster 2026-03-09T16:00:52.158965+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: audit 2026-03-09T16:00:51.136247+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: audit 2026-03-09T16:00:51.136247+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: cluster 2026-03-09T16:00:51.144377+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: cluster 2026-03-09T16:00:51.144377+0000 mon.a (mon.0) 2539 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: cluster 2026-03-09T16:00:51.145802+0000 mon.a (mon.0) 2540 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: cluster 2026-03-09T16:00:51.145802+0000 mon.a (mon.0) 2540 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: audit 2026-03-09T16:00:51.146006+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: audit 2026-03-09T16:00:51.146006+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: cluster 2026-03-09T16:00:52.130165+0000 mon.a (mon.0) 2542 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: cluster 2026-03-09T16:00:52.130165+0000 mon.a (mon.0) 2542 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: audit 2026-03-09T16:00:52.146920+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: audit 2026-03-09T16:00:52.146920+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm01-59610-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: audit 2026-03-09T16:00:52.147191+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: audit 2026-03-09T16:00:52.147191+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: audit 2026-03-09T16:00:52.157766+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: audit 2026-03-09T16:00:52.157766+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: cluster 2026-03-09T16:00:52.158965+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T16:00:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:52 vm09 bash[22983]: cluster 2026-03-09T16:00:52.158965+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-09T16:00:53.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:00:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:00:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:00:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:53 vm09 bash[22983]: audit 2026-03-09T16:00:52.161440+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:53 vm09 bash[22983]: audit 2026-03-09T16:00:52.161440+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:53 vm09 bash[22983]: cluster 2026-03-09T16:00:52.738928+0000 mgr.y (mgr.14520) 289 : cluster [DBG] pgmap v445: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.0 KiB/s wr, 4 op/s 2026-03-09T16:00:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:53 vm09 bash[22983]: cluster 2026-03-09T16:00:52.738928+0000 mgr.y (mgr.14520) 289 : cluster [DBG] pgmap v445: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.0 KiB/s wr, 4 op/s 2026-03-09T16:00:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:53 vm09 bash[22983]: audit 2026-03-09T16:00:53.150550+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:00:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:53 vm09 bash[22983]: audit 2026-03-09T16:00:53.150550+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:00:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:53 vm09 bash[22983]: cluster 2026-03-09T16:00:53.157965+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T16:00:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:53 vm09 bash[22983]: cluster 2026-03-09T16:00:53.157965+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T16:00:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:53 vm01 bash[28152]: audit 2026-03-09T16:00:52.161440+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:53 vm01 bash[28152]: audit 2026-03-09T16:00:52.161440+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:53 vm01 bash[28152]: cluster 2026-03-09T16:00:52.738928+0000 mgr.y (mgr.14520) 289 : cluster [DBG] pgmap v445: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.0 KiB/s wr, 4 op/s 2026-03-09T16:00:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:53 vm01 bash[28152]: cluster 2026-03-09T16:00:52.738928+0000 mgr.y (mgr.14520) 289 : cluster [DBG] pgmap v445: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.0 KiB/s wr, 4 op/s 2026-03-09T16:00:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:53 vm01 bash[28152]: audit 2026-03-09T16:00:53.150550+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:00:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:53 vm01 bash[28152]: audit 2026-03-09T16:00:53.150550+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:00:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:53 vm01 bash[28152]: cluster 2026-03-09T16:00:53.157965+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T16:00:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:53 vm01 bash[28152]: cluster 2026-03-09T16:00:53.157965+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T16:00:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:53 vm01 bash[20728]: audit 2026-03-09T16:00:52.161440+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:53 vm01 bash[20728]: audit 2026-03-09T16:00:52.161440+0000 mon.a (mon.0) 2546 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:00:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:53 vm01 bash[20728]: cluster 2026-03-09T16:00:52.738928+0000 mgr.y (mgr.14520) 289 : cluster [DBG] pgmap v445: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.0 KiB/s wr, 4 op/s 2026-03-09T16:00:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:53 vm01 bash[20728]: cluster 2026-03-09T16:00:52.738928+0000 mgr.y (mgr.14520) 289 : cluster [DBG] pgmap v445: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.0 KiB/s wr, 4 op/s 2026-03-09T16:00:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:53 vm01 bash[20728]: audit 2026-03-09T16:00:53.150550+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:00:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:53 vm01 bash[20728]: audit 2026-03-09T16:00:53.150550+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:00:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:53 vm01 bash[20728]: cluster 2026-03-09T16:00:53.157965+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T16:00:53.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:53 vm01 bash[20728]: cluster 2026-03-09T16:00:53.157965+0000 mon.a (mon.0) 2548 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T16:00:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:54 vm09 bash[22983]: audit 2026-03-09T16:00:53.167551+0000 mon.c (mon.2) 328 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:54 vm09 bash[22983]: audit 2026-03-09T16:00:53.167551+0000 mon.c (mon.2) 328 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:54 vm09 bash[22983]: audit 2026-03-09T16:00:53.176866+0000 mon.a (mon.0) 2549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:54 vm09 bash[22983]: audit 2026-03-09T16:00:53.176866+0000 mon.a (mon.0) 2549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:54 vm01 bash[28152]: audit 2026-03-09T16:00:53.167551+0000 mon.c (mon.2) 328 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:54 vm01 bash[28152]: audit 2026-03-09T16:00:53.167551+0000 mon.c (mon.2) 328 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:54 vm01 bash[28152]: audit 2026-03-09T16:00:53.176866+0000 mon.a (mon.0) 2549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:54 vm01 bash[28152]: audit 2026-03-09T16:00:53.176866+0000 mon.a (mon.0) 2549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:54.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:54 vm01 bash[20728]: audit 2026-03-09T16:00:53.167551+0000 mon.c (mon.2) 328 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:54 vm01 bash[20728]: audit 2026-03-09T16:00:53.167551+0000 mon.c (mon.2) 328 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:54 vm01 bash[20728]: audit 2026-03-09T16:00:53.176866+0000 mon.a (mon.0) 2549 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:54 vm01 bash[20728]: audit 2026-03-09T16:00:53.176866+0000 mon.a (mon.0) 2549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: audit 2026-03-09T16:00:54.197848+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: audit 2026-03-09T16:00:54.197848+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: audit 2026-03-09T16:00:54.199590+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: audit 2026-03-09T16:00:54.199590+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: cluster 2026-03-09T16:00:54.201675+0000 mon.a (mon.0) 2551 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: cluster 2026-03-09T16:00:54.201675+0000 mon.a (mon.0) 2551 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: audit 2026-03-09T16:00:54.205357+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: audit 2026-03-09T16:00:54.205357+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: audit 2026-03-09T16:00:54.208455+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: audit 2026-03-09T16:00:54.208455+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: audit 2026-03-09T16:00:54.216973+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: audit 2026-03-09T16:00:54.216973+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: cluster 2026-03-09T16:00:54.739312+0000 mgr.y (mgr.14520) 290 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:55 vm09 bash[22983]: cluster 2026-03-09T16:00:54.739312+0000 mgr.y (mgr.14520) 290 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: audit 2026-03-09T16:00:54.197848+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: audit 2026-03-09T16:00:54.197848+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: audit 2026-03-09T16:00:54.199590+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: audit 2026-03-09T16:00:54.199590+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: cluster 2026-03-09T16:00:54.201675+0000 mon.a (mon.0) 2551 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: cluster 2026-03-09T16:00:54.201675+0000 mon.a (mon.0) 2551 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: audit 2026-03-09T16:00:54.205357+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: audit 2026-03-09T16:00:54.205357+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: audit 2026-03-09T16:00:54.208455+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: audit 2026-03-09T16:00:54.208455+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: audit 2026-03-09T16:00:54.216973+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: audit 2026-03-09T16:00:54.216973+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: cluster 2026-03-09T16:00:54.739312+0000 mgr.y (mgr.14520) 290 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:55 vm01 bash[28152]: cluster 2026-03-09T16:00:54.739312+0000 mgr.y (mgr.14520) 290 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: audit 2026-03-09T16:00:54.197848+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: audit 2026-03-09T16:00:54.197848+0000 mon.a (mon.0) 2550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: audit 2026-03-09T16:00:54.199590+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 
192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: audit 2026-03-09T16:00:54.199590+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: cluster 2026-03-09T16:00:54.201675+0000 mon.a (mon.0) 2551 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T16:00:55.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: cluster 2026-03-09T16:00:54.201675+0000 mon.a (mon.0) 2551 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T16:00:55.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: audit 2026-03-09T16:00:54.205357+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: audit 2026-03-09T16:00:54.205357+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:55.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: audit 2026-03-09T16:00:54.208455+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: audit 2026-03-09T16:00:54.208455+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: audit 2026-03-09T16:00:54.216973+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: audit 2026-03-09T16:00:54.216973+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:00:55.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: cluster 2026-03-09T16:00:54.739312+0000 mgr.y (mgr.14520) 290 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:55.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:55 vm01 bash[20728]: cluster 2026-03-09T16:00:54.739312+0000 mgr.y (mgr.14520) 290 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:56.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:00:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.200951+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.200951+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.201374+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.201374+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.201478+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.201478+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: cluster 2026-03-09T16:00:55.207133+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: cluster 2026-03-09T16:00:55.207133+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: cluster 2026-03-09T16:00:55.210474+0000 mon.a (mon.0) 2557 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: cluster 2026-03-09T16:00:55.210474+0000 mon.a (mon.0) 2557 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.212214+0000 mon.a (mon.0) 2558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.212214+0000 mon.a (mon.0) 2558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.261609+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.261609+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.261895+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:55.261895+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:56.205080+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:56.205080+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:56.205114+0000 mon.a (mon.0) 2561 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:56.205114+0000 mon.a (mon.0) 2561 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:56.208391+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:56.208391+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: cluster 2026-03-09T16:00:56.212782+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: cluster 2026-03-09T16:00:56.212782+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:56.219151+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:56 vm09 bash[22983]: audit 2026-03-09T16:00:56.219151+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:56.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.200951+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.200951+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.201374+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.201374+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.201478+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.201478+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: cluster 2026-03-09T16:00:55.207133+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: cluster 2026-03-09T16:00:55.207133+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: cluster 2026-03-09T16:00:55.210474+0000 mon.a (mon.0) 2557 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: cluster 2026-03-09T16:00:55.210474+0000 mon.a (mon.0) 2557 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.212214+0000 mon.a (mon.0) 2558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.212214+0000 mon.a (mon.0) 2558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.261609+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.261609+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.261895+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:55.261895+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:56.205080+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.200951+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.200951+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.101:0/2148064415' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.201374+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.201374+0000 mon.a (mon.0) 2554 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.201478+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.201478+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: cluster 2026-03-09T16:00:55.207133+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: cluster 2026-03-09T16:00:55.207133+0000 mon.a (mon.0) 2556 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: cluster 2026-03-09T16:00:55.210474+0000 mon.a (mon.0) 2557 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: cluster 2026-03-09T16:00:55.210474+0000 mon.a (mon.0) 2557 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.212214+0000 mon.a (mon.0) 2558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.212214+0000 mon.a (mon.0) 2558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.261609+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.261609+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.261895+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:55.261895+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:56.205080+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:56.205080+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:56.205114+0000 mon.a (mon.0) 2561 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:56.205114+0000 mon.a (mon.0) 2561 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:56.208391+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:56.208391+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: cluster 2026-03-09T16:00:56.212782+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: cluster 2026-03-09T16:00:56.212782+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:56.219151+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:56 vm01 bash[28152]: audit 2026-03-09T16:00:56.219151+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:56.205080+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm01-59610-62"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:56.205114+0000 mon.a (mon.0) 2561 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:56.205114+0000 mon.a (mon.0) 2561 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:56.208391+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:56.208391+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: cluster 2026-03-09T16:00:56.212782+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: cluster 2026-03-09T16:00:56.212782+0000 mon.a (mon.0) 2562 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T16:00:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:56.219151+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:56.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:56 vm01 bash[20728]: audit 2026-03-09T16:00:56.219151+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:56.233756+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:56.233756+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:56.234213+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:56.234213+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:56.234413+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? 
192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:56.234413+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:56.435499+0000 mgr.y (mgr.14520) 291 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:56.435499+0000 mgr.y (mgr.14520) 291 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: cluster 2026-03-09T16:00:56.739623+0000 mgr.y (mgr.14520) 292 : cluster [DBG] pgmap v451: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: cluster 2026-03-09T16:00:56.739623+0000 mgr.y (mgr.14520) 292 : cluster [DBG] pgmap v451: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:57.208722+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:57.208722+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:57.208839+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:57.208839+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? 
192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: cluster 2026-03-09T16:00:57.212767+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: cluster 2026-03-09T16:00:57.212767+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:57.213473+0000 mon.a (mon.0) 2570 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:57 vm09 bash[22983]: audit 2026-03-09T16:00:57.213473+0000 mon.a (mon.0) 2570 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:56.233756+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:56.233756+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:56.234213+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:56.234213+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:56.234413+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:56.234413+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? 
192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:56.435499+0000 mgr.y (mgr.14520) 291 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:56.435499+0000 mgr.y (mgr.14520) 291 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: cluster 2026-03-09T16:00:56.739623+0000 mgr.y (mgr.14520) 292 : cluster [DBG] pgmap v451: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: cluster 2026-03-09T16:00:56.739623+0000 mgr.y (mgr.14520) 292 : cluster [DBG] pgmap v451: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:57.208722+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:57.208722+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:57.208839+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:57.208839+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: cluster 2026-03-09T16:00:57.212767+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: cluster 2026-03-09T16:00:57.212767+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:57.213473+0000 mon.a (mon.0) 2570 : audit [INF] from='client.? 
192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:57 vm01 bash[28152]: audit 2026-03-09T16:00:57.213473+0000 mon.a (mon.0) 2570 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:56.233756+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:56.233756+0000 mon.a (mon.0) 2564 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:56.234213+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:56.234213+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:56.234413+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:56.234413+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? 
192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:56.435499+0000 mgr.y (mgr.14520) 291 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:56.435499+0000 mgr.y (mgr.14520) 291 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: cluster 2026-03-09T16:00:56.739623+0000 mgr.y (mgr.14520) 292 : cluster [DBG] pgmap v451: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: cluster 2026-03-09T16:00:56.739623+0000 mgr.y (mgr.14520) 292 : cluster [DBG] pgmap v451: 292 pgs: 292 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:57.208722+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:57.208722+0000 mon.a (mon.0) 2567 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]': finished 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:57.208839+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:57.208839+0000 mon.a (mon.0) 2568 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm01-59610-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: cluster 2026-03-09T16:00:57.212767+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: cluster 2026-03-09T16:00:57.212767+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:57.213473+0000 mon.a (mon.0) 2570 : audit [INF] from='client.? 
192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:57.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:57 vm01 bash[20728]: audit 2026-03-09T16:00:57.213473+0000 mon.a (mon.0) 2570 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:00:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:58 vm09 bash[22983]: audit 2026-03-09T16:00:57.263437+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:58 vm09 bash[22983]: audit 2026-03-09T16:00:57.263437+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:58 vm09 bash[22983]: audit 2026-03-09T16:00:57.263746+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:58 vm09 bash[22983]: audit 2026-03-09T16:00:57.263746+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:58 vm09 bash[22983]: audit 2026-03-09T16:00:57.264049+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:58 vm09 bash[22983]: audit 2026-03-09T16:00:57.264049+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:58 vm09 bash[22983]: audit 2026-03-09T16:00:57.264207+0000 mon.a (mon.0) 2572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:58 vm09 bash[22983]: audit 2026-03-09T16:00:57.264207+0000 mon.a (mon.0) 2572 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:58 vm09 bash[22983]: cluster 2026-03-09T16:00:58.224109+0000 mon.a (mon.0) 2573 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T16:00:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:58 vm09 bash[22983]: cluster 2026-03-09T16:00:58.224109+0000 mon.a (mon.0) 2573 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T16:00:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:58 vm01 bash[28152]: audit 2026-03-09T16:00:57.263437+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:58 vm01 bash[28152]: audit 2026-03-09T16:00:57.263437+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:58 vm01 bash[28152]: audit 2026-03-09T16:00:57.263746+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:58 vm01 bash[28152]: audit 2026-03-09T16:00:57.263746+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:58 vm01 bash[28152]: audit 2026-03-09T16:00:57.264049+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:58 vm01 bash[28152]: audit 2026-03-09T16:00:57.264049+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:58 vm01 bash[28152]: audit 2026-03-09T16:00:57.264207+0000 mon.a (mon.0) 2572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:58 vm01 bash[28152]: audit 2026-03-09T16:00:57.264207+0000 mon.a (mon.0) 2572 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:58 vm01 bash[28152]: cluster 2026-03-09T16:00:58.224109+0000 mon.a (mon.0) 2573 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:58 vm01 bash[28152]: cluster 2026-03-09T16:00:58.224109+0000 mon.a (mon.0) 2573 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:58 vm01 bash[20728]: audit 2026-03-09T16:00:57.263437+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:58 vm01 bash[20728]: audit 2026-03-09T16:00:57.263437+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:58 vm01 bash[20728]: audit 2026-03-09T16:00:57.263746+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:58 vm01 bash[20728]: audit 2026-03-09T16:00:57.263746+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:58 vm01 bash[20728]: audit 2026-03-09T16:00:57.264049+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:58 vm01 bash[20728]: audit 2026-03-09T16:00:57.264049+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:58 vm01 bash[20728]: audit 2026-03-09T16:00:57.264207+0000 mon.a (mon.0) 2572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:58 vm01 bash[20728]: audit 2026-03-09T16:00:57.264207+0000 mon.a (mon.0) 2572 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-47"}]: dispatch 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:58 vm01 bash[20728]: cluster 2026-03-09T16:00:58.224109+0000 mon.a (mon.0) 2573 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T16:00:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:58 vm01 bash[20728]: cluster 2026-03-09T16:00:58.224109+0000 mon.a (mon.0) 2573 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T16:00:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:59 vm09 bash[22983]: cluster 2026-03-09T16:00:58.740179+0000 mgr.y (mgr.14520) 293 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:59 vm09 bash[22983]: cluster 2026-03-09T16:00:58.740179+0000 mgr.y (mgr.14520) 293 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:59 vm09 bash[22983]: audit 2026-03-09T16:00:59.175912+0000 mon.a (mon.0) 2574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:59 vm09 bash[22983]: audit 2026-03-09T16:00:59.175912+0000 mon.a (mon.0) 2574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:59 vm09 bash[22983]: audit 2026-03-09T16:00:59.215867+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:00:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:00:59 vm09 bash[22983]: audit 2026-03-09T16:00:59.215867+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? 
192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:00:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:59 vm01 bash[28152]: cluster 2026-03-09T16:00:58.740179+0000 mgr.y (mgr.14520) 293 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:59 vm01 bash[28152]: cluster 2026-03-09T16:00:58.740179+0000 mgr.y (mgr.14520) 293 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:59 vm01 bash[28152]: audit 2026-03-09T16:00:59.175912+0000 mon.a (mon.0) 2574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:59 vm01 bash[28152]: audit 2026-03-09T16:00:59.175912+0000 mon.a (mon.0) 2574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:59 vm01 bash[28152]: audit 2026-03-09T16:00:59.215867+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:00:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:00:59 vm01 bash[28152]: audit 2026-03-09T16:00:59.215867+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:00:59.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:59 vm01 bash[20728]: cluster 2026-03-09T16:00:58.740179+0000 mgr.y (mgr.14520) 293 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:59.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:59 vm01 bash[20728]: cluster 2026-03-09T16:00:58.740179+0000 mgr.y (mgr.14520) 293 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:00:59.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:59 vm01 bash[20728]: audit 2026-03-09T16:00:59.175912+0000 mon.a (mon.0) 2574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:59.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:59 vm01 bash[20728]: audit 2026-03-09T16:00:59.175912+0000 mon.a (mon.0) 2574 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:00:59.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:59 vm01 bash[20728]: audit 2026-03-09T16:00:59.215867+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? 
192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:00:59.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:00:59 vm01 bash[20728]: audit 2026-03-09T16:00:59.215867+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm01-59610-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:00 vm09 bash[22983]: cluster 2026-03-09T16:00:59.239728+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T16:01:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:00 vm09 bash[22983]: cluster 2026-03-09T16:00:59.239728+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T16:01:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:00 vm09 bash[22983]: audit 2026-03-09T16:00:59.243263+0000 mon.c (mon.2) 334 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:00 vm09 bash[22983]: audit 2026-03-09T16:00:59.243263+0000 mon.c (mon.2) 334 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:00 vm09 bash[22983]: audit 2026-03-09T16:00:59.249190+0000 mon.a (mon.0) 2577 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:00 vm09 bash[22983]: audit 2026-03-09T16:00:59.249190+0000 mon.a (mon.0) 2577 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:00 vm09 bash[22983]: audit 2026-03-09T16:01:00.220143+0000 mon.a (mon.0) 2578 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:00 vm09 bash[22983]: audit 2026-03-09T16:01:00.220143+0000 mon.a (mon.0) 2578 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:00 vm09 bash[22983]: cluster 2026-03-09T16:01:00.232824+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T16:01:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:00 vm09 bash[22983]: cluster 2026-03-09T16:01:00.232824+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T16:01:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:00 vm01 bash[28152]: cluster 2026-03-09T16:00:59.239728+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T16:01:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:00 vm01 bash[28152]: cluster 2026-03-09T16:00:59.239728+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T16:01:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:00 vm01 bash[28152]: audit 2026-03-09T16:00:59.243263+0000 mon.c (mon.2) 334 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:00 vm01 bash[28152]: audit 2026-03-09T16:00:59.243263+0000 mon.c (mon.2) 334 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:00 vm01 bash[28152]: audit 2026-03-09T16:00:59.249190+0000 mon.a (mon.0) 2577 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:00 vm01 bash[28152]: audit 2026-03-09T16:00:59.249190+0000 mon.a (mon.0) 2577 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:00 vm01 bash[28152]: audit 2026-03-09T16:01:00.220143+0000 mon.a (mon.0) 2578 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:00 vm01 bash[28152]: audit 2026-03-09T16:01:00.220143+0000 mon.a (mon.0) 2578 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:00 vm01 bash[28152]: cluster 2026-03-09T16:01:00.232824+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:00 vm01 bash[28152]: cluster 2026-03-09T16:01:00.232824+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:00 vm01 bash[20728]: cluster 2026-03-09T16:00:59.239728+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:00 vm01 bash[20728]: cluster 2026-03-09T16:00:59.239728+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:00 vm01 bash[20728]: audit 2026-03-09T16:00:59.243263+0000 mon.c (mon.2) 334 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:00 vm01 bash[20728]: audit 2026-03-09T16:00:59.243263+0000 mon.c (mon.2) 334 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:00 vm01 bash[20728]: audit 2026-03-09T16:00:59.249190+0000 mon.a (mon.0) 2577 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:00 vm01 bash[20728]: audit 2026-03-09T16:00:59.249190+0000 mon.a (mon.0) 2577 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:00 vm01 bash[20728]: audit 2026-03-09T16:01:00.220143+0000 mon.a (mon.0) 2578 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:00 vm01 bash[20728]: audit 2026-03-09T16:01:00.220143+0000 mon.a (mon.0) 2578 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:00 vm01 bash[20728]: cluster 2026-03-09T16:01:00.232824+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T16:01:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:00 vm01 bash[20728]: cluster 2026-03-09T16:01:00.232824+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:00.257871+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:00.257871+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:00.266452+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:00.266452+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: cluster 2026-03-09T16:01:00.740598+0000 mgr.y (mgr.14520) 294 : cluster [DBG] pgmap v457: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: cluster 2026-03-09T16:01:00.740598+0000 mgr.y (mgr.14520) 294 : cluster [DBG] pgmap v457: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:01.223372+0000 mon.a (mon.0) 2581 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:01.223372+0000 mon.a (mon.0) 2581 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: cluster 2026-03-09T16:01:01.229755+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: cluster 2026-03-09T16:01:01.229755+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:01.233320+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:01.233320+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:01.237262+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:01.237262+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:01.237338+0000 mon.a (mon.0) 2584 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:01 vm09 bash[22983]: audit 2026-03-09T16:01:01.237338+0000 mon.a (mon.0) 2584 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:01.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:00.257871+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:00.257871+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:00.266452+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:00.266452+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: cluster 2026-03-09T16:01:00.740598+0000 mgr.y (mgr.14520) 294 : cluster [DBG] pgmap v457: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: cluster 2026-03-09T16:01:00.740598+0000 mgr.y (mgr.14520) 294 : cluster [DBG] pgmap v457: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:01.223372+0000 mon.a (mon.0) 2581 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:01.223372+0000 mon.a (mon.0) 2581 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: cluster 2026-03-09T16:01:01.229755+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: cluster 2026-03-09T16:01:01.229755+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:01.233320+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:01.233320+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:01.237262+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:01.237262+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:01.237338+0000 mon.a (mon.0) 2584 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:01 vm01 bash[20728]: audit 2026-03-09T16:01:01.237338+0000 mon.a (mon.0) 2584 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:00.257871+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:00.257871+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:00.266452+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:00.266452+0000 mon.a (mon.0) 2580 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: cluster 2026-03-09T16:01:00.740598+0000 mgr.y (mgr.14520) 294 : cluster [DBG] pgmap v457: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: cluster 2026-03-09T16:01:00.740598+0000 mgr.y (mgr.14520) 294 : cluster [DBG] pgmap v457: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:01.223372+0000 mon.a (mon.0) 2581 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:01.223372+0000 mon.a (mon.0) 2581 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: cluster 2026-03-09T16:01:01.229755+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: cluster 2026-03-09T16:01:01.229755+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:01.233320+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:01.233320+0000 mon.c (mon.2) 336 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:01.237262+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:01.237262+0000 mon.a (mon.0) 2583 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:01.237338+0000 mon.a (mon.0) 2584 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:01.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:01 vm01 bash[28152]: audit 2026-03-09T16:01:01.237338+0000 mon.a (mon.0) 2584 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: cluster 2026-03-09T16:01:01.258813+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: cluster 2026-03-09T16:01:01.258813+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: audit 2026-03-09T16:01:02.227225+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: audit 2026-03-09T16:01:02.227225+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: audit 2026-03-09T16:01:02.227321+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: audit 2026-03-09T16:01:02.227321+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: cluster 2026-03-09T16:01:02.235868+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: cluster 2026-03-09T16:01:02.235868+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: audit 2026-03-09T16:01:02.236285+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 
192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: audit 2026-03-09T16:01:02.236285+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: audit 2026-03-09T16:01:02.236550+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: audit 2026-03-09T16:01:02.236550+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: audit 2026-03-09T16:01:02.236797+0000 mon.a (mon.0) 2590 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:02 vm09 bash[22983]: audit 2026-03-09T16:01:02.236797+0000 mon.a (mon.0) 2590 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:02.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: cluster 2026-03-09T16:01:01.258813+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:02.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: cluster 2026-03-09T16:01:01.258813+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:02.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: audit 2026-03-09T16:01:02.227225+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: audit 2026-03-09T16:01:02.227225+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: audit 2026-03-09T16:01:02.227321+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: audit 2026-03-09T16:01:02.227321+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? 
192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: cluster 2026-03-09T16:01:02.235868+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: cluster 2026-03-09T16:01:02.235868+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: audit 2026-03-09T16:01:02.236285+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: audit 2026-03-09T16:01:02.236285+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: audit 2026-03-09T16:01:02.236550+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: audit 2026-03-09T16:01:02.236550+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: audit 2026-03-09T16:01:02.236797+0000 mon.a (mon.0) 2590 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:02 vm01 bash[28152]: audit 2026-03-09T16:01:02.236797+0000 mon.a (mon.0) 2590 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: cluster 2026-03-09T16:01:01.258813+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: cluster 2026-03-09T16:01:01.258813+0000 mon.a (mon.0) 2585 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: audit 2026-03-09T16:01:02.227225+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: audit 2026-03-09T16:01:02.227225+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: audit 2026-03-09T16:01:02.227321+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: audit 2026-03-09T16:01:02.227321+0000 mon.a (mon.0) 2587 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: cluster 2026-03-09T16:01:02.235868+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: cluster 2026-03-09T16:01:02.235868+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: audit 2026-03-09T16:01:02.236285+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: audit 2026-03-09T16:01:02.236285+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]: dispatch 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: audit 2026-03-09T16:01:02.236550+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: audit 2026-03-09T16:01:02.236550+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: audit 2026-03-09T16:01:02.236797+0000 mon.a (mon.0) 2590 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:02.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:02 vm01 bash[20728]: audit 2026-03-09T16:01:02.236797+0000 mon.a (mon.0) 2590 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]: dispatch 2026-03-09T16:01:03.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:01:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:01:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: cluster 2026-03-09T16:01:02.741018+0000 mgr.y (mgr.14520) 295 : cluster [DBG] pgmap v460: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: cluster 2026-03-09T16:01:02.741018+0000 mgr.y (mgr.14520) 295 : cluster [DBG] pgmap v460: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: cluster 2026-03-09T16:01:03.228005+0000 mon.a (mon.0) 2591 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: cluster 2026-03-09T16:01:03.228005+0000 mon.a (mon.0) 2591 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.232686+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.232686+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.232956+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]': finished 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.232956+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]': finished 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: cluster 2026-03-09T16:01:03.244412+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: cluster 2026-03-09T16:01:03.244412+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.257531+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 
192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.257531+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.261453+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.261453+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.261836+0000 mon.a (mon.0) 2595 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.261836+0000 mon.a (mon.0) 2595 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.263663+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.263663+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.266607+0000 mon.a (mon.0) 2596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.266607+0000 mon.a (mon.0) 2596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.268814+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:03 vm09 bash[22983]: audit 2026-03-09T16:01:03.268814+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: cluster 2026-03-09T16:01:02.741018+0000 mgr.y (mgr.14520) 295 : cluster [DBG] pgmap v460: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: cluster 2026-03-09T16:01:02.741018+0000 mgr.y (mgr.14520) 295 : cluster [DBG] pgmap v460: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: cluster 2026-03-09T16:01:03.228005+0000 mon.a (mon.0) 2591 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: cluster 2026-03-09T16:01:03.228005+0000 mon.a (mon.0) 2591 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.232686+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.232686+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.232956+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]': finished 2026-03-09T16:01:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.232956+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]': finished 2026-03-09T16:01:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: cluster 2026-03-09T16:01:03.244412+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T16:01:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: cluster 2026-03-09T16:01:03.244412+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T16:01:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.257531+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 
192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.257531+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.261453+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.261453+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.261836+0000 mon.a (mon.0) 2595 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.261836+0000 mon.a (mon.0) 2595 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.263663+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.263663+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.266607+0000 mon.a (mon.0) 2596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.266607+0000 mon.a (mon.0) 2596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.268814+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:03 vm01 bash[28152]: audit 2026-03-09T16:01:03.268814+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: cluster 2026-03-09T16:01:02.741018+0000 mgr.y (mgr.14520) 295 : cluster [DBG] pgmap v460: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: cluster 2026-03-09T16:01:02.741018+0000 mgr.y (mgr.14520) 295 : cluster [DBG] pgmap v460: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 786 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: cluster 2026-03-09T16:01:03.228005+0000 mon.a (mon.0) 2591 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: cluster 2026-03-09T16:01:03.228005+0000 mon.a (mon.0) 2591 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.232686+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.232686+0000 mon.a (mon.0) 2592 : audit [INF] from='client.? 192.168.123.101:0/172173095' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm01-59610-63"}]': finished 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.232956+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]': finished 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.232956+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-49", "mode": "readproxy"}]': finished 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: cluster 2026-03-09T16:01:03.244412+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: cluster 2026-03-09T16:01:03.244412+0000 mon.a (mon.0) 2594 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.257531+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 
192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.257531+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.261453+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.261453+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.261836+0000 mon.a (mon.0) 2595 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.261836+0000 mon.a (mon.0) 2595 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.263663+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.263663+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.266607+0000 mon.a (mon.0) 2596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.266607+0000 mon.a (mon.0) 2596 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.268814+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:03 vm01 bash[20728]: audit 2026-03-09T16:01:03.268814+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:05 vm09 bash[22983]: audit 2026-03-09T16:01:04.235964+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:05 vm09 bash[22983]: audit 2026-03-09T16:01:04.235964+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:05 vm09 bash[22983]: audit 2026-03-09T16:01:04.236486+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:05 vm09 bash[22983]: audit 2026-03-09T16:01:04.236486+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:05 vm09 bash[22983]: cluster 2026-03-09T16:01:04.240390+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T16:01:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:05 vm09 bash[22983]: cluster 2026-03-09T16:01:04.240390+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T16:01:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:05 vm09 bash[22983]: audit 2026-03-09T16:01:04.241239+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:05 vm09 bash[22983]: audit 2026-03-09T16:01:04.241239+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:05 vm09 bash[22983]: cluster 2026-03-09T16:01:04.741479+0000 mgr.y (mgr.14520) 296 : cluster [DBG] pgmap v463: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:05 vm09 bash[22983]: cluster 2026-03-09T16:01:04.741479+0000 mgr.y (mgr.14520) 296 : cluster [DBG] pgmap v463: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:05 vm01 bash[28152]: audit 2026-03-09T16:01:04.235964+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:05 vm01 bash[28152]: audit 2026-03-09T16:01:04.235964+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:05 vm01 bash[28152]: audit 2026-03-09T16:01:04.236486+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:05 vm01 bash[28152]: audit 2026-03-09T16:01:04.236486+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:05 vm01 bash[28152]: cluster 2026-03-09T16:01:04.240390+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T16:01:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:05 vm01 bash[28152]: cluster 2026-03-09T16:01:04.240390+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T16:01:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:05 vm01 bash[28152]: audit 2026-03-09T16:01:04.241239+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:05 vm01 bash[28152]: audit 2026-03-09T16:01:04.241239+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:05 vm01 bash[28152]: cluster 2026-03-09T16:01:04.741479+0000 mgr.y (mgr.14520) 296 : cluster [DBG] pgmap v463: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:05 vm01 bash[28152]: cluster 2026-03-09T16:01:04.741479+0000 mgr.y (mgr.14520) 296 : cluster [DBG] pgmap v463: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:05.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:05 vm01 bash[20728]: audit 2026-03-09T16:01:04.235964+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:05 vm01 bash[20728]: audit 2026-03-09T16:01:04.235964+0000 mon.a (mon.0) 2598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm01-59610-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:05 vm01 bash[20728]: audit 2026-03-09T16:01:04.236486+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:05 vm01 bash[20728]: audit 2026-03-09T16:01:04.236486+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:05 vm01 bash[20728]: cluster 2026-03-09T16:01:04.240390+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T16:01:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:05 vm01 bash[20728]: cluster 2026-03-09T16:01:04.240390+0000 mon.a (mon.0) 2599 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T16:01:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:05 vm01 bash[20728]: audit 2026-03-09T16:01:04.241239+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:05 vm01 bash[20728]: audit 2026-03-09T16:01:04.241239+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:05 vm01 bash[20728]: cluster 2026-03-09T16:01:04.741479+0000 mgr.y (mgr.14520) 296 : cluster [DBG] pgmap v463: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:05 vm01 bash[20728]: cluster 2026-03-09T16:01:04.741479+0000 mgr.y (mgr.14520) 296 : cluster [DBG] pgmap v463: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:06.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:01:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:01:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:06 vm09 bash[22983]: cluster 2026-03-09T16:01:05.251973+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T16:01:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:06 vm09 bash[22983]: cluster 2026-03-09T16:01:05.251973+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T16:01:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:06 vm09 bash[22983]: audit 2026-03-09T16:01:06.242722+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:06 vm09 bash[22983]: audit 2026-03-09T16:01:06.242722+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:06 vm09 bash[22983]: cluster 2026-03-09T16:01:06.255314+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T16:01:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:06 vm09 bash[22983]: cluster 2026-03-09T16:01:06.255314+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:06 vm01 bash[28152]: cluster 2026-03-09T16:01:05.251973+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:06 vm01 bash[28152]: cluster 2026-03-09T16:01:05.251973+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:06 vm01 bash[28152]: audit 2026-03-09T16:01:06.242722+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:06 vm01 bash[28152]: audit 2026-03-09T16:01:06.242722+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:06 vm01 bash[28152]: cluster 2026-03-09T16:01:06.255314+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:06 vm01 bash[28152]: cluster 2026-03-09T16:01:06.255314+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:06 vm01 bash[20728]: cluster 2026-03-09T16:01:05.251973+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:06 vm01 bash[20728]: cluster 2026-03-09T16:01:05.251973+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:06 vm01 bash[20728]: audit 2026-03-09T16:01:06.242722+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:06 vm01 bash[20728]: audit 2026-03-09T16:01:06.242722+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm01-59610-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:06 vm01 bash[20728]: cluster 2026-03-09T16:01:06.255314+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T16:01:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:06 vm01 bash[20728]: cluster 2026-03-09T16:01:06.255314+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T16:01:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:07 vm09 bash[22983]: audit 2026-03-09T16:01:06.443325+0000 mgr.y (mgr.14520) 297 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:07 vm09 bash[22983]: audit 2026-03-09T16:01:06.443325+0000 mgr.y (mgr.14520) 297 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:07 vm09 bash[22983]: cluster 2026-03-09T16:01:06.741873+0000 mgr.y (mgr.14520) 298 : cluster [DBG] pgmap v466: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:07 vm09 bash[22983]: cluster 2026-03-09T16:01:06.741873+0000 mgr.y (mgr.14520) 298 : cluster [DBG] pgmap v466: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:07 vm09 bash[22983]: cluster 2026-03-09T16:01:07.259853+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T16:01:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:07 vm09 bash[22983]: cluster 2026-03-09T16:01:07.259853+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:07 vm01 bash[28152]: audit 2026-03-09T16:01:06.443325+0000 mgr.y (mgr.14520) 297 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:07 vm01 bash[28152]: audit 2026-03-09T16:01:06.443325+0000 mgr.y (mgr.14520) 297 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:07 vm01 bash[28152]: cluster 2026-03-09T16:01:06.741873+0000 mgr.y (mgr.14520) 298 : cluster [DBG] pgmap v466: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:07 vm01 bash[28152]: cluster 2026-03-09T16:01:06.741873+0000 mgr.y (mgr.14520) 298 : cluster [DBG] pgmap v466: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:07 vm01 bash[28152]: cluster 
2026-03-09T16:01:07.259853+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:07 vm01 bash[28152]: cluster 2026-03-09T16:01:07.259853+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:07 vm01 bash[20728]: audit 2026-03-09T16:01:06.443325+0000 mgr.y (mgr.14520) 297 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:07 vm01 bash[20728]: audit 2026-03-09T16:01:06.443325+0000 mgr.y (mgr.14520) 297 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:07 vm01 bash[20728]: cluster 2026-03-09T16:01:06.741873+0000 mgr.y (mgr.14520) 298 : cluster [DBG] pgmap v466: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:07 vm01 bash[20728]: cluster 2026-03-09T16:01:06.741873+0000 mgr.y (mgr.14520) 298 : cluster [DBG] pgmap v466: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:07 vm01 bash[20728]: cluster 2026-03-09T16:01:07.259853+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T16:01:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:07 vm01 bash[20728]: cluster 2026-03-09T16:01:07.259853+0000 mon.a (mon.0) 2604 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T16:01:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:08 vm09 bash[22983]: cluster 2026-03-09T16:01:07.267827+0000 mon.a (mon.0) 2605 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:08 vm09 bash[22983]: cluster 2026-03-09T16:01:07.267827+0000 mon.a (mon.0) 2605 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:08 vm09 bash[22983]: cluster 2026-03-09T16:01:08.281058+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T16:01:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:08 vm09 bash[22983]: cluster 2026-03-09T16:01:08.281058+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T16:01:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:08 vm01 bash[28152]: cluster 2026-03-09T16:01:07.267827+0000 mon.a (mon.0) 2605 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:08 vm01 bash[28152]: cluster 2026-03-09T16:01:07.267827+0000 mon.a (mon.0) 2605 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:08 vm01 bash[28152]: cluster 2026-03-09T16:01:08.281058+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e337: 8 
total, 8 up, 8 in 2026-03-09T16:01:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:08 vm01 bash[28152]: cluster 2026-03-09T16:01:08.281058+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T16:01:08.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:08 vm01 bash[20728]: cluster 2026-03-09T16:01:07.267827+0000 mon.a (mon.0) 2605 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:08.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:08 vm01 bash[20728]: cluster 2026-03-09T16:01:07.267827+0000 mon.a (mon.0) 2605 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:08.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:08 vm01 bash[20728]: cluster 2026-03-09T16:01:08.281058+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T16:01:08.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:08 vm01 bash[20728]: cluster 2026-03-09T16:01:08.281058+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T16:01:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:09 vm09 bash[22983]: audit 2026-03-09T16:01:08.279948+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:09 vm09 bash[22983]: audit 2026-03-09T16:01:08.279948+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:09 vm09 bash[22983]: audit 2026-03-09T16:01:08.292259+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:09 vm09 bash[22983]: audit 2026-03-09T16:01:08.292259+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:09 vm09 bash[22983]: cluster 2026-03-09T16:01:08.742393+0000 mgr.y (mgr.14520) 299 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:09 vm09 bash[22983]: cluster 2026-03-09T16:01:08.742393+0000 mgr.y (mgr.14520) 299 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:09.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:09 vm01 bash[28152]: audit 2026-03-09T16:01:08.279948+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:09 vm01 bash[28152]: audit 2026-03-09T16:01:08.279948+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 
192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:09 vm01 bash[28152]: audit 2026-03-09T16:01:08.292259+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:09 vm01 bash[28152]: audit 2026-03-09T16:01:08.292259+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:09 vm01 bash[28152]: cluster 2026-03-09T16:01:08.742393+0000 mgr.y (mgr.14520) 299 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:09.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:09 vm01 bash[28152]: cluster 2026-03-09T16:01:08.742393+0000 mgr.y (mgr.14520) 299 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:09.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:09 vm01 bash[20728]: audit 2026-03-09T16:01:08.279948+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:09 vm01 bash[20728]: audit 2026-03-09T16:01:08.279948+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:09 vm01 bash[20728]: audit 2026-03-09T16:01:08.292259+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:09 vm01 bash[20728]: audit 2026-03-09T16:01:08.292259+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:09 vm01 bash[20728]: cluster 2026-03-09T16:01:08.742393+0000 mgr.y (mgr.14520) 299 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:09.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:09 vm01 bash[20728]: cluster 2026-03-09T16:01:08.742393+0000 mgr.y (mgr.14520) 299 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:10 vm09 bash[22983]: audit 2026-03-09T16:01:09.288674+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:10 vm09 bash[22983]: audit 2026-03-09T16:01:09.288674+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:10 vm09 bash[22983]: cluster 2026-03-09T16:01:09.292079+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T16:01:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:10 vm09 bash[22983]: cluster 2026-03-09T16:01:09.292079+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T16:01:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:10 vm09 bash[22983]: audit 2026-03-09T16:01:09.293555+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:10 vm09 bash[22983]: audit 2026-03-09T16:01:09.293555+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:10 vm09 bash[22983]: audit 2026-03-09T16:01:09.306231+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:10.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:10 vm09 bash[22983]: audit 2026-03-09T16:01:09.306231+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:10 vm01 bash[28152]: audit 2026-03-09T16:01:09.288674+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:10 vm01 bash[28152]: audit 2026-03-09T16:01:09.288674+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:10 vm01 bash[28152]: cluster 2026-03-09T16:01:09.292079+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T16:01:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:10 vm01 bash[28152]: cluster 2026-03-09T16:01:09.292079+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T16:01:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:10 vm01 bash[28152]: audit 2026-03-09T16:01:09.293555+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 
192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:10 vm01 bash[28152]: audit 2026-03-09T16:01:09.293555+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:10 vm01 bash[28152]: audit 2026-03-09T16:01:09.306231+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:10 vm01 bash[28152]: audit 2026-03-09T16:01:09.306231+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:10.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:10 vm01 bash[20728]: audit 2026-03-09T16:01:09.288674+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:10.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:10 vm01 bash[20728]: audit 2026-03-09T16:01:09.288674+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:10.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:10 vm01 bash[20728]: cluster 2026-03-09T16:01:09.292079+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T16:01:10.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:10 vm01 bash[20728]: cluster 2026-03-09T16:01:09.292079+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T16:01:10.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:10 vm01 bash[20728]: audit 2026-03-09T16:01:09.293555+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:10.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:10 vm01 bash[20728]: audit 2026-03-09T16:01:09.293555+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.101:0/2137161982' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:10.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:10 vm01 bash[20728]: audit 2026-03-09T16:01:09.306231+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:10.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:10 vm01 bash[20728]: audit 2026-03-09T16:01:09.306231+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.302465+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.302465+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: cluster 2026-03-09T16:01:10.306447+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: cluster 2026-03-09T16:01:10.306447+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.331734+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.331734+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.332402+0000 mon.a (mon.0) 2613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.332402+0000 mon.a (mon.0) 2613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.333763+0000 mon.c (mon.2) 339 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.333763+0000 mon.c (mon.2) 339 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.334145+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.334145+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.334819+0000 mon.c (mon.2) 340 : audit [INF] from='client.? 
192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.334819+0000 mon.c (mon.2) 340 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.335268+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: audit 2026-03-09T16:01:10.335268+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: cluster 2026-03-09T16:01:10.742695+0000 mgr.y (mgr.14520) 300 : cluster [DBG] pgmap v472: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:11.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:11 vm09 bash[22983]: cluster 2026-03-09T16:01:10.742695+0000 mgr.y (mgr.14520) 300 : cluster [DBG] pgmap v472: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.302465+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.302465+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: cluster 2026-03-09T16:01:10.306447+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: cluster 2026-03-09T16:01:10.306447+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.331734+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.331734+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 
192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.332402+0000 mon.a (mon.0) 2613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.332402+0000 mon.a (mon.0) 2613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.333763+0000 mon.c (mon.2) 339 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.333763+0000 mon.c (mon.2) 339 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.334145+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.334145+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.334819+0000 mon.c (mon.2) 340 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.334819+0000 mon.c (mon.2) 340 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.335268+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: audit 2026-03-09T16:01:10.335268+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: cluster 2026-03-09T16:01:10.742695+0000 mgr.y (mgr.14520) 300 : cluster [DBG] pgmap v472: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:11 vm01 bash[28152]: cluster 2026-03-09T16:01:10.742695+0000 mgr.y (mgr.14520) 300 : cluster [DBG] pgmap v472: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.302465+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.302465+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm01-59610-64"}]': finished 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: cluster 2026-03-09T16:01:10.306447+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: cluster 2026-03-09T16:01:10.306447+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.331734+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.331734+0000 mon.c (mon.2) 338 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.332402+0000 mon.a (mon.0) 2613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.332402+0000 mon.a (mon.0) 2613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.333763+0000 mon.c (mon.2) 339 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.333763+0000 mon.c (mon.2) 339 : audit [INF] from='client.? 
192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.334145+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.334145+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.334819+0000 mon.c (mon.2) 340 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.334819+0000 mon.c (mon.2) 340 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.335268+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: audit 2026-03-09T16:01:10.335268+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: cluster 2026-03-09T16:01:10.742695+0000 mgr.y (mgr.14520) 300 : cluster [DBG] pgmap v472: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:11 vm01 bash[20728]: cluster 2026-03-09T16:01:10.742695+0000 mgr.y (mgr.14520) 300 : cluster [DBG] pgmap v472: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:12 vm09 bash[22983]: audit 2026-03-09T16:01:11.323336+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:12 vm09 bash[22983]: audit 2026-03-09T16:01:11.323336+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:12 vm09 bash[22983]: audit 2026-03-09T16:01:11.328607+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:12 vm09 bash[22983]: audit 2026-03-09T16:01:11.328607+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:12 vm09 bash[22983]: cluster 2026-03-09T16:01:11.334022+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T16:01:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:12 vm09 bash[22983]: cluster 2026-03-09T16:01:11.334022+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T16:01:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:12 vm09 bash[22983]: audit 2026-03-09T16:01:11.334967+0000 mon.a (mon.0) 2618 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:12 vm09 bash[22983]: audit 2026-03-09T16:01:11.334967+0000 mon.a (mon.0) 2618 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:12 vm01 bash[28152]: audit 2026-03-09T16:01:11.323336+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:12 vm01 bash[28152]: audit 2026-03-09T16:01:11.323336+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:12 vm01 bash[28152]: audit 2026-03-09T16:01:11.328607+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:12 vm01 bash[28152]: audit 2026-03-09T16:01:11.328607+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 
192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:12 vm01 bash[28152]: cluster 2026-03-09T16:01:11.334022+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T16:01:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:12 vm01 bash[28152]: cluster 2026-03-09T16:01:11.334022+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T16:01:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:12 vm01 bash[28152]: audit 2026-03-09T16:01:11.334967+0000 mon.a (mon.0) 2618 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:12 vm01 bash[28152]: audit 2026-03-09T16:01:11.334967+0000 mon.a (mon.0) 2618 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:12 vm01 bash[20728]: audit 2026-03-09T16:01:11.323336+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:12 vm01 bash[20728]: audit 2026-03-09T16:01:11.323336+0000 mon.a (mon.0) 2616 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm01-59610-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:12 vm01 bash[20728]: audit 2026-03-09T16:01:11.328607+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:12 vm01 bash[20728]: audit 2026-03-09T16:01:11.328607+0000 mon.c (mon.2) 341 : audit [INF] from='client.? 
192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:12 vm01 bash[20728]: cluster 2026-03-09T16:01:11.334022+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T16:01:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:12 vm01 bash[20728]: cluster 2026-03-09T16:01:11.334022+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T16:01:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:12 vm01 bash[20728]: audit 2026-03-09T16:01:11.334967+0000 mon.a (mon.0) 2618 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:12.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:12 vm01 bash[20728]: audit 2026-03-09T16:01:11.334967+0000 mon.a (mon.0) 2618 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:13.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:01:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:01:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:01:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:13 vm09 bash[22983]: cluster 2026-03-09T16:01:12.344673+0000 mon.a (mon.0) 2619 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T16:01:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:13 vm09 bash[22983]: cluster 2026-03-09T16:01:12.344673+0000 mon.a (mon.0) 2619 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T16:01:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:13 vm09 bash[22983]: cluster 2026-03-09T16:01:12.743078+0000 mgr.y (mgr.14520) 301 : cluster [DBG] pgmap v475: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:13 vm09 bash[22983]: cluster 2026-03-09T16:01:12.743078+0000 mgr.y (mgr.14520) 301 : cluster [DBG] pgmap v475: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:13 vm09 bash[22983]: audit 2026-03-09T16:01:13.331974+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:13 vm09 bash[22983]: audit 2026-03-09T16:01:13.331974+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:13 vm09 bash[22983]: cluster 2026-03-09T16:01:13.342585+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T16:01:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:13 vm09 bash[22983]: cluster 2026-03-09T16:01:13.342585+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:13 vm01 bash[20728]: cluster 2026-03-09T16:01:12.344673+0000 mon.a (mon.0) 2619 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:13 vm01 bash[20728]: cluster 2026-03-09T16:01:12.344673+0000 mon.a (mon.0) 2619 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:13 vm01 bash[20728]: cluster 2026-03-09T16:01:12.743078+0000 mgr.y (mgr.14520) 301 : cluster [DBG] pgmap v475: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:13 vm01 bash[20728]: cluster 2026-03-09T16:01:12.743078+0000 mgr.y (mgr.14520) 301 : cluster [DBG] pgmap v475: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:13 vm01 bash[20728]: audit 2026-03-09T16:01:13.331974+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:13 vm01 bash[20728]: audit 2026-03-09T16:01:13.331974+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:13 vm01 bash[20728]: cluster 2026-03-09T16:01:13.342585+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:13 vm01 bash[20728]: cluster 2026-03-09T16:01:13.342585+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:13 vm01 bash[28152]: cluster 2026-03-09T16:01:12.344673+0000 mon.a (mon.0) 2619 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:13 vm01 bash[28152]: cluster 2026-03-09T16:01:12.344673+0000 mon.a (mon.0) 2619 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:13 vm01 bash[28152]: cluster 2026-03-09T16:01:12.743078+0000 mgr.y (mgr.14520) 301 : cluster [DBG] pgmap v475: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:13 vm01 bash[28152]: cluster 2026-03-09T16:01:12.743078+0000 mgr.y (mgr.14520) 301 : cluster [DBG] pgmap v475: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:13 vm01 bash[28152]: audit 2026-03-09T16:01:13.331974+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:13 vm01 bash[28152]: audit 2026-03-09T16:01:13.331974+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm01-59610-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:13 vm01 bash[28152]: cluster 2026-03-09T16:01:13.342585+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T16:01:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:13 vm01 bash[28152]: cluster 2026-03-09T16:01:13.342585+0000 mon.a (mon.0) 2621 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:13.520294+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:13.520294+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:13.520662+0000 mon.a (mon.0) 2622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:13.520662+0000 mon.a (mon.0) 2622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: cluster 2026-03-09T16:01:13.842872+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: cluster 2026-03-09T16:01:13.842872+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:14.208796+0000 mon.a (mon.0) 2624 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:14.208796+0000 mon.a (mon.0) 2624 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:14.209462+0000 mon.a (mon.0) 2625 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:14.209462+0000 mon.a (mon.0) 2625 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:14.334757+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:14.334757+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:14.339037+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:14.339037+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: cluster 2026-03-09T16:01:14.343861+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: cluster 2026-03-09T16:01:14.343861+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:14.344977+0000 mon.a (mon.0) 2628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:14 vm09 bash[22983]: audit 2026-03-09T16:01:14.344977+0000 mon.a (mon.0) 2628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:13.520294+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:13.520294+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:13.520662+0000 mon.a (mon.0) 2622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:13.520662+0000 mon.a (mon.0) 2622 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: cluster 2026-03-09T16:01:13.842872+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: cluster 2026-03-09T16:01:13.842872+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:14.208796+0000 mon.a (mon.0) 2624 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:14.208796+0000 mon.a (mon.0) 2624 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:14.209462+0000 mon.a (mon.0) 2625 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:14.209462+0000 mon.a (mon.0) 2625 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:14.334757+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:14.334757+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:14.339037+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:14.339037+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: cluster 2026-03-09T16:01:14.343861+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: cluster 2026-03-09T16:01:14.343861+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:14.344977+0000 mon.a (mon.0) 2628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:14 vm01 bash[28152]: audit 2026-03-09T16:01:14.344977+0000 mon.a (mon.0) 2628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:13.520294+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:13.520294+0000 mon.c (mon.2) 342 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:13.520662+0000 mon.a (mon.0) 2622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:13.520662+0000 mon.a (mon.0) 2622 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: cluster 2026-03-09T16:01:13.842872+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: cluster 2026-03-09T16:01:13.842872+0000 mon.a (mon.0) 2623 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:14.208796+0000 mon.a (mon.0) 2624 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:14.208796+0000 mon.a (mon.0) 2624 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:14.209462+0000 mon.a (mon.0) 2625 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:14.209462+0000 mon.a (mon.0) 2625 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:14.334757+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:14.334757+0000 mon.a (mon.0) 2626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:14.339037+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:14.339037+0000 mon.c (mon.2) 343 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: cluster 2026-03-09T16:01:14.343861+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: cluster 2026-03-09T16:01:14.343861+0000 mon.a (mon.0) 2627 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:14.344977+0000 mon.a (mon.0) 2628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:14 vm01 bash[20728]: audit 2026-03-09T16:01:14.344977+0000 mon.a (mon.0) 2628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:15 vm09 bash[22983]: cluster 2026-03-09T16:01:14.743528+0000 mgr.y (mgr.14520) 302 : cluster [DBG] pgmap v478: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:15 vm09 bash[22983]: cluster 2026-03-09T16:01:14.743528+0000 mgr.y (mgr.14520) 302 : cluster [DBG] pgmap v478: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:15.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:15 vm01 bash[28152]: cluster 2026-03-09T16:01:14.743528+0000 mgr.y (mgr.14520) 302 : cluster [DBG] pgmap v478: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:15.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:15 vm01 bash[28152]: cluster 2026-03-09T16:01:14.743528+0000 mgr.y (mgr.14520) 302 : cluster [DBG] pgmap v478: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:15.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:15 vm01 bash[20728]: cluster 2026-03-09T16:01:14.743528+0000 mgr.y (mgr.14520) 302 : cluster [DBG] pgmap v478: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:15.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:15 vm01 bash[20728]: cluster 2026-03-09T16:01:14.743528+0000 mgr.y (mgr.14520) 302 : cluster [DBG] pgmap v478: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP2163:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:5d165639:::164:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:f43765fc:::165:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: 
checking for 269:b4c720e9:::166:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:e694b040:::167:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:afa38db2:::168:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:77ba9f53:::169:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:87495034:::170:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:7c96bf0e:::171:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:dbe346cc:::172:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:e943ec24:::173:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:f97a9c0c:::174:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:6f26e74d:::175:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:4f95e106:::176:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:0e6f2f8f:::177:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:05db05f1:::178:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:38a78d66:::179:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:d095610b:::180:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:a1a9d709:::181:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:1e5d39db:::182:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:f7df4fb9:::183:head 2026-03-09T16:01:16.638 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:03a7f161:::184:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:ba70721e:::185:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:28e5662d:::186:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:973d52de:::187:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:4303eb1c:::188:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:b990b48e:::189:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:29b8165b:::190:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:3547f197:::191:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:7e260936:::192:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:1abec7b1:::193:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:10fdda93:::194:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:15817eea:::195:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:770bab57:::196:head 2026-03-09T16:01:16.639 
INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:ed9e13e7:::197:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:71471a8f:::198:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: checking for 269:10fb1d02:::199:head 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetWrite (8213 ms) 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetTrim 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: first is 1773072031 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,1773072033,1773072034,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,1773072033,1773072034,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,1773072033,1773072034,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,1773072033,1773072034,1773072036,1773072037,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,1773072033,1773072034,1773072036,1773072037,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,1773072033,1773072034,1773072036,1773072037,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,1773072033,1773072034,1773072036,1773072037,1773072039,1773072040,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,1773072033,1773072034,1773072036,1773072037,1773072039,1773072040,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072031,1773072033,1773072034,1773072036,1773072037,1773072039,1773072040,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072034,1773072036,1773072037,1773072039,1773072040,1773072042,1773072043,0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: first now 1773072034, trimmed 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetTrim (20324 ms) 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteOn2ndRead 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: foo0 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: verifying foo0 is eventually promoted 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteOn2ndRead (14288 ms) 2026-03-09T16:01:16.639 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ProxyRead 2026-03-09T16:01:16.883 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:01:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: cluster 2026-03-09T16:01:15.334818+0000 mon.a (mon.0) 2629 : cluster [INF] Health 
check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: cluster 2026-03-09T16:01:15.334818+0000 mon.a (mon.0) 2629 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.606254+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.606254+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: cluster 2026-03-09T16:01:15.611375+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: cluster 2026-03-09T16:01:15.611375+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.623469+0000 mon.c (mon.2) 344 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.623469+0000 mon.c (mon.2) 344 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.641666+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.641666+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.664387+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.664387+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.664887+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.664887+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.665589+0000 mon.c (mon.2) 346 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.665589+0000 mon.c (mon.2) 346 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.665835+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:16.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:16 vm09 bash[22983]: audit 2026-03-09T16:01:15.665835+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: cluster 2026-03-09T16:01:15.334818+0000 mon.a (mon.0) 2629 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: cluster 2026-03-09T16:01:15.334818+0000 mon.a (mon.0) 2629 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.606254+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.606254+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: cluster 2026-03-09T16:01:15.611375+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T16:01:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: cluster 2026-03-09T16:01:15.611375+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T16:01:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.623469+0000 mon.c (mon.2) 344 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.623469+0000 mon.c (mon.2) 344 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.641666+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.641666+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.664387+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.664387+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.664887+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.664887+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.665589+0000 mon.c (mon.2) 346 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.665589+0000 mon.c (mon.2) 346 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.665835+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:16 vm01 bash[28152]: audit 2026-03-09T16:01:15.665835+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: cluster 2026-03-09T16:01:15.334818+0000 mon.a (mon.0) 2629 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: cluster 2026-03-09T16:01:15.334818+0000 mon.a (mon.0) 2629 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.606254+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.606254+0000 mon.a (mon.0) 2630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]': finished 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: cluster 2026-03-09T16:01:15.611375+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: cluster 2026-03-09T16:01:15.611375+0000 mon.a (mon.0) 2631 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.623469+0000 mon.c (mon.2) 344 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.623469+0000 mon.c (mon.2) 344 : audit [INF] from='client.? 
192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.641666+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.641666+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.664387+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.664387+0000 mon.c (mon.2) 345 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.664887+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.664887+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.665589+0000 mon.c (mon.2) 346 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.665589+0000 mon.c (mon.2) 346 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.665835+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:16.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:16 vm01 bash[20728]: audit 2026-03-09T16:01:15.665835+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-49"}]: dispatch 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: audit 2026-03-09T16:01:16.454176+0000 mgr.y (mgr.14520) 303 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: audit 2026-03-09T16:01:16.454176+0000 mgr.y (mgr.14520) 303 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: audit 2026-03-09T16:01:16.634033+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: audit 2026-03-09T16:01:16.634033+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: cluster 2026-03-09T16:01:16.639015+0000 mon.a (mon.0) 2636 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: cluster 2026-03-09T16:01:16.639015+0000 mon.a (mon.0) 2636 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: audit 2026-03-09T16:01:16.642115+0000 mon.c (mon.2) 347 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: audit 2026-03-09T16:01:16.642115+0000 mon.c (mon.2) 347 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: audit 2026-03-09T16:01:16.643901+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: audit 2026-03-09T16:01:16.643901+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: cluster 2026-03-09T16:01:16.743956+0000 mgr.y (mgr.14520) 304 : cluster [DBG] pgmap v481: 260 pgs: 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:17 vm09 bash[22983]: cluster 2026-03-09T16:01:16.743956+0000 mgr.y (mgr.14520) 304 : cluster [DBG] pgmap v481: 260 pgs: 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: audit 2026-03-09T16:01:16.454176+0000 mgr.y (mgr.14520) 303 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: audit 2026-03-09T16:01:16.454176+0000 mgr.y (mgr.14520) 303 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: audit 2026-03-09T16:01:16.634033+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: audit 2026-03-09T16:01:16.634033+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: cluster 2026-03-09T16:01:16.639015+0000 mon.a (mon.0) 2636 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: cluster 2026-03-09T16:01:16.639015+0000 mon.a (mon.0) 2636 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: audit 2026-03-09T16:01:16.642115+0000 mon.c (mon.2) 347 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: audit 2026-03-09T16:01:16.642115+0000 mon.c (mon.2) 347 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: audit 2026-03-09T16:01:16.643901+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: audit 2026-03-09T16:01:16.643901+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: cluster 2026-03-09T16:01:16.743956+0000 mgr.y (mgr.14520) 304 : cluster [DBG] pgmap v481: 260 pgs: 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:17 vm01 bash[28152]: cluster 2026-03-09T16:01:16.743956+0000 mgr.y (mgr.14520) 304 : cluster [DBG] pgmap v481: 260 pgs: 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: audit 2026-03-09T16:01:16.454176+0000 mgr.y (mgr.14520) 303 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: audit 2026-03-09T16:01:16.454176+0000 mgr.y (mgr.14520) 303 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: audit 2026-03-09T16:01:16.634033+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: audit 2026-03-09T16:01:16.634033+0000 mon.a (mon.0) 2635 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: cluster 2026-03-09T16:01:16.639015+0000 mon.a (mon.0) 2636 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: cluster 2026-03-09T16:01:16.639015+0000 mon.a (mon.0) 2636 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: audit 2026-03-09T16:01:16.642115+0000 mon.c (mon.2) 347 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: audit 2026-03-09T16:01:16.642115+0000 mon.c (mon.2) 347 : audit [INF] from='client.? 192.168.123.101:0/2356508048' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: audit 2026-03-09T16:01:16.643901+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: audit 2026-03-09T16:01:16.643901+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]: dispatch 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: cluster 2026-03-09T16:01:16.743956+0000 mgr.y (mgr.14520) 304 : cluster [DBG] pgmap v481: 260 pgs: 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:17 vm01 bash[20728]: cluster 2026-03-09T16:01:16.743956+0000 mgr.y (mgr.14520) 304 : cluster [DBG] pgmap v481: 260 pgs: 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.735496+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.735496+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: cluster 2026-03-09T16:01:17.745415+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: cluster 2026-03-09T16:01:17.745415+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.762353+0000 mon.c (mon.2) 348 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.762353+0000 mon.c (mon.2) 348 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.773152+0000 mon.a (mon.0) 2640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.773152+0000 mon.a (mon.0) 2640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.783291+0000 mon.c (mon.2) 349 : audit [INF] from='client.? 
192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.783291+0000 mon.c (mon.2) 349 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.784165+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.784165+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.785263+0000 mon.c (mon.2) 350 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.785263+0000 mon.c (mon.2) 350 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.785660+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.785660+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.786296+0000 mon.c (mon.2) 351 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.786296+0000 mon.c (mon.2) 351 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.786699+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:17.786699+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:18.748771+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:18.748771+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:18.748854+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: audit 2026-03-09T16:01:18.748854+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: cluster 2026-03-09T16:01:18.756833+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T16:01:19.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:18 vm09 bash[22983]: cluster 2026-03-09T16:01:18.756833+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T16:01:19.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.735496+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:19.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.735496+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:19.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: cluster 2026-03-09T16:01:17.745415+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T16:01:19.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: cluster 2026-03-09T16:01:17.745415+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T16:01:19.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.762353+0000 mon.c (mon.2) 348 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.762353+0000 mon.c (mon.2) 348 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.773152+0000 mon.a (mon.0) 2640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.773152+0000 mon.a (mon.0) 2640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.783291+0000 mon.c (mon.2) 349 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.783291+0000 mon.c (mon.2) 349 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.784165+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.784165+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.785263+0000 mon.c (mon.2) 350 : audit [INF] from='client.? 
192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.785263+0000 mon.c (mon.2) 350 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.785660+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.785660+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.786296+0000 mon.c (mon.2) 351 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.786296+0000 mon.c (mon.2) 351 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.786699+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:17.786699+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:18.748771+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:18.748771+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:18.748854+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: audit 2026-03-09T16:01:18.748854+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: cluster 2026-03-09T16:01:18.756833+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:18 vm01 bash[28152]: cluster 2026-03-09T16:01:18.756833+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.735496+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.735496+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm01-59610-65"}]': finished 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: cluster 2026-03-09T16:01:17.745415+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: cluster 2026-03-09T16:01:17.745415+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.762353+0000 mon.c (mon.2) 348 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.762353+0000 mon.c (mon.2) 348 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.773152+0000 mon.a (mon.0) 2640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.773152+0000 mon.a (mon.0) 2640 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.783291+0000 mon.c (mon.2) 349 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.783291+0000 mon.c (mon.2) 349 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.784165+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.784165+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.785263+0000 mon.c (mon.2) 350 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.785263+0000 mon.c (mon.2) 350 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.785660+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.785660+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.786296+0000 mon.c (mon.2) 351 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.786296+0000 mon.c (mon.2) 351 : audit [INF] from='client.? 
192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.786699+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:17.786699+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:18.748771+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:18.748771+0000 mon.a (mon.0) 2644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:18.748854+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: audit 2026-03-09T16:01:18.748854+0000 mon.a (mon.0) 2645 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm01-59610-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: cluster 2026-03-09T16:01:18.756833+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T16:01:19.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:18 vm01 bash[20728]: cluster 2026-03-09T16:01:18.756833+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: cluster 2026-03-09T16:01:18.744400+0000 mgr.y (mgr.14520) 305 : cluster [DBG] pgmap v483: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: cluster 2026-03-09T16:01:18.744400+0000 mgr.y (mgr.14520) 305 : cluster [DBG] pgmap v483: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:18.764751+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:18.764751+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:18.772715+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:18.772715+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:18.810132+0000 mon.c (mon.2) 353 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:18.810132+0000 mon.c (mon.2) 353 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:18.810507+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:18.810507+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: cluster 2026-03-09T16:01:18.843597+0000 mon.a (mon.0) 2649 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: cluster 2026-03-09T16:01:18.843597+0000 mon.a (mon.0) 2649 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:19.752078+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:19.752078+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:19.766293+0000 mon.c (mon.2) 354 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:19.766293+0000 mon.c (mon.2) 354 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: cluster 2026-03-09T16:01:19.768393+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: cluster 2026-03-09T16:01:19.768393+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:19.768838+0000 mon.a (mon.0) 2652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:19 vm09 bash[22983]: audit 2026-03-09T16:01:19.768838+0000 mon.a (mon.0) 2652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: cluster 2026-03-09T16:01:18.744400+0000 mgr.y (mgr.14520) 305 : cluster [DBG] pgmap v483: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: cluster 2026-03-09T16:01:18.744400+0000 mgr.y (mgr.14520) 305 : cluster [DBG] pgmap v483: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:18.764751+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:18.764751+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:18.772715+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:18.772715+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:18.810132+0000 mon.c (mon.2) 353 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:18.810132+0000 mon.c (mon.2) 353 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:18.810507+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:18.810507+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: cluster 2026-03-09T16:01:18.843597+0000 mon.a (mon.0) 2649 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: cluster 2026-03-09T16:01:18.843597+0000 mon.a (mon.0) 2649 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:19.752078+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:19.752078+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:19.766293+0000 mon.c (mon.2) 354 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:19.766293+0000 mon.c (mon.2) 354 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: cluster 2026-03-09T16:01:19.768393+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: cluster 2026-03-09T16:01:19.768393+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:19.768838+0000 mon.a (mon.0) 2652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:19 vm01 bash[20728]: audit 2026-03-09T16:01:19.768838+0000 mon.a (mon.0) 2652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: cluster 2026-03-09T16:01:18.744400+0000 mgr.y (mgr.14520) 305 : cluster [DBG] pgmap v483: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: cluster 2026-03-09T16:01:18.744400+0000 mgr.y (mgr.14520) 305 : cluster [DBG] pgmap v483: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 792 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:18.764751+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:18.764751+0000 mon.c (mon.2) 352 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:18.772715+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:18.772715+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:18.810132+0000 mon.c (mon.2) 353 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:18.810132+0000 mon.c (mon.2) 353 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:18.810507+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:18.810507+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:01:20.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: cluster 2026-03-09T16:01:18.843597+0000 mon.a (mon.0) 2649 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: cluster 2026-03-09T16:01:18.843597+0000 mon.a (mon.0) 2649 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:19.752078+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:19.752078+0000 mon.a (mon.0) 2650 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:01:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:19.766293+0000 mon.c (mon.2) 354 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:19.766293+0000 mon.c (mon.2) 354 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: cluster 2026-03-09T16:01:19.768393+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T16:01:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: cluster 2026-03-09T16:01:19.768393+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T16:01:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:19.768838+0000 mon.a (mon.0) 2652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:20.178 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:19 vm01 bash[28152]: audit 2026-03-09T16:01:19.768838+0000 mon.a (mon.0) 2652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: cluster 2026-03-09T16:01:20.744709+0000 mgr.y (mgr.14520) 306 : cluster [DBG] pgmap v486: 292 pgs: 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 498 B/s wr, 1 op/s 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: cluster 2026-03-09T16:01:20.744709+0000 mgr.y (mgr.14520) 306 : cluster [DBG] pgmap v486: 292 pgs: 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 498 B/s wr, 1 op/s 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: audit 2026-03-09T16:01:20.755556+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: audit 2026-03-09T16:01:20.755556+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: audit 2026-03-09T16:01:20.756176+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: audit 2026-03-09T16:01:20.756176+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: audit 2026-03-09T16:01:20.762914+0000 mon.c (mon.2) 355 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: audit 2026-03-09T16:01:20.762914+0000 mon.c (mon.2) 355 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: cluster 2026-03-09T16:01:20.763773+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: cluster 2026-03-09T16:01:20.763773+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: audit 2026-03-09T16:01:20.767889+0000 mon.a (mon.0) 2656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:21 vm09 bash[22983]: audit 2026-03-09T16:01:20.767889+0000 mon.a (mon.0) 2656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: cluster 2026-03-09T16:01:20.744709+0000 mgr.y (mgr.14520) 306 : cluster [DBG] pgmap v486: 292 pgs: 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 498 B/s wr, 1 op/s 2026-03-09T16:01:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: cluster 2026-03-09T16:01:20.744709+0000 mgr.y (mgr.14520) 306 : cluster [DBG] pgmap v486: 292 pgs: 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 498 B/s wr, 1 op/s 2026-03-09T16:01:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: audit 2026-03-09T16:01:20.755556+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: audit 2026-03-09T16:01:20.755556+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: audit 2026-03-09T16:01:20.756176+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: audit 2026-03-09T16:01:20.756176+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: audit 2026-03-09T16:01:20.762914+0000 mon.c (mon.2) 355 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: audit 2026-03-09T16:01:20.762914+0000 mon.c (mon.2) 355 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: cluster 2026-03-09T16:01:20.763773+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: cluster 2026-03-09T16:01:20.763773+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: audit 2026-03-09T16:01:20.767889+0000 mon.a (mon.0) 2656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:21 vm01 bash[28152]: audit 2026-03-09T16:01:20.767889+0000 mon.a (mon.0) 2656 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: cluster 2026-03-09T16:01:20.744709+0000 mgr.y (mgr.14520) 306 : cluster [DBG] pgmap v486: 292 pgs: 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 498 B/s wr, 1 op/s 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: cluster 2026-03-09T16:01:20.744709+0000 mgr.y (mgr.14520) 306 : cluster [DBG] pgmap v486: 292 pgs: 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 498 B/s wr, 1 op/s 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: audit 2026-03-09T16:01:20.755556+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: audit 2026-03-09T16:01:20.755556+0000 mon.a (mon.0) 2653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm01-59610-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: audit 2026-03-09T16:01:20.756176+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: audit 2026-03-09T16:01:20.756176+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: audit 2026-03-09T16:01:20.762914+0000 mon.c (mon.2) 355 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: audit 2026-03-09T16:01:20.762914+0000 mon.c (mon.2) 355 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: cluster 2026-03-09T16:01:20.763773+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: cluster 2026-03-09T16:01:20.763773+0000 mon.a (mon.0) 2655 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: audit 2026-03-09T16:01:20.767889+0000 mon.a (mon.0) 2656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:21 vm01 bash[20728]: audit 2026-03-09T16:01:20.767889+0000 mon.a (mon.0) 2656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]: dispatch 2026-03-09T16:01:23.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:22 vm09 bash[22983]: cluster 2026-03-09T16:01:21.755440+0000 mon.a (mon.0) 2657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:23.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:22 vm09 bash[22983]: cluster 2026-03-09T16:01:21.755440+0000 mon.a (mon.0) 2657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:23.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:22 vm09 bash[22983]: audit 2026-03-09T16:01:21.759327+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]': finished 2026-03-09T16:01:23.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:22 vm09 bash[22983]: audit 2026-03-09T16:01:21.759327+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]': finished 2026-03-09T16:01:23.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:22 vm09 bash[22983]: cluster 2026-03-09T16:01:21.769358+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T16:01:23.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:22 vm09 bash[22983]: cluster 2026-03-09T16:01:21.769358+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T16:01:23.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:22 vm09 bash[22983]: audit 2026-03-09T16:01:21.821854+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:23.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:22 vm09 bash[22983]: audit 2026-03-09T16:01:21.821854+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:23.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:22 vm09 bash[22983]: audit 2026-03-09T16:01:21.822208+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:23.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:22 vm09 bash[22983]: audit 2026-03-09T16:01:21.822208+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:23.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:22 vm01 bash[28152]: cluster 2026-03-09T16:01:21.755440+0000 mon.a (mon.0) 2657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:23.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:22 vm01 bash[28152]: cluster 2026-03-09T16:01:21.755440+0000 mon.a (mon.0) 2657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:23.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:22 vm01 bash[28152]: audit 2026-03-09T16:01:21.759327+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]': finished 2026-03-09T16:01:23.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:22 vm01 bash[28152]: audit 2026-03-09T16:01:21.759327+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]': finished 2026-03-09T16:01:23.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:22 vm01 bash[28152]: cluster 2026-03-09T16:01:21.769358+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T16:01:23.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:22 vm01 bash[28152]: cluster 2026-03-09T16:01:21.769358+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T16:01:23.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:22 vm01 bash[28152]: audit 2026-03-09T16:01:21.821854+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:22 vm01 bash[28152]: audit 2026-03-09T16:01:21.821854+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:22 vm01 bash[28152]: audit 2026-03-09T16:01:21.822208+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:22 vm01 bash[28152]: audit 2026-03-09T16:01:21.822208+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:01:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:01:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:22 vm01 bash[20728]: cluster 2026-03-09T16:01:21.755440+0000 mon.a (mon.0) 2657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:22 vm01 bash[20728]: cluster 2026-03-09T16:01:21.755440+0000 mon.a (mon.0) 2657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:22 vm01 bash[20728]: audit 2026-03-09T16:01:21.759327+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]': finished 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:22 vm01 bash[20728]: audit 2026-03-09T16:01:21.759327+0000 mon.a (mon.0) 2658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-51", "mode": "writeback"}]': finished 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:22 vm01 bash[20728]: cluster 2026-03-09T16:01:21.769358+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:22 vm01 bash[20728]: cluster 2026-03-09T16:01:21.769358+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:22 vm01 bash[20728]: audit 2026-03-09T16:01:21.821854+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:22 vm01 bash[20728]: audit 2026-03-09T16:01:21.821854+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:22 vm01 bash[20728]: audit 2026-03-09T16:01:21.822208+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:23.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:22 vm01 bash[20728]: audit 2026-03-09T16:01:21.822208+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: cluster 2026-03-09T16:01:22.745035+0000 mgr.y (mgr.14520) 307 : cluster [DBG] pgmap v489: 300 pgs: 8 unknown, 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: cluster 2026-03-09T16:01:22.745035+0000 mgr.y (mgr.14520) 307 : cluster [DBG] pgmap v489: 300 pgs: 8 unknown, 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: audit 2026-03-09T16:01:22.794037+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: audit 2026-03-09T16:01:22.794037+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: audit 2026-03-09T16:01:22.805701+0000 mon.c (mon.2) 357 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: audit 2026-03-09T16:01:22.805701+0000 mon.c (mon.2) 357 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: cluster 2026-03-09T16:01:22.809548+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: cluster 2026-03-09T16:01:22.809548+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: audit 2026-03-09T16:01:22.810720+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: audit 2026-03-09T16:01:22.810720+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: audit 2026-03-09T16:01:22.811075+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: audit 2026-03-09T16:01:22.811075+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: audit 2026-03-09T16:01:22.811600+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:23 vm09 bash[22983]: audit 2026-03-09T16:01:22.811600+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: cluster 2026-03-09T16:01:22.745035+0000 mgr.y (mgr.14520) 307 : cluster [DBG] pgmap v489: 300 pgs: 8 unknown, 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: cluster 2026-03-09T16:01:22.745035+0000 mgr.y (mgr.14520) 307 : cluster [DBG] pgmap v489: 300 pgs: 8 unknown, 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: audit 2026-03-09T16:01:22.794037+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: audit 2026-03-09T16:01:22.794037+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: audit 2026-03-09T16:01:22.805701+0000 mon.c (mon.2) 357 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: audit 2026-03-09T16:01:22.805701+0000 mon.c (mon.2) 357 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: cluster 2026-03-09T16:01:22.809548+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: cluster 2026-03-09T16:01:22.809548+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: audit 2026-03-09T16:01:22.810720+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: audit 2026-03-09T16:01:22.810720+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: audit 2026-03-09T16:01:22.811075+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: audit 2026-03-09T16:01:22.811075+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: audit 2026-03-09T16:01:22.811600+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:23 vm01 bash[28152]: audit 2026-03-09T16:01:22.811600+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: cluster 2026-03-09T16:01:22.745035+0000 mgr.y (mgr.14520) 307 : cluster [DBG] pgmap v489: 300 pgs: 8 unknown, 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: cluster 2026-03-09T16:01:22.745035+0000 mgr.y (mgr.14520) 307 : cluster [DBG] pgmap v489: 300 pgs: 8 unknown, 5 creating+activating, 11 creating+peering, 276 active+clean; 8.3 MiB data, 810 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: audit 2026-03-09T16:01:22.794037+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: audit 2026-03-09T16:01:22.794037+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: audit 2026-03-09T16:01:22.805701+0000 mon.c (mon.2) 357 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: audit 2026-03-09T16:01:22.805701+0000 mon.c (mon.2) 357 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: cluster 2026-03-09T16:01:22.809548+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: cluster 2026-03-09T16:01:22.809548+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: audit 2026-03-09T16:01:22.810720+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: audit 2026-03-09T16:01:22.810720+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: audit 2026-03-09T16:01:22.811075+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: audit 2026-03-09T16:01:22.811075+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: audit 2026-03-09T16:01:22.811600+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:23 vm01 bash[20728]: audit 2026-03-09T16:01:22.811600+0000 mon.a (mon.0) 2664 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:24.812 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ 2026-03-09T16:01:24.812 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP2 (3067 ms) 2026-03-09T16:01:24.812 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPP 2026-03-09T16:01:24.812 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPP (7243 ms) 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPPNS 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPPNS (7067 ms) 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.StatRemovePP 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.StatRemovePP (7035 ms) 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ExecuteClassPP 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ExecuteClassPP (7058 ms) 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.OmapPP 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.OmapPP (7466 ms) 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.MultiWritePP 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ OK ] LibRadosAioEC.MultiWritePP (7040 ms) 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC (140709 ms total) 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [----------] Global test environment tear-down 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [==========] 57 tests from 4 test suites ran. (290794 ms total) 2026-03-09T16:01:24.813 INFO:tasks.workunit.client.0.vm01.stdout: api_aio_pp: [ PASSED ] 57 tests. 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.797299+0000 mon.a (mon.0) 2665 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.797299+0000 mon.a (mon.0) 2665 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.797417+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.797417+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: cluster 2026-03-09T16:01:23.802624+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: cluster 2026-03-09T16:01:23.802624+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.803397+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.803397+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.803713+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.803713+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.804535+0000 mon.a (mon.0) 2668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.804535+0000 mon.a (mon.0) 2668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.804612+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:23.804612+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: cluster 2026-03-09T16:01:24.797594+0000 mon.a (mon.0) 2670 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: cluster 2026-03-09T16:01:24.797594+0000 mon.a (mon.0) 2670 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:24.800799+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:24.800799+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:24.800897+0000 mon.a (mon.0) 2672 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:24.800897+0000 mon.a (mon.0) 2672 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: cluster 2026-03-09T16:01:24.804771+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: cluster 2026-03-09T16:01:24.804771+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:24.810917+0000 mon.c (mon.2) 361 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:24.810917+0000 mon.c (mon.2) 361 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:24.811151+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:24 vm09 bash[22983]: audit 2026-03-09T16:01:24.811151+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.797299+0000 mon.a (mon.0) 2665 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.797299+0000 mon.a (mon.0) 2665 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.797417+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.797417+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: cluster 2026-03-09T16:01:23.802624+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: cluster 2026-03-09T16:01:23.802624+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.803397+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.803397+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.803713+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.803713+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.804535+0000 mon.a (mon.0) 2668 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.804535+0000 mon.a (mon.0) 2668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.804612+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:23.804612+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: cluster 2026-03-09T16:01:24.797594+0000 mon.a (mon.0) 2670 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: cluster 2026-03-09T16:01:24.797594+0000 mon.a (mon.0) 2670 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:24.800799+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:24.800799+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:24.800897+0000 mon.a (mon.0) 2672 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:24.800897+0000 mon.a (mon.0) 2672 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: cluster 2026-03-09T16:01:24.804771+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: cluster 2026-03-09T16:01:24.804771+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:24.810917+0000 mon.c (mon.2) 361 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:24.810917+0000 mon.c (mon.2) 361 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:24.811151+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:24 vm01 bash[28152]: audit 2026-03-09T16:01:24.811151+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.797299+0000 mon.a (mon.0) 2665 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.797299+0000 mon.a (mon.0) 2665 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.797417+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.797417+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: cluster 2026-03-09T16:01:23.802624+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: cluster 2026-03-09T16:01:23.802624+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.803397+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.803397+0000 mon.c (mon.2) 359 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.803713+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.803713+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.101:0/31756559' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.804535+0000 mon.a (mon.0) 2668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.804535+0000 mon.a (mon.0) 2668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.804612+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:23.804612+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: cluster 2026-03-09T16:01:24.797594+0000 mon.a (mon.0) 2670 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: cluster 2026-03-09T16:01:24.797594+0000 mon.a (mon.0) 2670 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:24.800799+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:24.800799+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:24.800897+0000 mon.a (mon.0) 2672 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:24.800897+0000 mon.a (mon.0) 2672 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm01-59610-66"}]': finished 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: cluster 2026-03-09T16:01:24.804771+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: cluster 2026-03-09T16:01:24.804771+0000 mon.a (mon.0) 2673 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:24.810917+0000 mon.c (mon.2) 361 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:24.810917+0000 mon.c (mon.2) 361 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:24.811151+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:24 vm01 bash[20728]: audit 2026-03-09T16:01:24.811151+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: cluster 2026-03-09T16:01:24.745729+0000 mgr.y (mgr.14520) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: cluster 2026-03-09T16:01:24.745729+0000 mgr.y (mgr.14520) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: cluster 2026-03-09T16:01:24.812433+0000 mon.a (mon.0) 2675 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: cluster 2026-03-09T16:01:24.812433+0000 mon.a (mon.0) 2675 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: audit 2026-03-09T16:01:25.804377+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: audit 2026-03-09T16:01:25.804377+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: cluster 2026-03-09T16:01:25.808239+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: cluster 2026-03-09T16:01:25.808239+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: audit 2026-03-09T16:01:25.808915+0000 mon.c (mon.2) 362 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: audit 2026-03-09T16:01:25.808915+0000 mon.c (mon.2) 362 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: audit 2026-03-09T16:01:25.809926+0000 mon.a (mon.0) 2678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:25 vm09 bash[22983]: audit 2026-03-09T16:01:25.809926+0000 mon.a (mon.0) 2678 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: cluster 2026-03-09T16:01:24.745729+0000 mgr.y (mgr.14520) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: cluster 2026-03-09T16:01:24.745729+0000 mgr.y (mgr.14520) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: cluster 2026-03-09T16:01:24.812433+0000 mon.a (mon.0) 2675 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: cluster 2026-03-09T16:01:24.812433+0000 mon.a (mon.0) 2675 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: audit 2026-03-09T16:01:25.804377+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: audit 2026-03-09T16:01:25.804377+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: cluster 2026-03-09T16:01:25.808239+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: cluster 2026-03-09T16:01:25.808239+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: audit 2026-03-09T16:01:25.808915+0000 mon.c (mon.2) 362 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: audit 2026-03-09T16:01:25.808915+0000 mon.c (mon.2) 362 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: audit 2026-03-09T16:01:25.809926+0000 mon.a (mon.0) 2678 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:25 vm01 bash[28152]: audit 2026-03-09T16:01:25.809926+0000 mon.a (mon.0) 2678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: cluster 2026-03-09T16:01:24.745729+0000 mgr.y (mgr.14520) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: cluster 2026-03-09T16:01:24.745729+0000 mgr.y (mgr.14520) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: cluster 2026-03-09T16:01:24.812433+0000 mon.a (mon.0) 2675 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: cluster 2026-03-09T16:01:24.812433+0000 mon.a (mon.0) 2675 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: audit 2026-03-09T16:01:25.804377+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:01:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: audit 2026-03-09T16:01:25.804377+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:01:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: cluster 2026-03-09T16:01:25.808239+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T16:01:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: cluster 2026-03-09T16:01:25.808239+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T16:01:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: audit 2026-03-09T16:01:25.808915+0000 mon.c (mon.2) 362 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: audit 2026-03-09T16:01:25.808915+0000 mon.c (mon.2) 362 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: audit 2026-03-09T16:01:25.809926+0000 mon.a (mon.0) 2678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:25 vm01 bash[20728]: audit 2026-03-09T16:01:25.809926+0000 mon.a (mon.0) 2678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:01:26.883 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:01:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:01:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:27 vm09 bash[22983]: audit 2026-03-09T16:01:26.464890+0000 mgr.y (mgr.14520) 309 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:27 vm09 bash[22983]: audit 2026-03-09T16:01:26.464890+0000 mgr.y (mgr.14520) 309 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:27 vm09 bash[22983]: cluster 2026-03-09T16:01:26.746027+0000 mgr.y (mgr.14520) 310 : cluster [DBG] pgmap v495: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:27 vm09 bash[22983]: cluster 2026-03-09T16:01:26.746027+0000 mgr.y (mgr.14520) 310 : cluster [DBG] pgmap v495: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:27 vm09 bash[22983]: audit 2026-03-09T16:01:26.807965+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:01:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:27 vm09 bash[22983]: audit 2026-03-09T16:01:26.807965+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:01:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:27 vm09 bash[22983]: cluster 2026-03-09T16:01:26.813891+0000 mon.a (mon.0) 2680 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T16:01:28.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:27 vm09 bash[22983]: cluster 2026-03-09T16:01:26.813891+0000 mon.a (mon.0) 2680 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T16:01:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:27 vm01 bash[28152]: audit 2026-03-09T16:01:26.464890+0000 mgr.y (mgr.14520) 309 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:27 vm01 bash[28152]: audit 2026-03-09T16:01:26.464890+0000 mgr.y (mgr.14520) 309 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:27 vm01 bash[28152]: cluster 2026-03-09T16:01:26.746027+0000 mgr.y (mgr.14520) 310 : cluster [DBG] pgmap v495: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:27 vm01 bash[28152]: cluster 2026-03-09T16:01:26.746027+0000 mgr.y (mgr.14520) 310 : cluster [DBG] pgmap v495: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:27 vm01 bash[28152]: audit 2026-03-09T16:01:26.807965+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:01:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:27 vm01 bash[28152]: audit 2026-03-09T16:01:26.807965+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:01:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:27 vm01 bash[28152]: cluster 2026-03-09T16:01:26.813891+0000 mon.a (mon.0) 2680 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T16:01:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:27 vm01 bash[28152]: cluster 2026-03-09T16:01:26.813891+0000 mon.a (mon.0) 2680 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T16:01:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:27 vm01 bash[20728]: audit 2026-03-09T16:01:26.464890+0000 mgr.y (mgr.14520) 309 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:27 vm01 bash[20728]: audit 2026-03-09T16:01:26.464890+0000 mgr.y (mgr.14520) 309 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:27 vm01 bash[20728]: cluster 2026-03-09T16:01:26.746027+0000 mgr.y (mgr.14520) 310 : cluster [DBG] pgmap v495: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:27 vm01 bash[20728]: cluster 2026-03-09T16:01:26.746027+0000 mgr.y (mgr.14520) 310 : cluster [DBG] pgmap v495: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T16:01:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:27 vm01 bash[20728]: audit 2026-03-09T16:01:26.807965+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:01:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:27 vm01 bash[20728]: audit 2026-03-09T16:01:26.807965+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:01:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:27 vm01 bash[20728]: cluster 2026-03-09T16:01:26.813891+0000 mon.a (mon.0) 2680 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T16:01:28.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:27 vm01 bash[20728]: cluster 2026-03-09T16:01:26.813891+0000 mon.a (mon.0) 2680 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T16:01:29.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:28 vm09 bash[22983]: cluster 2026-03-09T16:01:28.813951+0000 mon.a (mon.0) 2681 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:01:29.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:28 vm09 bash[22983]: cluster 2026-03-09T16:01:28.813951+0000 mon.a (mon.0) 2681 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:01:29.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:28 vm01 bash[28152]: cluster 2026-03-09T16:01:28.813951+0000 mon.a (mon.0) 2681 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:01:29.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:28 vm01 bash[28152]: cluster 2026-03-09T16:01:28.813951+0000 mon.a (mon.0) 2681 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:01:29.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:28 vm01 bash[20728]: cluster 2026-03-09T16:01:28.813951+0000 mon.a (mon.0) 2681 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:01:29.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:28 vm01 bash[20728]: cluster 2026-03-09T16:01:28.813951+0000 mon.a (mon.0) 2681 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:01:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:29 vm09 bash[22983]: cluster 2026-03-09T16:01:28.746552+0000 mgr.y (mgr.14520) 311 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:01:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:29 vm09 bash[22983]: cluster 2026-03-09T16:01:28.746552+0000 mgr.y (mgr.14520) 311 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:01:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:29 vm09 bash[22983]: audit 2026-03-09T16:01:29.221168+0000 mon.a (mon.0) 2682 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:29 vm09 bash[22983]: audit 2026-03-09T16:01:29.221168+0000 mon.a (mon.0) 2682 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:29 vm09 bash[22983]: audit 2026-03-09T16:01:29.222913+0000 mon.a (mon.0) 2683 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:30.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:29 vm09 bash[22983]: audit 
2026-03-09T16:01:29.222913+0000 mon.a (mon.0) 2683 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:29 vm01 bash[28152]: cluster 2026-03-09T16:01:28.746552+0000 mgr.y (mgr.14520) 311 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:29 vm01 bash[28152]: cluster 2026-03-09T16:01:28.746552+0000 mgr.y (mgr.14520) 311 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:29 vm01 bash[28152]: audit 2026-03-09T16:01:29.221168+0000 mon.a (mon.0) 2682 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:29 vm01 bash[28152]: audit 2026-03-09T16:01:29.221168+0000 mon.a (mon.0) 2682 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:29 vm01 bash[28152]: audit 2026-03-09T16:01:29.222913+0000 mon.a (mon.0) 2683 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:29 vm01 bash[28152]: audit 2026-03-09T16:01:29.222913+0000 mon.a (mon.0) 2683 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:29 vm01 bash[20728]: cluster 2026-03-09T16:01:28.746552+0000 mgr.y (mgr.14520) 311 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:29 vm01 bash[20728]: cluster 2026-03-09T16:01:28.746552+0000 mgr.y (mgr.14520) 311 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:29 vm01 bash[20728]: audit 2026-03-09T16:01:29.221168+0000 mon.a (mon.0) 2682 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:29 vm01 bash[20728]: audit 2026-03-09T16:01:29.221168+0000 mon.a (mon.0) 2682 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:29 vm01 bash[20728]: audit 2026-03-09T16:01:29.222913+0000 mon.a (mon.0) 2683 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:29 vm01 bash[20728]: audit 2026-03-09T16:01:29.222913+0000 mon.a (mon.0) 2683 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:32.059 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:32 
vm01 bash[20728]: cluster 2026-03-09T16:01:30.747249+0000 mgr.y (mgr.14520) 312 : cluster [DBG] pgmap v498: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:32.059 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:32 vm01 bash[20728]: cluster 2026-03-09T16:01:30.747249+0000 mgr.y (mgr.14520) 312 : cluster [DBG] pgmap v498: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:32 vm09 bash[22983]: cluster 2026-03-09T16:01:30.747249+0000 mgr.y (mgr.14520) 312 : cluster [DBG] pgmap v498: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:32 vm09 bash[22983]: cluster 2026-03-09T16:01:30.747249+0000 mgr.y (mgr.14520) 312 : cluster [DBG] pgmap v498: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:32 vm01 bash[28152]: cluster 2026-03-09T16:01:30.747249+0000 mgr.y (mgr.14520) 312 : cluster [DBG] pgmap v498: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:32 vm01 bash[28152]: cluster 2026-03-09T16:01:30.747249+0000 mgr.y (mgr.14520) 312 : cluster [DBG] pgmap v498: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:33.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:01:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:01:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:01:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:34 vm09 bash[22983]: cluster 2026-03-09T16:01:32.747636+0000 mgr.y (mgr.14520) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 644 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:01:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:34 vm09 bash[22983]: cluster 2026-03-09T16:01:32.747636+0000 mgr.y (mgr.14520) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 644 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:01:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:34 vm09 bash[22983]: audit 2026-03-09T16:01:33.676538+0000 mon.a (mon.0) 2684 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:01:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:34 vm09 bash[22983]: audit 2026-03-09T16:01:33.676538+0000 mon.a (mon.0) 2684 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:01:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:34 vm09 bash[22983]: audit 2026-03-09T16:01:34.002230+0000 mon.a (mon.0) 2685 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:01:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:34 vm09 bash[22983]: audit 2026-03-09T16:01:34.002230+0000 mon.a (mon.0) 2685 : audit [DBG] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:01:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:34 vm09 bash[22983]: audit 2026-03-09T16:01:34.002877+0000 mon.a (mon.0) 2686 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:01:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:34 vm09 bash[22983]: audit 2026-03-09T16:01:34.002877+0000 mon.a (mon.0) 2686 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:01:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:34 vm09 bash[22983]: audit 2026-03-09T16:01:34.008794+0000 mon.a (mon.0) 2687 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:34 vm09 bash[22983]: audit 2026-03-09T16:01:34.008794+0000 mon.a (mon.0) 2687 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:34 vm01 bash[28152]: cluster 2026-03-09T16:01:32.747636+0000 mgr.y (mgr.14520) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 644 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:01:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:34 vm01 bash[28152]: cluster 2026-03-09T16:01:32.747636+0000 mgr.y (mgr.14520) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 644 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:01:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:34 vm01 bash[28152]: audit 2026-03-09T16:01:33.676538+0000 mon.a (mon.0) 2684 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:01:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:34 vm01 bash[28152]: audit 2026-03-09T16:01:33.676538+0000 mon.a (mon.0) 2684 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:01:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:34 vm01 bash[28152]: audit 2026-03-09T16:01:34.002230+0000 mon.a (mon.0) 2685 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:01:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:34 vm01 bash[28152]: audit 2026-03-09T16:01:34.002230+0000 mon.a (mon.0) 2685 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:34 vm01 bash[28152]: audit 2026-03-09T16:01:34.002877+0000 mon.a (mon.0) 2686 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:34 vm01 bash[28152]: audit 2026-03-09T16:01:34.002877+0000 mon.a (mon.0) 2686 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
16:01:34 vm01 bash[28152]: audit 2026-03-09T16:01:34.008794+0000 mon.a (mon.0) 2687 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:34 vm01 bash[28152]: audit 2026-03-09T16:01:34.008794+0000 mon.a (mon.0) 2687 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:34 vm01 bash[20728]: cluster 2026-03-09T16:01:32.747636+0000 mgr.y (mgr.14520) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 644 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:34 vm01 bash[20728]: cluster 2026-03-09T16:01:32.747636+0000 mgr.y (mgr.14520) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 644 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:34 vm01 bash[20728]: audit 2026-03-09T16:01:33.676538+0000 mon.a (mon.0) 2684 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:34 vm01 bash[20728]: audit 2026-03-09T16:01:33.676538+0000 mon.a (mon.0) 2684 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:34 vm01 bash[20728]: audit 2026-03-09T16:01:34.002230+0000 mon.a (mon.0) 2685 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:34 vm01 bash[20728]: audit 2026-03-09T16:01:34.002230+0000 mon.a (mon.0) 2685 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:34 vm01 bash[20728]: audit 2026-03-09T16:01:34.002877+0000 mon.a (mon.0) 2686 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:34 vm01 bash[20728]: audit 2026-03-09T16:01:34.002877+0000 mon.a (mon.0) 2686 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:34 vm01 bash[20728]: audit 2026-03-09T16:01:34.008794+0000 mon.a (mon.0) 2687 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:34 vm01 bash[20728]: audit 2026-03-09T16:01:34.008794+0000 mon.a (mon.0) 2687 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:36 vm09 bash[22983]: cluster 2026-03-09T16:01:34.748400+0000 mgr.y (mgr.14520) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
16:01:36 vm09 bash[22983]: cluster 2026-03-09T16:01:34.748400+0000 mgr.y (mgr.14520) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:36 vm01 bash[28152]: cluster 2026-03-09T16:01:34.748400+0000 mgr.y (mgr.14520) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:36 vm01 bash[28152]: cluster 2026-03-09T16:01:34.748400+0000 mgr.y (mgr.14520) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:36 vm01 bash[20728]: cluster 2026-03-09T16:01:34.748400+0000 mgr.y (mgr.14520) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:36 vm01 bash[20728]: cluster 2026-03-09T16:01:34.748400+0000 mgr.y (mgr.14520) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:36.883 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:01:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:01:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:37 vm09 bash[22983]: audit 2026-03-09T16:01:36.821066+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:37 vm09 bash[22983]: audit 2026-03-09T16:01:36.821066+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:37 vm09 bash[22983]: audit 2026-03-09T16:01:36.821559+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:37 vm09 bash[22983]: audit 2026-03-09T16:01:36.821559+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:37 vm01 bash[28152]: audit 2026-03-09T16:01:36.821066+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:37 vm01 bash[28152]: audit 2026-03-09T16:01:36.821066+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:37 vm01 bash[28152]: audit 2026-03-09T16:01:36.821559+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:37 vm01 bash[28152]: audit 2026-03-09T16:01:36.821559+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:37 vm01 bash[20728]: audit 2026-03-09T16:01:36.821066+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:37 vm01 bash[20728]: audit 2026-03-09T16:01:36.821066+0000 mon.c (mon.2) 363 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:37 vm01 bash[20728]: audit 2026-03-09T16:01:36.821559+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:37 vm01 bash[20728]: audit 2026-03-09T16:01:36.821559+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: audit 2026-03-09T16:01:36.473967+0000 mgr.y (mgr.14520) 315 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: audit 2026-03-09T16:01:36.473967+0000 mgr.y (mgr.14520) 315 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: cluster 2026-03-09T16:01:36.748724+0000 mgr.y (mgr.14520) 316 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: cluster 2026-03-09T16:01:36.748724+0000 mgr.y (mgr.14520) 316 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: audit 2026-03-09T16:01:37.075872+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: audit 2026-03-09T16:01:37.075872+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: cluster 2026-03-09T16:01:37.078881+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: cluster 2026-03-09T16:01:37.078881+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: audit 2026-03-09T16:01:37.089037+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: audit 2026-03-09T16:01:37.089037+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: audit 2026-03-09T16:01:37.090210+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:38 vm09 bash[22983]: audit 2026-03-09T16:01:37.090210+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: audit 2026-03-09T16:01:36.473967+0000 mgr.y (mgr.14520) 315 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: audit 2026-03-09T16:01:36.473967+0000 mgr.y (mgr.14520) 315 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: cluster 2026-03-09T16:01:36.748724+0000 mgr.y (mgr.14520) 316 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: cluster 2026-03-09T16:01:36.748724+0000 mgr.y (mgr.14520) 316 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: audit 2026-03-09T16:01:37.075872+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: audit 2026-03-09T16:01:37.075872+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: cluster 2026-03-09T16:01:37.078881+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T16:01:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: cluster 2026-03-09T16:01:37.078881+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T16:01:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: audit 2026-03-09T16:01:37.089037+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: audit 2026-03-09T16:01:37.089037+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: audit 2026-03-09T16:01:37.090210+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:38 vm01 bash[28152]: audit 2026-03-09T16:01:37.090210+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: audit 2026-03-09T16:01:36.473967+0000 mgr.y (mgr.14520) 315 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: audit 2026-03-09T16:01:36.473967+0000 mgr.y (mgr.14520) 315 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: cluster 2026-03-09T16:01:36.748724+0000 mgr.y (mgr.14520) 316 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: cluster 2026-03-09T16:01:36.748724+0000 mgr.y (mgr.14520) 316 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: audit 2026-03-09T16:01:37.075872+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: audit 2026-03-09T16:01:37.075872+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: cluster 2026-03-09T16:01:37.078881+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: cluster 2026-03-09T16:01:37.078881+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: audit 2026-03-09T16:01:37.089037+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: audit 2026-03-09T16:01:37.089037+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: audit 2026-03-09T16:01:37.090210+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:38 vm01 bash[20728]: audit 2026-03-09T16:01:37.090210+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: audit 2026-03-09T16:01:38.082233+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: audit 2026-03-09T16:01:38.082233+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: cluster 2026-03-09T16:01:38.089608+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: cluster 2026-03-09T16:01:38.089608+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: audit 2026-03-09T16:01:38.116381+0000 mon.c (mon.2) 365 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: audit 2026-03-09T16:01:38.116381+0000 mon.c (mon.2) 365 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: audit 2026-03-09T16:01:38.116707+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: audit 2026-03-09T16:01:38.116707+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: audit 2026-03-09T16:01:38.117201+0000 mon.c (mon.2) 366 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: audit 2026-03-09T16:01:38.117201+0000 mon.c (mon.2) 366 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: audit 2026-03-09T16:01:38.117410+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: audit 2026-03-09T16:01:38.117410+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: cluster 2026-03-09T16:01:38.749141+0000 mgr.y (mgr.14520) 317 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:01:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:39 vm09 bash[22983]: cluster 2026-03-09T16:01:38.749141+0000 mgr.y (mgr.14520) 317 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:01:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: audit 2026-03-09T16:01:38.082233+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: audit 2026-03-09T16:01:38.082233+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: cluster 2026-03-09T16:01:38.089608+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T16:01:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: cluster 2026-03-09T16:01:38.089608+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T16:01:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: audit 2026-03-09T16:01:38.116381+0000 mon.c (mon.2) 365 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: audit 2026-03-09T16:01:38.116381+0000 mon.c (mon.2) 365 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: audit 2026-03-09T16:01:38.116707+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: audit 2026-03-09T16:01:38.116707+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: audit 2026-03-09T16:01:38.117201+0000 mon.c (mon.2) 366 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: audit 2026-03-09T16:01:38.117201+0000 mon.c (mon.2) 366 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: audit 2026-03-09T16:01:38.117410+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: audit 2026-03-09T16:01:38.117410+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: cluster 2026-03-09T16:01:38.749141+0000 mgr.y (mgr.14520) 317 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:39 vm01 bash[28152]: cluster 2026-03-09T16:01:38.749141+0000 mgr.y (mgr.14520) 317 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: audit 2026-03-09T16:01:38.082233+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: audit 2026-03-09T16:01:38.082233+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]': finished 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: cluster 2026-03-09T16:01:38.089608+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: cluster 2026-03-09T16:01:38.089608+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: audit 2026-03-09T16:01:38.116381+0000 mon.c (mon.2) 365 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: audit 2026-03-09T16:01:38.116381+0000 mon.c (mon.2) 365 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: audit 2026-03-09T16:01:38.116707+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: audit 2026-03-09T16:01:38.116707+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: audit 2026-03-09T16:01:38.117201+0000 mon.c (mon.2) 366 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: audit 2026-03-09T16:01:38.117201+0000 mon.c (mon.2) 366 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: audit 2026-03-09T16:01:38.117410+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: audit 2026-03-09T16:01:38.117410+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-51"}]: dispatch 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: cluster 2026-03-09T16:01:38.749141+0000 mgr.y (mgr.14520) 317 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:01:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:39 vm01 bash[20728]: cluster 2026-03-09T16:01:38.749141+0000 mgr.y (mgr.14520) 317 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 829 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:01:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:40 vm09 bash[22983]: cluster 2026-03-09T16:01:39.082164+0000 mon.a (mon.0) 2696 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:01:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:40 vm09 bash[22983]: cluster 2026-03-09T16:01:39.082164+0000 mon.a (mon.0) 2696 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:01:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:40 vm09 bash[22983]: cluster 2026-03-09T16:01:39.149410+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T16:01:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:40 vm09 bash[22983]: cluster 2026-03-09T16:01:39.149410+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T16:01:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:40 vm01 bash[28152]: cluster 2026-03-09T16:01:39.082164+0000 mon.a (mon.0) 2696 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:01:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:40 vm01 bash[28152]: cluster 2026-03-09T16:01:39.082164+0000 mon.a (mon.0) 2696 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:01:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:40 vm01 bash[28152]: cluster 2026-03-09T16:01:39.149410+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T16:01:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:40 vm01 bash[28152]: cluster 2026-03-09T16:01:39.149410+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T16:01:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:40 vm01 bash[20728]: cluster 2026-03-09T16:01:39.082164+0000 mon.a (mon.0) 2696 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:01:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:40 vm01 bash[20728]: cluster 2026-03-09T16:01:39.082164+0000 mon.a (mon.0) 2696 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:01:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:40 vm01 bash[20728]: cluster 2026-03-09T16:01:39.149410+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T16:01:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:40 vm01 bash[20728]: cluster 2026-03-09T16:01:39.149410+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T16:01:41.383 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: cluster 2026-03-09T16:01:40.098127+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T16:01:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: cluster 2026-03-09T16:01:40.098127+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T16:01:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: audit 2026-03-09T16:01:40.108280+0000 mon.c (mon.2) 367 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: audit 2026-03-09T16:01:40.108280+0000 mon.c (mon.2) 367 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: audit 2026-03-09T16:01:40.115420+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: audit 2026-03-09T16:01:40.115420+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: cluster 2026-03-09T16:01:40.749477+0000 mgr.y (mgr.14520) 318 : cluster [DBG] pgmap v507: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: cluster 2026-03-09T16:01:40.749477+0000 mgr.y (mgr.14520) 318 : cluster [DBG] pgmap v507: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: audit 2026-03-09T16:01:41.098203+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: audit 2026-03-09T16:01:41.098203+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: cluster 2026-03-09T16:01:41.105618+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T16:01:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:41 vm09 bash[22983]: cluster 2026-03-09T16:01:41.105618+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: cluster 2026-03-09T16:01:40.098127+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: cluster 2026-03-09T16:01:40.098127+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: audit 2026-03-09T16:01:40.108280+0000 mon.c (mon.2) 367 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: audit 2026-03-09T16:01:40.108280+0000 mon.c (mon.2) 367 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: audit 2026-03-09T16:01:40.115420+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: audit 2026-03-09T16:01:40.115420+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: cluster 2026-03-09T16:01:40.749477+0000 mgr.y (mgr.14520) 318 : cluster [DBG] pgmap v507: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: cluster 2026-03-09T16:01:40.749477+0000 mgr.y (mgr.14520) 318 : cluster [DBG] pgmap v507: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: audit 2026-03-09T16:01:41.098203+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: audit 2026-03-09T16:01:41.098203+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: cluster 2026-03-09T16:01:41.105618+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T16:01:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:41 vm01 bash[28152]: cluster 2026-03-09T16:01:41.105618+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: cluster 2026-03-09T16:01:40.098127+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: cluster 2026-03-09T16:01:40.098127+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: audit 2026-03-09T16:01:40.108280+0000 mon.c (mon.2) 367 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: audit 2026-03-09T16:01:40.108280+0000 mon.c (mon.2) 367 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: audit 2026-03-09T16:01:40.115420+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: audit 2026-03-09T16:01:40.115420+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: cluster 2026-03-09T16:01:40.749477+0000 mgr.y (mgr.14520) 318 : cluster [DBG] pgmap v507: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: cluster 2026-03-09T16:01:40.749477+0000 mgr.y (mgr.14520) 318 : cluster [DBG] pgmap v507: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: audit 2026-03-09T16:01:41.098203+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: audit 2026-03-09T16:01:41.098203+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: cluster 2026-03-09T16:01:41.105618+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T16:01:41.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:41 vm01 bash[20728]: cluster 2026-03-09T16:01:41.105618+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:42 vm01 bash[28152]: cluster 2026-03-09T16:01:41.111403+0000 mon.a (mon.0) 2702 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:42 vm01 bash[28152]: cluster 2026-03-09T16:01:41.111403+0000 mon.a (mon.0) 2702 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:42 vm01 bash[28152]: audit 2026-03-09T16:01:41.163723+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:42 vm01 bash[28152]: audit 2026-03-09T16:01:41.163723+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:42 vm01 bash[28152]: audit 2026-03-09T16:01:41.164093+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:42 vm01 bash[28152]: audit 2026-03-09T16:01:41.164093+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:42 vm01 bash[28152]: audit 2026-03-09T16:01:41.164513+0000 mon.c (mon.2) 369 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:42 vm01 bash[28152]: audit 2026-03-09T16:01:41.164513+0000 mon.c (mon.2) 369 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:42 vm01 bash[28152]: audit 2026-03-09T16:01:41.164822+0000 mon.a (mon.0) 2704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:42 vm01 bash[28152]: audit 2026-03-09T16:01:41.164822+0000 mon.a (mon.0) 2704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:42 vm01 bash[20728]: cluster 2026-03-09T16:01:41.111403+0000 mon.a (mon.0) 2702 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:42 vm01 bash[20728]: cluster 2026-03-09T16:01:41.111403+0000 mon.a (mon.0) 2702 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:42 vm01 bash[20728]: audit 2026-03-09T16:01:41.163723+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:42 vm01 bash[20728]: audit 2026-03-09T16:01:41.163723+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:42 vm01 bash[20728]: audit 2026-03-09T16:01:41.164093+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:42 vm01 bash[20728]: audit 2026-03-09T16:01:41.164093+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:42 vm01 bash[20728]: audit 2026-03-09T16:01:41.164513+0000 mon.c (mon.2) 369 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:42 vm01 bash[20728]: audit 2026-03-09T16:01:41.164513+0000 mon.c (mon.2) 369 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:42 vm01 bash[20728]: audit 2026-03-09T16:01:41.164822+0000 mon.a (mon.0) 2704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:42 vm01 bash[20728]: audit 2026-03-09T16:01:41.164822+0000 mon.a (mon.0) 2704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:42 vm09 bash[22983]: cluster 2026-03-09T16:01:41.111403+0000 mon.a (mon.0) 2702 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:42 vm09 bash[22983]: cluster 2026-03-09T16:01:41.111403+0000 mon.a (mon.0) 2702 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:42 vm09 bash[22983]: audit 2026-03-09T16:01:41.163723+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:42 vm09 bash[22983]: audit 2026-03-09T16:01:41.163723+0000 mon.c (mon.2) 368 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:42 vm09 bash[22983]: audit 2026-03-09T16:01:41.164093+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:42 vm09 bash[22983]: audit 2026-03-09T16:01:41.164093+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:42 vm09 bash[22983]: audit 2026-03-09T16:01:41.164513+0000 mon.c (mon.2) 369 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:42 vm09 bash[22983]: audit 2026-03-09T16:01:41.164513+0000 mon.c (mon.2) 369 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:42 vm09 bash[22983]: audit 2026-03-09T16:01:41.164822+0000 mon.a (mon.0) 2704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:42 vm09 bash[22983]: audit 2026-03-09T16:01:41.164822+0000 mon.a (mon.0) 2704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-53"}]: dispatch 2026-03-09T16:01:43.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:01:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:01:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:01:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:43 vm09 bash[22983]: cluster 2026-03-09T16:01:42.134031+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T16:01:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:43 vm09 bash[22983]: cluster 2026-03-09T16:01:42.134031+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T16:01:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:43 vm09 bash[22983]: cluster 2026-03-09T16:01:42.749791+0000 mgr.y (mgr.14520) 319 : cluster [DBG] pgmap v510: 260 pgs: 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:43 vm09 bash[22983]: cluster 2026-03-09T16:01:42.749791+0000 mgr.y (mgr.14520) 319 : cluster [DBG] pgmap v510: 260 pgs: 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:43 vm01 bash[20728]: cluster 2026-03-09T16:01:42.134031+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T16:01:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:43 vm01 bash[20728]: cluster 2026-03-09T16:01:42.134031+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T16:01:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:43 vm01 bash[20728]: cluster 2026-03-09T16:01:42.749791+0000 mgr.y (mgr.14520) 319 : cluster [DBG] pgmap v510: 260 pgs: 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:43 vm01 bash[20728]: cluster 2026-03-09T16:01:42.749791+0000 mgr.y (mgr.14520) 319 : cluster [DBG] pgmap v510: 260 pgs: 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:43 vm01 bash[28152]: cluster 2026-03-09T16:01:42.134031+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T16:01:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:43 vm01 bash[28152]: cluster 2026-03-09T16:01:42.134031+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T16:01:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:43 vm01 bash[28152]: cluster 
2026-03-09T16:01:42.749791+0000 mgr.y (mgr.14520) 319 : cluster [DBG] pgmap v510: 260 pgs: 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:43 vm01 bash[28152]: cluster 2026-03-09T16:01:42.749791+0000 mgr.y (mgr.14520) 319 : cluster [DBG] pgmap v510: 260 pgs: 260 active+clean; 8.3 MiB data, 865 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: cluster 2026-03-09T16:01:43.169957+0000 mon.a (mon.0) 2706 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: cluster 2026-03-09T16:01:43.169957+0000 mon.a (mon.0) 2706 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: audit 2026-03-09T16:01:43.171075+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: audit 2026-03-09T16:01:43.171075+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: audit 2026-03-09T16:01:43.189618+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: audit 2026-03-09T16:01:43.189618+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: audit 2026-03-09T16:01:44.169441+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: audit 2026-03-09T16:01:44.169441+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: cluster 2026-03-09T16:01:44.179030+0000 mon.a (mon.0) 2709 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: cluster 2026-03-09T16:01:44.179030+0000 mon.a (mon.0) 2709 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: audit 2026-03-09T16:01:44.179530+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:44 vm09 bash[22983]: audit 2026-03-09T16:01:44.179530+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: cluster 2026-03-09T16:01:43.169957+0000 mon.a (mon.0) 2706 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T16:01:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: cluster 2026-03-09T16:01:43.169957+0000 mon.a (mon.0) 2706 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T16:01:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: audit 2026-03-09T16:01:43.171075+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: audit 2026-03-09T16:01:43.171075+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: audit 2026-03-09T16:01:43.189618+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: audit 2026-03-09T16:01:43.189618+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: audit 2026-03-09T16:01:44.169441+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: audit 2026-03-09T16:01:44.169441+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: cluster 2026-03-09T16:01:44.179030+0000 mon.a (mon.0) 2709 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: cluster 2026-03-09T16:01:44.179030+0000 mon.a (mon.0) 2709 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: audit 2026-03-09T16:01:44.179530+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:44 vm01 bash[28152]: audit 2026-03-09T16:01:44.179530+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: cluster 2026-03-09T16:01:43.169957+0000 mon.a (mon.0) 2706 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: cluster 2026-03-09T16:01:43.169957+0000 mon.a (mon.0) 2706 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: audit 2026-03-09T16:01:43.171075+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: audit 2026-03-09T16:01:43.171075+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: audit 2026-03-09T16:01:43.189618+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: audit 2026-03-09T16:01:43.189618+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: audit 2026-03-09T16:01:44.169441+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: audit 2026-03-09T16:01:44.169441+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: cluster 2026-03-09T16:01:44.179030+0000 mon.a (mon.0) 2709 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: cluster 2026-03-09T16:01:44.179030+0000 mon.a (mon.0) 2709 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: audit 2026-03-09T16:01:44.179530+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:44 vm01 bash[20728]: audit 2026-03-09T16:01:44.179530+0000 mon.c (mon.2) 371 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.234029+0000 mon.a (mon.0) 2710 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.234029+0000 mon.a (mon.0) 2710 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.235243+0000 mon.a (mon.0) 2711 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.235243+0000 mon.a (mon.0) 2711 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.520547+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.520547+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.520817+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.520817+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.521741+0000 mon.c (mon.2) 373 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.521741+0000 mon.c (mon.2) 373 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.521966+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: audit 2026-03-09T16:01:44.521966+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: cluster 2026-03-09T16:01:44.750232+0000 mgr.y (mgr.14520) 320 : cluster [DBG] pgmap v513: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:45 vm09 bash[22983]: cluster 2026-03-09T16:01:44.750232+0000 mgr.y (mgr.14520) 320 : cluster [DBG] pgmap v513: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.234029+0000 mon.a (mon.0) 2710 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.234029+0000 mon.a (mon.0) 2710 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.235243+0000 mon.a (mon.0) 2711 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.235243+0000 mon.a (mon.0) 2711 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.520547+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.520547+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.520817+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.520817+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.521741+0000 mon.c (mon.2) 373 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.521741+0000 mon.c (mon.2) 373 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.521966+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: audit 2026-03-09T16:01:44.521966+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: cluster 2026-03-09T16:01:44.750232+0000 mgr.y (mgr.14520) 320 : cluster [DBG] pgmap v513: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:45 vm01 bash[28152]: cluster 2026-03-09T16:01:44.750232+0000 mgr.y (mgr.14520) 320 : cluster [DBG] pgmap v513: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.234029+0000 mon.a (mon.0) 2710 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.234029+0000 mon.a (mon.0) 2710 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.235243+0000 mon.a (mon.0) 2711 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.235243+0000 mon.a (mon.0) 2711 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.520547+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.520547+0000 mon.c (mon.2) 372 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.520817+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.520817+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.521741+0000 mon.c (mon.2) 373 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.521741+0000 mon.c (mon.2) 373 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.521966+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: audit 2026-03-09T16:01:44.521966+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-55"}]: dispatch 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: cluster 2026-03-09T16:01:44.750232+0000 mgr.y (mgr.14520) 320 : cluster [DBG] pgmap v513: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:45 vm01 bash[20728]: cluster 2026-03-09T16:01:44.750232+0000 mgr.y (mgr.14520) 320 : cluster [DBG] pgmap v513: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:46 vm09 bash[22983]: cluster 2026-03-09T16:01:45.244568+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T16:01:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:46 vm09 bash[22983]: cluster 2026-03-09T16:01:45.244568+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T16:01:46.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:01:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:01:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:46 vm01 bash[28152]: cluster 2026-03-09T16:01:45.244568+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T16:01:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:46 vm01 bash[28152]: cluster 2026-03-09T16:01:45.244568+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T16:01:46.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:46 vm01 bash[20728]: cluster 2026-03-09T16:01:45.244568+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T16:01:46.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:46 vm01 bash[20728]: cluster 2026-03-09T16:01:45.244568+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T16:01:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:47 vm09 bash[22983]: cluster 2026-03-09T16:01:46.299806+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T16:01:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:47 vm09 bash[22983]: cluster 
2026-03-09T16:01:46.299806+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T16:01:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:47 vm09 bash[22983]: audit 2026-03-09T16:01:46.301545+0000 mon.c (mon.2) 374 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:47 vm09 bash[22983]: audit 2026-03-09T16:01:46.301545+0000 mon.c (mon.2) 374 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:47 vm09 bash[22983]: audit 2026-03-09T16:01:46.308994+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:47 vm09 bash[22983]: audit 2026-03-09T16:01:46.308994+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:47 vm09 bash[22983]: audit 2026-03-09T16:01:46.483288+0000 mgr.y (mgr.14520) 321 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:47 vm09 bash[22983]: audit 2026-03-09T16:01:46.483288+0000 mgr.y (mgr.14520) 321 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:47 vm09 bash[22983]: cluster 2026-03-09T16:01:46.750579+0000 mgr.y (mgr.14520) 322 : cluster [DBG] pgmap v516: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:47.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:47 vm09 bash[22983]: cluster 2026-03-09T16:01:46.750579+0000 mgr.y (mgr.14520) 322 : cluster [DBG] pgmap v516: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:47.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:47 vm01 bash[28152]: cluster 2026-03-09T16:01:46.299806+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T16:01:47.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:47 vm01 bash[28152]: cluster 2026-03-09T16:01:46.299806+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T16:01:47.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:47 vm01 bash[28152]: audit 2026-03-09T16:01:46.301545+0000 mon.c (mon.2) 374 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:47 vm01 bash[28152]: audit 2026-03-09T16:01:46.301545+0000 mon.c (mon.2) 374 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:47 vm01 bash[28152]: audit 2026-03-09T16:01:46.308994+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:47 vm01 bash[28152]: audit 2026-03-09T16:01:46.308994+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:47 vm01 bash[28152]: audit 2026-03-09T16:01:46.483288+0000 mgr.y (mgr.14520) 321 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:47.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:47 vm01 bash[28152]: audit 2026-03-09T16:01:46.483288+0000 mgr.y (mgr.14520) 321 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:47.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:47 vm01 bash[28152]: cluster 2026-03-09T16:01:46.750579+0000 mgr.y (mgr.14520) 322 : cluster [DBG] pgmap v516: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:47.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:47 vm01 bash[28152]: cluster 2026-03-09T16:01:46.750579+0000 mgr.y (mgr.14520) 322 : cluster [DBG] pgmap v516: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:47.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:47 vm01 bash[20728]: cluster 2026-03-09T16:01:46.299806+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T16:01:47.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:47 vm01 bash[20728]: cluster 2026-03-09T16:01:46.299806+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-09T16:01:47.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:47 vm01 bash[20728]: audit 2026-03-09T16:01:46.301545+0000 mon.c (mon.2) 374 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:47 vm01 bash[20728]: audit 2026-03-09T16:01:46.301545+0000 mon.c (mon.2) 374 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:47 vm01 bash[20728]: audit 2026-03-09T16:01:46.308994+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:47 vm01 bash[20728]: audit 2026-03-09T16:01:46.308994+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:47.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:47 vm01 bash[20728]: audit 2026-03-09T16:01:46.483288+0000 mgr.y (mgr.14520) 321 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:47.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:47 vm01 bash[20728]: audit 2026-03-09T16:01:46.483288+0000 mgr.y (mgr.14520) 321 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:47.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:47 vm01 bash[20728]: cluster 2026-03-09T16:01:46.750579+0000 mgr.y (mgr.14520) 322 : cluster [DBG] pgmap v516: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:47.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:47 vm01 bash[20728]: cluster 2026-03-09T16:01:46.750579+0000 mgr.y (mgr.14520) 322 : cluster [DBG] pgmap v516: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: cluster 2026-03-09T16:01:47.451423+0000 mon.a (mon.0) 2717 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: cluster 2026-03-09T16:01:47.451423+0000 mon.a (mon.0) 2717 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: audit 2026-03-09T16:01:47.453888+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: audit 2026-03-09T16:01:47.453888+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: cluster 2026-03-09T16:01:47.471273+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: cluster 2026-03-09T16:01:47.471273+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: audit 2026-03-09T16:01:47.526463+0000 mon.c (mon.2) 375 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: audit 2026-03-09T16:01:47.526463+0000 mon.c (mon.2) 375 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: audit 2026-03-09T16:01:47.526754+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: audit 2026-03-09T16:01:47.526754+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: audit 2026-03-09T16:01:47.527261+0000 mon.c (mon.2) 376 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: audit 2026-03-09T16:01:47.527261+0000 mon.c (mon.2) 376 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: audit 2026-03-09T16:01:47.527488+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:48.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:48 vm09 bash[22983]: audit 2026-03-09T16:01:47.527488+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: cluster 2026-03-09T16:01:47.451423+0000 mon.a (mon.0) 2717 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: cluster 2026-03-09T16:01:47.451423+0000 mon.a (mon.0) 2717 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: audit 2026-03-09T16:01:47.453888+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: audit 2026-03-09T16:01:47.453888+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: cluster 2026-03-09T16:01:47.471273+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: cluster 2026-03-09T16:01:47.471273+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: audit 2026-03-09T16:01:47.526463+0000 mon.c (mon.2) 375 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: audit 2026-03-09T16:01:47.526463+0000 mon.c (mon.2) 375 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: audit 2026-03-09T16:01:47.526754+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: audit 2026-03-09T16:01:47.526754+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: audit 2026-03-09T16:01:47.527261+0000 mon.c (mon.2) 376 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: audit 2026-03-09T16:01:47.527261+0000 mon.c (mon.2) 376 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: audit 2026-03-09T16:01:47.527488+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:48 vm01 bash[28152]: audit 2026-03-09T16:01:47.527488+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: cluster 2026-03-09T16:01:47.451423+0000 mon.a (mon.0) 2717 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:48.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: cluster 2026-03-09T16:01:47.451423+0000 mon.a (mon.0) 2717 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: audit 2026-03-09T16:01:47.453888+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: audit 2026-03-09T16:01:47.453888+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: cluster 2026-03-09T16:01:47.471273+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: cluster 2026-03-09T16:01:47.471273+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: audit 2026-03-09T16:01:47.526463+0000 mon.c (mon.2) 375 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: audit 2026-03-09T16:01:47.526463+0000 mon.c (mon.2) 375 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: audit 2026-03-09T16:01:47.526754+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: audit 2026-03-09T16:01:47.526754+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: audit 2026-03-09T16:01:47.527261+0000 mon.c (mon.2) 376 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: audit 2026-03-09T16:01:47.527261+0000 mon.c (mon.2) 376 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: audit 2026-03-09T16:01:47.527488+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:48.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:48 vm01 bash[20728]: audit 2026-03-09T16:01:47.527488+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-57"}]: dispatch 2026-03-09T16:01:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:49 vm09 bash[22983]: cluster 2026-03-09T16:01:48.539256+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T16:01:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:49 vm09 bash[22983]: cluster 2026-03-09T16:01:48.539256+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T16:01:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:49 vm09 bash[22983]: cluster 2026-03-09T16:01:48.751013+0000 mgr.y (mgr.14520) 323 : cluster [DBG] pgmap v519: 260 pgs: 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:01:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:49 vm09 bash[22983]: cluster 2026-03-09T16:01:48.751013+0000 mgr.y (mgr.14520) 323 : cluster [DBG] pgmap v519: 260 pgs: 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:01:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:49 vm01 bash[28152]: cluster 2026-03-09T16:01:48.539256+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T16:01:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:49 vm01 bash[28152]: cluster 2026-03-09T16:01:48.539256+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T16:01:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:49 vm01 bash[28152]: cluster 2026-03-09T16:01:48.751013+0000 mgr.y (mgr.14520) 323 : cluster [DBG] pgmap v519: 260 pgs: 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:01:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:49 vm01 bash[28152]: cluster 2026-03-09T16:01:48.751013+0000 mgr.y (mgr.14520) 323 : cluster [DBG] pgmap v519: 260 pgs: 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:01:49.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:49 vm01 bash[20728]: cluster 2026-03-09T16:01:48.539256+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T16:01:49.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:49 vm01 bash[20728]: cluster 2026-03-09T16:01:48.539256+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-09T16:01:49.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:49 vm01 bash[20728]: cluster 2026-03-09T16:01:48.751013+0000 mgr.y (mgr.14520) 323 : cluster [DBG] pgmap v519: 260 pgs: 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:01:49.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:49 vm01 bash[20728]: cluster 2026-03-09T16:01:48.751013+0000 mgr.y (mgr.14520) 323 : cluster [DBG] pgmap v519: 260 pgs: 260 active+clean; 8.3 MiB data, 853 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:01:50.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:50 vm09 bash[22983]: cluster 2026-03-09T16:01:49.597942+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T16:01:50.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:50 vm09 bash[22983]: cluster 2026-03-09T16:01:49.597942+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T16:01:50.883 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:50 vm09 bash[22983]: audit 2026-03-09T16:01:49.598541+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:50.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:50 vm09 bash[22983]: audit 2026-03-09T16:01:49.598541+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:50.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:50 vm09 bash[22983]: audit 2026-03-09T16:01:49.602132+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:50.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:50 vm09 bash[22983]: audit 2026-03-09T16:01:49.602132+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:50.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:50 vm01 bash[28152]: cluster 2026-03-09T16:01:49.597942+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T16:01:50.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:50 vm01 bash[28152]: cluster 2026-03-09T16:01:49.597942+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T16:01:50.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:50 vm01 bash[28152]: audit 2026-03-09T16:01:49.598541+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:50.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:50 vm01 bash[28152]: audit 2026-03-09T16:01:49.598541+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:50.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:50 vm01 bash[28152]: audit 2026-03-09T16:01:49.602132+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:50.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:50 vm01 bash[28152]: audit 2026-03-09T16:01:49.602132+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:50.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:50 vm01 bash[20728]: cluster 2026-03-09T16:01:49.597942+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T16:01:50.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:50 vm01 bash[20728]: cluster 2026-03-09T16:01:49.597942+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-09T16:01:50.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:50 vm01 bash[20728]: audit 2026-03-09T16:01:49.598541+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:50.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:50 vm01 bash[20728]: audit 2026-03-09T16:01:49.598541+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:50.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:50 vm01 bash[20728]: audit 2026-03-09T16:01:49.602132+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:50.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:50 vm01 bash[20728]: audit 2026-03-09T16:01:49.602132+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.572945+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.572945+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.583182+0000 mon.c (mon.2) 378 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.583182+0000 mon.c (mon.2) 378 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: cluster 2026-03-09T16:01:50.592690+0000 mon.a (mon.0) 2726 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: cluster 2026-03-09T16:01:50.592690+0000 mon.a (mon.0) 2726 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.652265+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.652265+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.652571+0000 mon.a (mon.0) 2727 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.652571+0000 mon.a (mon.0) 2727 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.652911+0000 mon.c (mon.2) 380 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.652911+0000 mon.c (mon.2) 380 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.653664+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: audit 2026-03-09T16:01:50.653664+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: cluster 2026-03-09T16:01:50.751320+0000 mgr.y (mgr.14520) 324 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: cluster 2026-03-09T16:01:50.751320+0000 mgr.y (mgr.14520) 324 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: cluster 2026-03-09T16:01:51.578399+0000 mon.a (mon.0) 2729 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T16:01:51.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:51 vm09 bash[22983]: cluster 2026-03-09T16:01:51.578399+0000 mon.a (mon.0) 2729 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.572945+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.572945+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.583182+0000 mon.c (mon.2) 378 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.583182+0000 mon.c (mon.2) 378 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: cluster 2026-03-09T16:01:50.592690+0000 mon.a (mon.0) 2726 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: cluster 2026-03-09T16:01:50.592690+0000 mon.a (mon.0) 2726 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.652265+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.652265+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.652571+0000 mon.a (mon.0) 2727 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.652571+0000 mon.a (mon.0) 2727 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.652911+0000 mon.c (mon.2) 380 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.652911+0000 mon.c (mon.2) 380 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.653664+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: audit 2026-03-09T16:01:50.653664+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: cluster 2026-03-09T16:01:50.751320+0000 mgr.y (mgr.14520) 324 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: cluster 2026-03-09T16:01:50.751320+0000 mgr.y (mgr.14520) 324 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: cluster 2026-03-09T16:01:51.578399+0000 mon.a (mon.0) 2729 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:51 vm01 bash[28152]: cluster 2026-03-09T16:01:51.578399+0000 mon.a (mon.0) 2729 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.572945+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.572945+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.583182+0000 mon.c (mon.2) 378 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.583182+0000 mon.c (mon.2) 378 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: cluster 2026-03-09T16:01:50.592690+0000 mon.a (mon.0) 2726 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: cluster 2026-03-09T16:01:50.592690+0000 mon.a (mon.0) 2726 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.652265+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.652265+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.652571+0000 mon.a (mon.0) 2727 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.652571+0000 mon.a (mon.0) 2727 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.652911+0000 mon.c (mon.2) 380 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.652911+0000 mon.c (mon.2) 380 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.653664+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: audit 2026-03-09T16:01:50.653664+0000 mon.a (mon.0) 2728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-59"}]: dispatch 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: cluster 2026-03-09T16:01:50.751320+0000 mgr.y (mgr.14520) 324 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: cluster 2026-03-09T16:01:50.751320+0000 mgr.y (mgr.14520) 324 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: cluster 2026-03-09T16:01:51.578399+0000 mon.a (mon.0) 2729 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T16:01:51.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:51 vm01 bash[20728]: cluster 2026-03-09T16:01:51.578399+0000 mon.a (mon.0) 2729 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-09T16:01:53.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:01:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:01:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:01:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:53 vm09 bash[22983]: cluster 2026-03-09T16:01:52.751670+0000 mgr.y (mgr.14520) 325 : cluster [DBG] pgmap v524: 260 pgs: 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:53 vm09 bash[22983]: cluster 2026-03-09T16:01:52.751670+0000 mgr.y (mgr.14520) 325 : cluster [DBG] pgmap v524: 260 pgs: 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:53 vm09 bash[22983]: cluster 2026-03-09T16:01:52.762590+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T16:01:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:53 vm09 bash[22983]: cluster 2026-03-09T16:01:52.762590+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T16:01:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:53 vm09 bash[22983]: audit 2026-03-09T16:01:52.779992+0000 mon.c (mon.2) 381 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:53 vm09 bash[22983]: audit 2026-03-09T16:01:52.779992+0000 mon.c (mon.2) 381 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:53 vm09 bash[22983]: audit 2026-03-09T16:01:52.780262+0000 mon.a (mon.0) 2731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:53 vm09 bash[22983]: audit 2026-03-09T16:01:52.780262+0000 mon.a (mon.0) 2731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:53 vm01 bash[28152]: cluster 2026-03-09T16:01:52.751670+0000 mgr.y (mgr.14520) 325 : cluster [DBG] pgmap v524: 260 pgs: 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:53 vm01 bash[28152]: cluster 2026-03-09T16:01:52.751670+0000 mgr.y (mgr.14520) 325 : cluster [DBG] pgmap v524: 260 pgs: 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:53 vm01 bash[28152]: cluster 2026-03-09T16:01:52.762590+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:53 vm01 bash[28152]: cluster 2026-03-09T16:01:52.762590+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:53 vm01 bash[28152]: audit 2026-03-09T16:01:52.779992+0000 mon.c (mon.2) 381 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:53 vm01 bash[28152]: audit 2026-03-09T16:01:52.779992+0000 mon.c (mon.2) 381 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:53 vm01 bash[28152]: audit 2026-03-09T16:01:52.780262+0000 mon.a (mon.0) 2731 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:53 vm01 bash[28152]: audit 2026-03-09T16:01:52.780262+0000 mon.a (mon.0) 2731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:53 vm01 bash[20728]: cluster 2026-03-09T16:01:52.751670+0000 mgr.y (mgr.14520) 325 : cluster [DBG] pgmap v524: 260 pgs: 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:53 vm01 bash[20728]: cluster 2026-03-09T16:01:52.751670+0000 mgr.y (mgr.14520) 325 : cluster [DBG] pgmap v524: 260 pgs: 260 active+clean; 8.3 MiB data, 876 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:53 vm01 bash[20728]: cluster 2026-03-09T16:01:52.762590+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:53 vm01 bash[20728]: cluster 2026-03-09T16:01:52.762590+0000 mon.a (mon.0) 2730 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:53 vm01 bash[20728]: audit 2026-03-09T16:01:52.779992+0000 mon.c (mon.2) 381 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:53 vm01 bash[20728]: audit 2026-03-09T16:01:52.779992+0000 mon.c (mon.2) 381 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:53 vm01 bash[20728]: audit 2026-03-09T16:01:52.780262+0000 mon.a (mon.0) 2731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:53 vm01 bash[20728]: audit 2026-03-09T16:01:52.780262+0000 mon.a (mon.0) 2731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: audit 2026-03-09T16:01:53.773848+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: audit 2026-03-09T16:01:53.773848+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: audit 2026-03-09T16:01:53.785704+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: audit 2026-03-09T16:01:53.785704+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: audit 2026-03-09T16:01:53.787468+0000 mon.c (mon.2) 383 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: audit 2026-03-09T16:01:53.787468+0000 mon.c (mon.2) 383 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: cluster 2026-03-09T16:01:53.802969+0000 mon.a (mon.0) 2733 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: cluster 2026-03-09T16:01:53.802969+0000 mon.a (mon.0) 2733 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: audit 2026-03-09T16:01:53.817694+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: audit 2026-03-09T16:01:53.817694+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: cluster 2026-03-09T16:01:53.847951+0000 mon.a (mon.0) 2735 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:54 vm09 bash[22983]: cluster 2026-03-09T16:01:53.847951+0000 mon.a (mon.0) 2735 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: audit 2026-03-09T16:01:53.773848+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: audit 2026-03-09T16:01:53.773848+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: audit 2026-03-09T16:01:53.785704+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: audit 2026-03-09T16:01:53.785704+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: audit 2026-03-09T16:01:53.787468+0000 mon.c (mon.2) 383 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: audit 2026-03-09T16:01:53.787468+0000 mon.c (mon.2) 383 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: cluster 2026-03-09T16:01:53.802969+0000 mon.a (mon.0) 2733 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: cluster 2026-03-09T16:01:53.802969+0000 mon.a (mon.0) 2733 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: audit 2026-03-09T16:01:53.817694+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: audit 2026-03-09T16:01:53.817694+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: cluster 2026-03-09T16:01:53.847951+0000 mon.a (mon.0) 2735 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:54 vm01 bash[28152]: cluster 2026-03-09T16:01:53.847951+0000 mon.a (mon.0) 2735 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: audit 2026-03-09T16:01:53.773848+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:55.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: audit 2026-03-09T16:01:53.773848+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: audit 2026-03-09T16:01:53.785704+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: audit 2026-03-09T16:01:53.785704+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: audit 2026-03-09T16:01:53.787468+0000 mon.c (mon.2) 383 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: audit 2026-03-09T16:01:53.787468+0000 mon.c (mon.2) 383 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: cluster 2026-03-09T16:01:53.802969+0000 mon.a (mon.0) 2733 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T16:01:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: cluster 2026-03-09T16:01:53.802969+0000 mon.a (mon.0) 2733 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-09T16:01:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: audit 2026-03-09T16:01:53.817694+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: audit 2026-03-09T16:01:53.817694+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: cluster 2026-03-09T16:01:53.847951+0000 mon.a (mon.0) 2735 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:54 vm01 bash[20728]: cluster 2026-03-09T16:01:53.847951+0000 mon.a (mon.0) 2735 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: cluster 2026-03-09T16:01:54.752405+0000 mgr.y (mgr.14520) 326 : cluster [DBG] pgmap v527: 292 pgs: 13 creating+peering, 279 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 735 B/s wr, 2 op/s 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: cluster 2026-03-09T16:01:54.752405+0000 mgr.y (mgr.14520) 326 : cluster [DBG] pgmap v527: 292 pgs: 13 creating+peering, 279 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 735 B/s wr, 2 op/s 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: audit 2026-03-09T16:01:54.783505+0000 mon.a (mon.0) 2736 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: audit 2026-03-09T16:01:54.783505+0000 mon.a (mon.0) 2736 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: cluster 2026-03-09T16:01:54.806425+0000 mon.a (mon.0) 2737 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: cluster 2026-03-09T16:01:54.806425+0000 mon.a (mon.0) 2737 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: audit 2026-03-09T16:01:54.839321+0000 mon.c (mon.2) 384 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: audit 2026-03-09T16:01:54.839321+0000 mon.c (mon.2) 384 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: audit 2026-03-09T16:01:54.839861+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: audit 2026-03-09T16:01:54.839861+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: audit 2026-03-09T16:01:54.840438+0000 mon.c (mon.2) 385 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: audit 2026-03-09T16:01:54.840438+0000 mon.c (mon.2) 385 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: audit 2026-03-09T16:01:54.841071+0000 mon.a (mon.0) 2739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:55 vm09 bash[22983]: audit 2026-03-09T16:01:54.841071+0000 mon.a (mon.0) 2739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: cluster 2026-03-09T16:01:54.752405+0000 mgr.y (mgr.14520) 326 : cluster [DBG] pgmap v527: 292 pgs: 13 creating+peering, 279 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 735 B/s wr, 2 op/s 2026-03-09T16:01:56.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: cluster 2026-03-09T16:01:54.752405+0000 mgr.y (mgr.14520) 326 : cluster [DBG] pgmap v527: 292 pgs: 13 creating+peering, 279 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 735 B/s wr, 2 op/s 2026-03-09T16:01:56.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: audit 2026-03-09T16:01:54.783505+0000 mon.a (mon.0) 2736 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:56.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: audit 2026-03-09T16:01:54.783505+0000 mon.a (mon.0) 2736 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: cluster 2026-03-09T16:01:54.806425+0000 mon.a (mon.0) 2737 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: cluster 2026-03-09T16:01:54.806425+0000 mon.a (mon.0) 2737 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: audit 2026-03-09T16:01:54.839321+0000 mon.c (mon.2) 384 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: audit 2026-03-09T16:01:54.839321+0000 mon.c (mon.2) 384 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: audit 2026-03-09T16:01:54.839861+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: audit 2026-03-09T16:01:54.839861+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: audit 2026-03-09T16:01:54.840438+0000 mon.c (mon.2) 385 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: audit 2026-03-09T16:01:54.840438+0000 mon.c (mon.2) 385 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: audit 2026-03-09T16:01:54.841071+0000 mon.a (mon.0) 2739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:55 vm01 bash[28152]: audit 2026-03-09T16:01:54.841071+0000 mon.a (mon.0) 2739 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: cluster 2026-03-09T16:01:54.752405+0000 mgr.y (mgr.14520) 326 : cluster [DBG] pgmap v527: 292 pgs: 13 creating+peering, 279 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 735 B/s wr, 2 op/s 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: cluster 2026-03-09T16:01:54.752405+0000 mgr.y (mgr.14520) 326 : cluster [DBG] pgmap v527: 292 pgs: 13 creating+peering, 279 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 735 B/s wr, 2 op/s 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: audit 2026-03-09T16:01:54.783505+0000 mon.a (mon.0) 2736 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: audit 2026-03-09T16:01:54.783505+0000 mon.a (mon.0) 2736 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: cluster 2026-03-09T16:01:54.806425+0000 mon.a (mon.0) 2737 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: cluster 2026-03-09T16:01:54.806425+0000 mon.a (mon.0) 2737 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: audit 2026-03-09T16:01:54.839321+0000 mon.c (mon.2) 384 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: audit 2026-03-09T16:01:54.839321+0000 mon.c (mon.2) 384 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: audit 2026-03-09T16:01:54.839861+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: audit 2026-03-09T16:01:54.839861+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: audit 2026-03-09T16:01:54.840438+0000 mon.c (mon.2) 385 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: audit 2026-03-09T16:01:54.840438+0000 mon.c (mon.2) 385 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: audit 2026-03-09T16:01:54.841071+0000 mon.a (mon.0) 2739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:55 vm01 bash[20728]: audit 2026-03-09T16:01:54.841071+0000 mon.a (mon.0) 2739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-61"}]: dispatch 2026-03-09T16:01:56.815 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:01:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:01:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:56 vm09 bash[22983]: cluster 2026-03-09T16:01:55.791856+0000 mon.a (mon.0) 2740 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T16:01:57.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:56 vm09 bash[22983]: cluster 2026-03-09T16:01:55.791856+0000 mon.a (mon.0) 2740 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T16:01:57.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:56 vm01 bash[28152]: cluster 2026-03-09T16:01:55.791856+0000 mon.a (mon.0) 2740 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T16:01:57.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:56 vm01 bash[28152]: cluster 2026-03-09T16:01:55.791856+0000 mon.a (mon.0) 2740 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T16:01:57.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:56 vm01 bash[20728]: cluster 2026-03-09T16:01:55.791856+0000 mon.a (mon.0) 2740 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T16:01:57.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:56 vm01 bash[20728]: cluster 2026-03-09T16:01:55.791856+0000 mon.a (mon.0) 2740 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-09T16:01:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:57 vm09 bash[22983]: audit 2026-03-09T16:01:56.489422+0000 mgr.y (mgr.14520) 327 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:57 vm09 bash[22983]: audit 2026-03-09T16:01:56.489422+0000 mgr.y (mgr.14520) 327 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:57 vm09 bash[22983]: cluster 2026-03-09T16:01:56.752719+0000 mgr.y (mgr.14520) 328 : cluster [DBG] pgmap v530: 260 pgs: 260 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:01:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:57 vm09 
bash[22983]: cluster 2026-03-09T16:01:56.752719+0000 mgr.y (mgr.14520) 328 : cluster [DBG] pgmap v530: 260 pgs: 260 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:01:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:57 vm09 bash[22983]: cluster 2026-03-09T16:01:56.822866+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T16:01:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:57 vm09 bash[22983]: cluster 2026-03-09T16:01:56.822866+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T16:01:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:57 vm09 bash[22983]: audit 2026-03-09T16:01:56.830608+0000 mon.c (mon.2) 386 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:57 vm09 bash[22983]: audit 2026-03-09T16:01:56.830608+0000 mon.c (mon.2) 386 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:57 vm09 bash[22983]: audit 2026-03-09T16:01:56.831136+0000 mon.a (mon.0) 2742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:57 vm09 bash[22983]: audit 2026-03-09T16:01:56.831136+0000 mon.a (mon.0) 2742 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:57 vm01 bash[28152]: audit 2026-03-09T16:01:56.489422+0000 mgr.y (mgr.14520) 327 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:57 vm01 bash[28152]: audit 2026-03-09T16:01:56.489422+0000 mgr.y (mgr.14520) 327 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:57 vm01 bash[28152]: cluster 2026-03-09T16:01:56.752719+0000 mgr.y (mgr.14520) 328 : cluster [DBG] pgmap v530: 260 pgs: 260 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:01:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:57 vm01 bash[28152]: cluster 2026-03-09T16:01:56.752719+0000 mgr.y (mgr.14520) 328 : cluster [DBG] pgmap v530: 260 pgs: 260 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:01:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:57 vm01 bash[28152]: cluster 2026-03-09T16:01:56.822866+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T16:01:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:57 vm01 bash[28152]: cluster 2026-03-09T16:01:56.822866+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:57 vm01 bash[28152]: audit 2026-03-09T16:01:56.830608+0000 mon.c (mon.2) 386 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:57 vm01 bash[28152]: audit 2026-03-09T16:01:56.830608+0000 mon.c (mon.2) 386 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:57 vm01 bash[28152]: audit 2026-03-09T16:01:56.831136+0000 mon.a (mon.0) 2742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:57 vm01 bash[28152]: audit 2026-03-09T16:01:56.831136+0000 mon.a (mon.0) 2742 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:57 vm01 bash[20728]: audit 2026-03-09T16:01:56.489422+0000 mgr.y (mgr.14520) 327 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:57 vm01 bash[20728]: audit 2026-03-09T16:01:56.489422+0000 mgr.y (mgr.14520) 327 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:57 vm01 bash[20728]: cluster 2026-03-09T16:01:56.752719+0000 mgr.y (mgr.14520) 328 : cluster [DBG] pgmap v530: 260 pgs: 260 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:57 vm01 bash[20728]: cluster 2026-03-09T16:01:56.752719+0000 mgr.y (mgr.14520) 328 : cluster [DBG] pgmap v530: 260 pgs: 260 active+clean; 8.3 MiB data, 884 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:57 vm01 bash[20728]: cluster 2026-03-09T16:01:56.822866+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:57 vm01 bash[20728]: cluster 2026-03-09T16:01:56.822866+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:57 vm01 bash[20728]: audit 2026-03-09T16:01:56.830608+0000 mon.c (mon.2) 386 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:57 vm01 bash[20728]: audit 2026-03-09T16:01:56.830608+0000 mon.c (mon.2) 386 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:57 vm01 bash[20728]: audit 2026-03-09T16:01:56.831136+0000 mon.a (mon.0) 2742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:57 vm01 bash[20728]: audit 2026-03-09T16:01:56.831136+0000 mon.a (mon.0) 2742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: audit 2026-03-09T16:01:57.819985+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: audit 2026-03-09T16:01:57.819985+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: audit 2026-03-09T16:01:57.830336+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: audit 2026-03-09T16:01:57.830336+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: cluster 2026-03-09T16:01:57.831155+0000 mon.a (mon.0) 2744 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: cluster 2026-03-09T16:01:57.831155+0000 mon.a (mon.0) 2744 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: audit 2026-03-09T16:01:57.832751+0000 mon.c (mon.2) 388 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: audit 2026-03-09T16:01:57.832751+0000 mon.c (mon.2) 388 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: audit 2026-03-09T16:01:57.840767+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: audit 2026-03-09T16:01:57.840767+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: audit 2026-03-09T16:01:58.823806+0000 mon.a (mon.0) 2746 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: audit 2026-03-09T16:01:58.823806+0000 mon.a (mon.0) 2746 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: cluster 2026-03-09T16:01:58.830168+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T16:01:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:58 vm09 bash[22983]: cluster 2026-03-09T16:01:58.830168+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: audit 2026-03-09T16:01:57.819985+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: audit 2026-03-09T16:01:57.819985+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: audit 2026-03-09T16:01:57.830336+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: audit 2026-03-09T16:01:57.830336+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: cluster 2026-03-09T16:01:57.831155+0000 mon.a (mon.0) 2744 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: cluster 2026-03-09T16:01:57.831155+0000 mon.a (mon.0) 2744 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: audit 2026-03-09T16:01:57.832751+0000 mon.c (mon.2) 388 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: audit 2026-03-09T16:01:57.832751+0000 mon.c (mon.2) 388 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: audit 2026-03-09T16:01:57.840767+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: audit 2026-03-09T16:01:57.840767+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: audit 2026-03-09T16:01:58.823806+0000 mon.a (mon.0) 2746 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: audit 2026-03-09T16:01:58.823806+0000 mon.a (mon.0) 2746 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: cluster 2026-03-09T16:01:58.830168+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:58 vm01 bash[28152]: cluster 2026-03-09T16:01:58.830168+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: audit 2026-03-09T16:01:57.819985+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: audit 2026-03-09T16:01:57.819985+0000 mon.a (mon.0) 2743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: audit 2026-03-09T16:01:57.830336+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: audit 2026-03-09T16:01:57.830336+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: cluster 2026-03-09T16:01:57.831155+0000 mon.a (mon.0) 2744 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: cluster 2026-03-09T16:01:57.831155+0000 mon.a (mon.0) 2744 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: audit 2026-03-09T16:01:57.832751+0000 mon.c (mon.2) 388 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: audit 2026-03-09T16:01:57.832751+0000 mon.c (mon.2) 388 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: audit 2026-03-09T16:01:57.840767+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: audit 2026-03-09T16:01:57.840767+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: audit 2026-03-09T16:01:58.823806+0000 mon.a (mon.0) 2746 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: audit 2026-03-09T16:01:58.823806+0000 mon.a (mon.0) 2746 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: cluster 2026-03-09T16:01:58.830168+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T16:01:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:58 vm01 bash[20728]: cluster 2026-03-09T16:01:58.830168+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T16:02:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:59 vm09 bash[22983]: cluster 2026-03-09T16:01:58.753214+0000 mgr.y (mgr.14520) 329 : cluster [DBG] pgmap v533: 292 pgs: 23 unknown, 269 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:59 vm09 bash[22983]: cluster 2026-03-09T16:01:58.753214+0000 mgr.y (mgr.14520) 329 : cluster [DBG] pgmap v533: 292 pgs: 23 unknown, 269 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:59 vm09 bash[22983]: audit 2026-03-09T16:01:59.241581+0000 mon.a (mon.0) 2748 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:00.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:01:59 vm09 bash[22983]: audit 2026-03-09T16:01:59.241581+0000 mon.a (mon.0) 2748 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:59 vm01 bash[28152]: cluster 2026-03-09T16:01:58.753214+0000 mgr.y (mgr.14520) 329 : cluster [DBG] pgmap v533: 292 pgs: 23 unknown, 269 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:59 vm01 bash[28152]: cluster 2026-03-09T16:01:58.753214+0000 mgr.y (mgr.14520) 329 : cluster [DBG] pgmap v533: 292 pgs: 23 unknown, 269 
active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:59 vm01 bash[28152]: audit 2026-03-09T16:01:59.241581+0000 mon.a (mon.0) 2748 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:01:59 vm01 bash[28152]: audit 2026-03-09T16:01:59.241581+0000 mon.a (mon.0) 2748 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:59 vm01 bash[20728]: cluster 2026-03-09T16:01:58.753214+0000 mgr.y (mgr.14520) 329 : cluster [DBG] pgmap v533: 292 pgs: 23 unknown, 269 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:59 vm01 bash[20728]: cluster 2026-03-09T16:01:58.753214+0000 mgr.y (mgr.14520) 329 : cluster [DBG] pgmap v533: 292 pgs: 23 unknown, 269 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:59 vm01 bash[20728]: audit 2026-03-09T16:01:59.241581+0000 mon.a (mon.0) 2748 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:01:59 vm01 bash[20728]: audit 2026-03-09T16:01:59.241581+0000 mon.a (mon.0) 2748 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:00 vm01 bash[28152]: cluster 2026-03-09T16:01:59.862408+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T16:02:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:00 vm01 bash[28152]: cluster 2026-03-09T16:01:59.862408+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T16:02:01.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:00 vm01 bash[20728]: cluster 2026-03-09T16:01:59.862408+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T16:02:01.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:00 vm01 bash[20728]: cluster 2026-03-09T16:01:59.862408+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T16:02:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:00 vm09 bash[22983]: cluster 2026-03-09T16:01:59.862408+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T16:02:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:00 vm09 bash[22983]: cluster 2026-03-09T16:01:59.862408+0000 mon.a (mon.0) 2749 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T16:02:02.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:01 vm01 bash[28152]: cluster 2026-03-09T16:02:00.753559+0000 mgr.y (mgr.14520) 330 : cluster [DBG] pgmap v536: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:02.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:01 vm01 bash[28152]: cluster 2026-03-09T16:02:00.753559+0000 mgr.y (mgr.14520) 330 : cluster [DBG] pgmap v536: 292 pgs: 292 active+clean; 8.3 MiB 
data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:02.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:01 vm01 bash[28152]: cluster 2026-03-09T16:02:00.891101+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T16:02:02.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:01 vm01 bash[28152]: cluster 2026-03-09T16:02:00.891101+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T16:02:02.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:01 vm01 bash[20728]: cluster 2026-03-09T16:02:00.753559+0000 mgr.y (mgr.14520) 330 : cluster [DBG] pgmap v536: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:02.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:01 vm01 bash[20728]: cluster 2026-03-09T16:02:00.753559+0000 mgr.y (mgr.14520) 330 : cluster [DBG] pgmap v536: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:02.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:01 vm01 bash[20728]: cluster 2026-03-09T16:02:00.891101+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T16:02:02.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:01 vm01 bash[20728]: cluster 2026-03-09T16:02:00.891101+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T16:02:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:01 vm09 bash[22983]: cluster 2026-03-09T16:02:00.753559+0000 mgr.y (mgr.14520) 330 : cluster [DBG] pgmap v536: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:01 vm09 bash[22983]: cluster 2026-03-09T16:02:00.753559+0000 mgr.y (mgr.14520) 330 : cluster [DBG] pgmap v536: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:01 vm09 bash[22983]: cluster 2026-03-09T16:02:00.891101+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T16:02:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:01 vm09 bash[22983]: cluster 2026-03-09T16:02:00.891101+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T16:02:03.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:02:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:02:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:02:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:03 vm09 bash[22983]: cluster 2026-03-09T16:02:01.934376+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T16:02:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:03 vm09 bash[22983]: cluster 2026-03-09T16:02:01.934376+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T16:02:03.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:03 vm01 bash[28152]: cluster 2026-03-09T16:02:01.934376+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T16:02:03.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:03 vm01 bash[28152]: cluster 2026-03-09T16:02:01.934376+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T16:02:03.926 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:03 vm01 bash[20728]: cluster 2026-03-09T16:02:01.934376+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T16:02:03.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:03 vm01 bash[20728]: cluster 2026-03-09T16:02:01.934376+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T16:02:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:04 vm09 bash[22983]: cluster 2026-03-09T16:02:02.753923+0000 mgr.y (mgr.14520) 331 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:04 vm09 bash[22983]: cluster 2026-03-09T16:02:02.753923+0000 mgr.y (mgr.14520) 331 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:04 vm09 bash[22983]: cluster 2026-03-09T16:02:03.609464+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T16:02:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:04 vm09 bash[22983]: cluster 2026-03-09T16:02:03.609464+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T16:02:04.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:04 vm01 bash[28152]: cluster 2026-03-09T16:02:02.753923+0000 mgr.y (mgr.14520) 331 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:04.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:04 vm01 bash[28152]: cluster 2026-03-09T16:02:02.753923+0000 mgr.y (mgr.14520) 331 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:04.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:04 vm01 bash[28152]: cluster 2026-03-09T16:02:03.609464+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T16:02:04.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:04 vm01 bash[28152]: cluster 2026-03-09T16:02:03.609464+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T16:02:04.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:04 vm01 bash[20728]: cluster 2026-03-09T16:02:02.753923+0000 mgr.y (mgr.14520) 331 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:04.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:04 vm01 bash[20728]: cluster 2026-03-09T16:02:02.753923+0000 mgr.y (mgr.14520) 331 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:02:04.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:04 vm01 bash[20728]: cluster 2026-03-09T16:02:03.609464+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T16:02:04.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:04 vm01 bash[20728]: cluster 2026-03-09T16:02:03.609464+0000 mon.a (mon.0) 2752 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T16:02:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:05 vm09 bash[22983]: cluster 2026-03-09T16:02:04.754690+0000 
mgr.y (mgr.14520) 332 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 5 op/s 2026-03-09T16:02:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:05 vm09 bash[22983]: cluster 2026-03-09T16:02:04.754690+0000 mgr.y (mgr.14520) 332 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 5 op/s 2026-03-09T16:02:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:05 vm01 bash[28152]: cluster 2026-03-09T16:02:04.754690+0000 mgr.y (mgr.14520) 332 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 5 op/s 2026-03-09T16:02:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:05 vm01 bash[28152]: cluster 2026-03-09T16:02:04.754690+0000 mgr.y (mgr.14520) 332 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 5 op/s 2026-03-09T16:02:05.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:05 vm01 bash[20728]: cluster 2026-03-09T16:02:04.754690+0000 mgr.y (mgr.14520) 332 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 5 op/s 2026-03-09T16:02:05.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:05 vm01 bash[20728]: cluster 2026-03-09T16:02:04.754690+0000 mgr.y (mgr.14520) 332 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 5 op/s 2026-03-09T16:02:06.883 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:02:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:02:08.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:07 vm09 bash[22983]: audit 2026-03-09T16:02:06.500017+0000 mgr.y (mgr.14520) 333 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:08.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:07 vm09 bash[22983]: audit 2026-03-09T16:02:06.500017+0000 mgr.y (mgr.14520) 333 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:08.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:07 vm09 bash[22983]: cluster 2026-03-09T16:02:06.755030+0000 mgr.y (mgr.14520) 334 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T16:02:08.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:07 vm09 bash[22983]: cluster 2026-03-09T16:02:06.755030+0000 mgr.y (mgr.14520) 334 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T16:02:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:07 vm01 bash[28152]: audit 2026-03-09T16:02:06.500017+0000 mgr.y (mgr.14520) 333 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:07 vm01 bash[28152]: audit 2026-03-09T16:02:06.500017+0000 mgr.y (mgr.14520) 333 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-09T16:02:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:07 vm01 bash[28152]: cluster 2026-03-09T16:02:06.755030+0000 mgr.y (mgr.14520) 334 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T16:02:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:07 vm01 bash[28152]: cluster 2026-03-09T16:02:06.755030+0000 mgr.y (mgr.14520) 334 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T16:02:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:07 vm01 bash[20728]: audit 2026-03-09T16:02:06.500017+0000 mgr.y (mgr.14520) 333 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:07 vm01 bash[20728]: audit 2026-03-09T16:02:06.500017+0000 mgr.y (mgr.14520) 333 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:07 vm01 bash[20728]: cluster 2026-03-09T16:02:06.755030+0000 mgr.y (mgr.14520) 334 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T16:02:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:07 vm01 bash[20728]: cluster 2026-03-09T16:02:06.755030+0000 mgr.y (mgr.14520) 334 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T16:02:10.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:09 vm09 bash[22983]: cluster 2026-03-09T16:02:08.755561+0000 mgr.y (mgr.14520) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:02:10.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:09 vm09 bash[22983]: cluster 2026-03-09T16:02:08.755561+0000 mgr.y (mgr.14520) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:02:10.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:09 vm01 bash[28152]: cluster 2026-03-09T16:02:08.755561+0000 mgr.y (mgr.14520) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:02:10.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:09 vm01 bash[28152]: cluster 2026-03-09T16:02:08.755561+0000 mgr.y (mgr.14520) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:02:10.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:09 vm01 bash[20728]: cluster 2026-03-09T16:02:08.755561+0000 mgr.y (mgr.14520) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:02:10.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:09 vm01 bash[20728]: cluster 2026-03-09T16:02:08.755561+0000 mgr.y (mgr.14520) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 
GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:02:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:11 vm09 bash[22983]: cluster 2026-03-09T16:02:10.756262+0000 mgr.y (mgr.14520) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:02:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:11 vm09 bash[22983]: cluster 2026-03-09T16:02:10.756262+0000 mgr.y (mgr.14520) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:02:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:11 vm01 bash[28152]: cluster 2026-03-09T16:02:10.756262+0000 mgr.y (mgr.14520) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:02:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:11 vm01 bash[28152]: cluster 2026-03-09T16:02:10.756262+0000 mgr.y (mgr.14520) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:02:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:11 vm01 bash[20728]: cluster 2026-03-09T16:02:10.756262+0000 mgr.y (mgr.14520) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:02:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:11 vm01 bash[20728]: cluster 2026-03-09T16:02:10.756262+0000 mgr.y (mgr.14520) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:02:13.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:02:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:02:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:02:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:13 vm09 bash[22983]: cluster 2026-03-09T16:02:12.756576+0000 mgr.y (mgr.14520) 337 : cluster [DBG] pgmap v545: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T16:02:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:13 vm09 bash[22983]: cluster 2026-03-09T16:02:12.756576+0000 mgr.y (mgr.14520) 337 : cluster [DBG] pgmap v545: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T16:02:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:13 vm09 bash[22983]: cluster 2026-03-09T16:02:13.876741+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T16:02:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:13 vm09 bash[22983]: cluster 2026-03-09T16:02:13.876741+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T16:02:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:13 vm01 bash[28152]: cluster 2026-03-09T16:02:12.756576+0000 mgr.y (mgr.14520) 337 : cluster [DBG] pgmap v545: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T16:02:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:13 vm01 bash[28152]: cluster 2026-03-09T16:02:12.756576+0000 mgr.y (mgr.14520) 337 : cluster 
[DBG] pgmap v545: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T16:02:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:13 vm01 bash[28152]: cluster 2026-03-09T16:02:13.876741+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T16:02:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:13 vm01 bash[28152]: cluster 2026-03-09T16:02:13.876741+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T16:02:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:13 vm01 bash[20728]: cluster 2026-03-09T16:02:12.756576+0000 mgr.y (mgr.14520) 337 : cluster [DBG] pgmap v545: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T16:02:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:13 vm01 bash[20728]: cluster 2026-03-09T16:02:12.756576+0000 mgr.y (mgr.14520) 337 : cluster [DBG] pgmap v545: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T16:02:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:13 vm01 bash[20728]: cluster 2026-03-09T16:02:13.876741+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T16:02:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:13 vm01 bash[20728]: cluster 2026-03-09T16:02:13.876741+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T16:02:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:15 vm09 bash[22983]: audit 2026-03-09T16:02:14.247874+0000 mon.a (mon.0) 2754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:15.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:15 vm09 bash[22983]: audit 2026-03-09T16:02:14.247874+0000 mon.a (mon.0) 2754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:15 vm01 bash[28152]: audit 2026-03-09T16:02:14.247874+0000 mon.a (mon.0) 2754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:15 vm01 bash[28152]: audit 2026-03-09T16:02:14.247874+0000 mon.a (mon.0) 2754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:15 vm01 bash[20728]: audit 2026-03-09T16:02:14.247874+0000 mon.a (mon.0) 2754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:15 vm01 bash[20728]: audit 2026-03-09T16:02:14.247874+0000 mon.a (mon.0) 2754 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:16 vm09 bash[22983]: cluster 2026-03-09T16:02:14.757170+0000 mgr.y (mgr.14520) 338 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB 
used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:16 vm09 bash[22983]: cluster 2026-03-09T16:02:14.757170+0000 mgr.y (mgr.14520) 338 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:16 vm01 bash[28152]: cluster 2026-03-09T16:02:14.757170+0000 mgr.y (mgr.14520) 338 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:16 vm01 bash[28152]: cluster 2026-03-09T16:02:14.757170+0000 mgr.y (mgr.14520) 338 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:16 vm01 bash[20728]: cluster 2026-03-09T16:02:14.757170+0000 mgr.y (mgr.14520) 338 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:16 vm01 bash[20728]: cluster 2026-03-09T16:02:14.757170+0000 mgr.y (mgr.14520) 338 : cluster [DBG] pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:16.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:02:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:02:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:17 vm09 bash[22983]: audit 2026-03-09T16:02:16.510025+0000 mgr.y (mgr.14520) 339 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:17 vm09 bash[22983]: audit 2026-03-09T16:02:16.510025+0000 mgr.y (mgr.14520) 339 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:17 vm09 bash[22983]: cluster 2026-03-09T16:02:16.757722+0000 mgr.y (mgr.14520) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:17 vm09 bash[22983]: cluster 2026-03-09T16:02:16.757722+0000 mgr.y (mgr.14520) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:17 vm01 bash[28152]: audit 2026-03-09T16:02:16.510025+0000 mgr.y (mgr.14520) 339 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:17 vm01 bash[28152]: audit 2026-03-09T16:02:16.510025+0000 mgr.y (mgr.14520) 339 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:17 vm01 bash[28152]: cluster 
2026-03-09T16:02:16.757722+0000 mgr.y (mgr.14520) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:17 vm01 bash[28152]: cluster 2026-03-09T16:02:16.757722+0000 mgr.y (mgr.14520) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:17 vm01 bash[20728]: audit 2026-03-09T16:02:16.510025+0000 mgr.y (mgr.14520) 339 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:17 vm01 bash[20728]: audit 2026-03-09T16:02:16.510025+0000 mgr.y (mgr.14520) 339 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:17 vm01 bash[20728]: cluster 2026-03-09T16:02:16.757722+0000 mgr.y (mgr.14520) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:17 vm01 bash[20728]: cluster 2026-03-09T16:02:16.757722+0000 mgr.y (mgr.14520) 340 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:19 vm09 bash[22983]: cluster 2026-03-09T16:02:18.758346+0000 mgr.y (mgr.14520) 341 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:19 vm09 bash[22983]: cluster 2026-03-09T16:02:18.758346+0000 mgr.y (mgr.14520) 341 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:19 vm01 bash[28152]: cluster 2026-03-09T16:02:18.758346+0000 mgr.y (mgr.14520) 341 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:19 vm01 bash[28152]: cluster 2026-03-09T16:02:18.758346+0000 mgr.y (mgr.14520) 341 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:19 vm01 bash[20728]: cluster 2026-03-09T16:02:18.758346+0000 mgr.y (mgr.14520) 341 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:19 vm01 bash[20728]: cluster 2026-03-09T16:02:18.758346+0000 mgr.y (mgr.14520) 341 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:21 vm09 bash[22983]: cluster 
2026-03-09T16:02:20.759347+0000 mgr.y (mgr.14520) 342 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:21 vm09 bash[22983]: cluster 2026-03-09T16:02:20.759347+0000 mgr.y (mgr.14520) 342 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:21 vm01 bash[28152]: cluster 2026-03-09T16:02:20.759347+0000 mgr.y (mgr.14520) 342 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:21 vm01 bash[28152]: cluster 2026-03-09T16:02:20.759347+0000 mgr.y (mgr.14520) 342 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:22.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:21 vm01 bash[20728]: cluster 2026-03-09T16:02:20.759347+0000 mgr.y (mgr.14520) 342 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:22.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:21 vm01 bash[20728]: cluster 2026-03-09T16:02:20.759347+0000 mgr.y (mgr.14520) 342 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:23.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:02:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:02:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:02:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:23 vm09 bash[22983]: cluster 2026-03-09T16:02:22.759702+0000 mgr.y (mgr.14520) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:23 vm09 bash[22983]: cluster 2026-03-09T16:02:22.759702+0000 mgr.y (mgr.14520) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:23 vm09 bash[22983]: cluster 2026-03-09T16:02:23.868078+0000 mon.a (mon.0) 2755 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T16:02:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:23 vm09 bash[22983]: cluster 2026-03-09T16:02:23.868078+0000 mon.a (mon.0) 2755 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T16:02:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:23 vm01 bash[28152]: cluster 2026-03-09T16:02:22.759702+0000 mgr.y (mgr.14520) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:23 vm01 bash[28152]: cluster 2026-03-09T16:02:22.759702+0000 mgr.y (mgr.14520) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 
09 16:02:23 vm01 bash[28152]: cluster 2026-03-09T16:02:23.868078+0000 mon.a (mon.0) 2755 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T16:02:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:23 vm01 bash[28152]: cluster 2026-03-09T16:02:23.868078+0000 mon.a (mon.0) 2755 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T16:02:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:23 vm01 bash[20728]: cluster 2026-03-09T16:02:22.759702+0000 mgr.y (mgr.14520) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:23 vm01 bash[20728]: cluster 2026-03-09T16:02:22.759702+0000 mgr.y (mgr.14520) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:23 vm01 bash[20728]: cluster 2026-03-09T16:02:23.868078+0000 mon.a (mon.0) 2755 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T16:02:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:23 vm01 bash[20728]: cluster 2026-03-09T16:02:23.868078+0000 mon.a (mon.0) 2755 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T16:02:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:25 vm01 bash[28152]: cluster 2026-03-09T16:02:24.760288+0000 mgr.y (mgr.14520) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:25 vm01 bash[28152]: cluster 2026-03-09T16:02:24.760288+0000 mgr.y (mgr.14520) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:25 vm01 bash[28152]: cluster 2026-03-09T16:02:24.896404+0000 mon.a (mon.0) 2756 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T16:02:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:25 vm01 bash[28152]: cluster 2026-03-09T16:02:24.896404+0000 mon.a (mon.0) 2756 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T16:02:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:25 vm01 bash[20728]: cluster 2026-03-09T16:02:24.760288+0000 mgr.y (mgr.14520) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:25 vm01 bash[20728]: cluster 2026-03-09T16:02:24.760288+0000 mgr.y (mgr.14520) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:25 vm01 bash[20728]: cluster 2026-03-09T16:02:24.896404+0000 mon.a (mon.0) 2756 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T16:02:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:25 vm01 bash[20728]: cluster 2026-03-09T16:02:24.896404+0000 mon.a (mon.0) 2756 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T16:02:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:25 vm09 bash[22983]: cluster 2026-03-09T16:02:24.760288+0000 mgr.y (mgr.14520) 344 : cluster [DBG] pgmap v553: 292 
pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:25 vm09 bash[22983]: cluster 2026-03-09T16:02:24.760288+0000 mgr.y (mgr.14520) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:25 vm09 bash[22983]: cluster 2026-03-09T16:02:24.896404+0000 mon.a (mon.0) 2756 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T16:02:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:25 vm09 bash[22983]: cluster 2026-03-09T16:02:24.896404+0000 mon.a (mon.0) 2756 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T16:02:26.784 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:02:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:02:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:27 vm01 bash[28152]: audit 2026-03-09T16:02:26.519782+0000 mgr.y (mgr.14520) 345 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:27 vm01 bash[28152]: audit 2026-03-09T16:02:26.519782+0000 mgr.y (mgr.14520) 345 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:27 vm01 bash[28152]: cluster 2026-03-09T16:02:26.760664+0000 mgr.y (mgr.14520) 346 : cluster [DBG] pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:27 vm01 bash[28152]: cluster 2026-03-09T16:02:26.760664+0000 mgr.y (mgr.14520) 346 : cluster [DBG] pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:27 vm01 bash[20728]: audit 2026-03-09T16:02:26.519782+0000 mgr.y (mgr.14520) 345 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:27 vm01 bash[20728]: audit 2026-03-09T16:02:26.519782+0000 mgr.y (mgr.14520) 345 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:27 vm01 bash[20728]: cluster 2026-03-09T16:02:26.760664+0000 mgr.y (mgr.14520) 346 : cluster [DBG] pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:27 vm01 bash[20728]: cluster 2026-03-09T16:02:26.760664+0000 mgr.y (mgr.14520) 346 : cluster [DBG] pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:27 vm09 bash[22983]: audit 2026-03-09T16:02:26.519782+0000 mgr.y (mgr.14520) 345 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-09T16:02:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:27 vm09 bash[22983]: audit 2026-03-09T16:02:26.519782+0000 mgr.y (mgr.14520) 345 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:27 vm09 bash[22983]: cluster 2026-03-09T16:02:26.760664+0000 mgr.y (mgr.14520) 346 : cluster [DBG] pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:27 vm09 bash[22983]: cluster 2026-03-09T16:02:26.760664+0000 mgr.y (mgr.14520) 346 : cluster [DBG] pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:02:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:29 vm09 bash[22983]: cluster 2026-03-09T16:02:28.761188+0000 mgr.y (mgr.14520) 347 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:02:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:29 vm09 bash[22983]: cluster 2026-03-09T16:02:28.761188+0000 mgr.y (mgr.14520) 347 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:02:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:29 vm09 bash[22983]: audit 2026-03-09T16:02:29.253987+0000 mon.a (mon.0) 2757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:29 vm09 bash[22983]: audit 2026-03-09T16:02:29.253987+0000 mon.a (mon.0) 2757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:30.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:29 vm01 bash[28152]: cluster 2026-03-09T16:02:28.761188+0000 mgr.y (mgr.14520) 347 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:02:30.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:29 vm01 bash[28152]: cluster 2026-03-09T16:02:28.761188+0000 mgr.y (mgr.14520) 347 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:02:30.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:29 vm01 bash[28152]: audit 2026-03-09T16:02:29.253987+0000 mon.a (mon.0) 2757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:30.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:29 vm01 bash[28152]: audit 2026-03-09T16:02:29.253987+0000 mon.a (mon.0) 2757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:29 vm01 bash[20728]: cluster 2026-03-09T16:02:28.761188+0000 mgr.y (mgr.14520) 347 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:02:30.426 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:29 vm01 bash[20728]: cluster 2026-03-09T16:02:28.761188+0000 mgr.y (mgr.14520) 347 : cluster [DBG] pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:02:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:29 vm01 bash[20728]: audit 2026-03-09T16:02:29.253987+0000 mon.a (mon.0) 2757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:29 vm01 bash[20728]: audit 2026-03-09T16:02:29.253987+0000 mon.a (mon.0) 2757 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:31 vm09 bash[22983]: cluster 2026-03-09T16:02:30.762038+0000 mgr.y (mgr.14520) 348 : cluster [DBG] pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:02:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:31 vm09 bash[22983]: cluster 2026-03-09T16:02:30.762038+0000 mgr.y (mgr.14520) 348 : cluster [DBG] pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:02:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:31 vm01 bash[28152]: cluster 2026-03-09T16:02:30.762038+0000 mgr.y (mgr.14520) 348 : cluster [DBG] pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:02:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:31 vm01 bash[28152]: cluster 2026-03-09T16:02:30.762038+0000 mgr.y (mgr.14520) 348 : cluster [DBG] pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:02:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:31 vm01 bash[20728]: cluster 2026-03-09T16:02:30.762038+0000 mgr.y (mgr.14520) 348 : cluster [DBG] pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:02:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:31 vm01 bash[20728]: cluster 2026-03-09T16:02:30.762038+0000 mgr.y (mgr.14520) 348 : cluster [DBG] pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:02:33.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:02:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:02:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:02:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:33 vm09 bash[22983]: cluster 2026-03-09T16:02:32.762352+0000 mgr.y (mgr.14520) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:33 vm09 bash[22983]: cluster 2026-03-09T16:02:32.762352+0000 mgr.y (mgr.14520) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:33 vm09 bash[22983]: cluster 2026-03-09T16:02:33.864544+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e385: 8 total, 8 
up, 8 in 2026-03-09T16:02:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:33 vm09 bash[22983]: cluster 2026-03-09T16:02:33.864544+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T16:02:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:33 vm01 bash[28152]: cluster 2026-03-09T16:02:32.762352+0000 mgr.y (mgr.14520) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:33 vm01 bash[28152]: cluster 2026-03-09T16:02:32.762352+0000 mgr.y (mgr.14520) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:33 vm01 bash[28152]: cluster 2026-03-09T16:02:33.864544+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T16:02:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:33 vm01 bash[28152]: cluster 2026-03-09T16:02:33.864544+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T16:02:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:33 vm01 bash[20728]: cluster 2026-03-09T16:02:32.762352+0000 mgr.y (mgr.14520) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:33 vm01 bash[20728]: cluster 2026-03-09T16:02:32.762352+0000 mgr.y (mgr.14520) 349 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:33 vm01 bash[20728]: cluster 2026-03-09T16:02:33.864544+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T16:02:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:33 vm01 bash[20728]: cluster 2026-03-09T16:02:33.864544+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.053004+0000 mon.a (mon.0) 2759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.053004+0000 mon.a (mon.0) 2759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.389920+0000 mon.a (mon.0) 2760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.389920+0000 mon.a (mon.0) 2760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.390535+0000 mon.a (mon.0) 2761 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.390535+0000 mon.a (mon.0) 2761 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.397747+0000 mon.a (mon.0) 2762 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.397747+0000 mon.a (mon.0) 2762 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.913093+0000 mon.c (mon.2) 389 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.913093+0000 mon.c (mon.2) 389 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.913487+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.913487+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.914013+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.914013+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.914218+0000 mon.a (mon.0) 2764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:35 vm09 bash[22983]: audit 2026-03-09T16:02:34.914218+0000 mon.a (mon.0) 2764 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.053004+0000 mon.a (mon.0) 2759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.053004+0000 mon.a (mon.0) 2759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.389920+0000 mon.a (mon.0) 2760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.389920+0000 mon.a (mon.0) 2760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.390535+0000 mon.a (mon.0) 2761 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.390535+0000 mon.a (mon.0) 2761 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.397747+0000 mon.a (mon.0) 2762 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.397747+0000 mon.a (mon.0) 2762 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.913093+0000 mon.c (mon.2) 389 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.913093+0000 mon.c (mon.2) 389 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.913487+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.913487+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.914013+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.914013+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.914218+0000 mon.a (mon.0) 2764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:35 vm01 bash[28152]: audit 2026-03-09T16:02:34.914218+0000 mon.a (mon.0) 2764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.053004+0000 mon.a (mon.0) 2759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.053004+0000 mon.a (mon.0) 2759 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.389920+0000 mon.a (mon.0) 2760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.389920+0000 mon.a (mon.0) 2760 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.390535+0000 mon.a (mon.0) 2761 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:02:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.390535+0000 mon.a (mon.0) 2761 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:02:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.397747+0000 mon.a (mon.0) 2762 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:02:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.397747+0000 mon.a (mon.0) 2762 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:02:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.913093+0000 mon.c (mon.2) 389 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.913093+0000 mon.c (mon.2) 389 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.913487+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.913487+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.914013+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.914013+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.914218+0000 mon.a (mon.0) 2764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:35 vm01 bash[20728]: audit 2026-03-09T16:02:34.914218+0000 mon.a (mon.0) 2764 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-63"}]: dispatch 2026-03-09T16:02:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:36 vm01 bash[28152]: cluster 2026-03-09T16:02:34.762971+0000 mgr.y (mgr.14520) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:02:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:36 vm01 bash[28152]: cluster 2026-03-09T16:02:34.762971+0000 mgr.y (mgr.14520) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:02:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:36 vm01 bash[20728]: cluster 2026-03-09T16:02:34.762971+0000 mgr.y (mgr.14520) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:02:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:36 vm01 bash[20728]: cluster 2026-03-09T16:02:34.762971+0000 mgr.y (mgr.14520) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:02:36.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:36 vm09 bash[22983]: cluster 2026-03-09T16:02:34.762971+0000 mgr.y (mgr.14520) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:02:36.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:36 vm09 bash[22983]: cluster 2026-03-09T16:02:34.762971+0000 mgr.y (mgr.14520) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:02:36.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:02:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:37 vm01 bash[28152]: cluster 2026-03-09T16:02:36.142611+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:37 vm01 bash[28152]: cluster 2026-03-09T16:02:36.142611+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:37 vm01 bash[28152]: audit 2026-03-09T16:02:36.525664+0000 mgr.y (mgr.14520) 351 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:37 vm01 bash[28152]: audit 2026-03-09T16:02:36.525664+0000 mgr.y (mgr.14520) 351 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:37 vm01 bash[28152]: cluster 2026-03-09T16:02:36.763324+0000 mgr.y (mgr.14520) 352 : cluster [DBG] pgmap v562: 260 pgs: 260 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:37 vm01 bash[28152]: cluster 2026-03-09T16:02:36.763324+0000 mgr.y (mgr.14520) 352 : cluster [DBG] pgmap v562: 260 pgs: 260 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 
2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:37 vm01 bash[20728]: cluster 2026-03-09T16:02:36.142611+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:37 vm01 bash[20728]: cluster 2026-03-09T16:02:36.142611+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:37 vm01 bash[20728]: audit 2026-03-09T16:02:36.525664+0000 mgr.y (mgr.14520) 351 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:37 vm01 bash[20728]: audit 2026-03-09T16:02:36.525664+0000 mgr.y (mgr.14520) 351 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:37 vm01 bash[20728]: cluster 2026-03-09T16:02:36.763324+0000 mgr.y (mgr.14520) 352 : cluster [DBG] pgmap v562: 260 pgs: 260 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:37 vm01 bash[20728]: cluster 2026-03-09T16:02:36.763324+0000 mgr.y (mgr.14520) 352 : cluster [DBG] pgmap v562: 260 pgs: 260 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:37 vm09 bash[22983]: cluster 2026-03-09T16:02:36.142611+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T16:02:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:37 vm09 bash[22983]: cluster 2026-03-09T16:02:36.142611+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T16:02:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:37 vm09 bash[22983]: audit 2026-03-09T16:02:36.525664+0000 mgr.y (mgr.14520) 351 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:37 vm09 bash[22983]: audit 2026-03-09T16:02:36.525664+0000 mgr.y (mgr.14520) 351 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:37 vm09 bash[22983]: cluster 2026-03-09T16:02:36.763324+0000 mgr.y (mgr.14520) 352 : cluster [DBG] pgmap v562: 260 pgs: 260 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:37.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:37 vm09 bash[22983]: cluster 2026-03-09T16:02:36.763324+0000 mgr.y (mgr.14520) 352 : cluster [DBG] pgmap v562: 260 pgs: 260 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: cluster 2026-03-09T16:02:37.150999+0000 mon.a (mon.0) 2766 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T16:02:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: cluster 2026-03-09T16:02:37.150999+0000 mon.a (mon.0) 2766 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T16:02:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: audit 
2026-03-09T16:02:37.157592+0000 mon.c (mon.2) 391 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: audit 2026-03-09T16:02:37.157592+0000 mon.c (mon.2) 391 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: audit 2026-03-09T16:02:37.161296+0000 mon.a (mon.0) 2767 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: audit 2026-03-09T16:02:37.161296+0000 mon.a (mon.0) 2767 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: audit 2026-03-09T16:02:38.136716+0000 mon.a (mon.0) 2768 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: audit 2026-03-09T16:02:38.136716+0000 mon.a (mon.0) 2768 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: cluster 2026-03-09T16:02:38.141677+0000 mon.a (mon.0) 2769 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T16:02:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: cluster 2026-03-09T16:02:38.141677+0000 mon.a (mon.0) 2769 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: audit 2026-03-09T16:02:38.158046+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: audit 2026-03-09T16:02:38.158046+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: audit 2026-03-09T16:02:38.164263+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:38 vm01 bash[28152]: audit 2026-03-09T16:02:38.164263+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: cluster 2026-03-09T16:02:37.150999+0000 mon.a (mon.0) 2766 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: cluster 2026-03-09T16:02:37.150999+0000 mon.a (mon.0) 2766 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: audit 2026-03-09T16:02:37.157592+0000 mon.c (mon.2) 391 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: audit 2026-03-09T16:02:37.157592+0000 mon.c (mon.2) 391 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: audit 2026-03-09T16:02:37.161296+0000 mon.a (mon.0) 2767 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: audit 2026-03-09T16:02:37.161296+0000 mon.a (mon.0) 2767 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: audit 2026-03-09T16:02:38.136716+0000 mon.a (mon.0) 2768 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: audit 2026-03-09T16:02:38.136716+0000 mon.a (mon.0) 2768 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: cluster 2026-03-09T16:02:38.141677+0000 mon.a (mon.0) 2769 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: cluster 2026-03-09T16:02:38.141677+0000 mon.a (mon.0) 2769 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: audit 2026-03-09T16:02:38.158046+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: audit 2026-03-09T16:02:38.158046+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: audit 2026-03-09T16:02:38.164263+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:38 vm01 bash[20728]: audit 2026-03-09T16:02:38.164263+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: cluster 2026-03-09T16:02:37.150999+0000 mon.a (mon.0) 2766 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: cluster 2026-03-09T16:02:37.150999+0000 mon.a (mon.0) 2766 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: audit 2026-03-09T16:02:37.157592+0000 mon.c (mon.2) 391 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: audit 2026-03-09T16:02:37.157592+0000 mon.c (mon.2) 391 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: audit 2026-03-09T16:02:37.161296+0000 mon.a (mon.0) 2767 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: audit 2026-03-09T16:02:37.161296+0000 mon.a (mon.0) 2767 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: audit 2026-03-09T16:02:38.136716+0000 mon.a (mon.0) 2768 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: audit 2026-03-09T16:02:38.136716+0000 mon.a (mon.0) 2768 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: cluster 2026-03-09T16:02:38.141677+0000 mon.a (mon.0) 2769 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: cluster 2026-03-09T16:02:38.141677+0000 mon.a (mon.0) 2769 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: audit 2026-03-09T16:02:38.158046+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: audit 2026-03-09T16:02:38.158046+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: audit 2026-03-09T16:02:38.164263+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:38 vm09 bash[22983]: audit 2026-03-09T16:02:38.164263+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:39 vm01 bash[28152]: audit 2026-03-09T16:02:38.164026+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:39 vm01 bash[28152]: audit 2026-03-09T16:02:38.164026+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:39 vm01 bash[28152]: cluster 2026-03-09T16:02:38.763738+0000 mgr.y (mgr.14520) 353 : cluster [DBG] pgmap v565: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:39 vm01 bash[28152]: cluster 2026-03-09T16:02:38.763738+0000 mgr.y (mgr.14520) 353 : cluster [DBG] pgmap v565: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:39 vm01 bash[28152]: audit 2026-03-09T16:02:39.140283+0000 mon.a (mon.0) 2771 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:02:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:39 vm01 bash[28152]: audit 2026-03-09T16:02:39.140283+0000 mon.a (mon.0) 2771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:02:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:39 vm01 bash[28152]: cluster 2026-03-09T16:02:39.147553+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T16:02:39.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:39 vm01 bash[28152]: cluster 2026-03-09T16:02:39.147553+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T16:02:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:39 vm01 bash[20728]: audit 2026-03-09T16:02:38.164026+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:39 vm01 bash[20728]: audit 2026-03-09T16:02:38.164026+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:39 vm01 bash[20728]: cluster 2026-03-09T16:02:38.763738+0000 mgr.y (mgr.14520) 353 : cluster [DBG] pgmap v565: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:39 vm01 bash[20728]: cluster 2026-03-09T16:02:38.763738+0000 mgr.y (mgr.14520) 353 : cluster [DBG] pgmap v565: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:39 vm01 bash[20728]: audit 2026-03-09T16:02:39.140283+0000 mon.a (mon.0) 2771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:02:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:39 vm01 bash[20728]: audit 2026-03-09T16:02:39.140283+0000 mon.a (mon.0) 2771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:02:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:39 vm01 bash[20728]: cluster 2026-03-09T16:02:39.147553+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T16:02:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:39 vm01 bash[20728]: cluster 2026-03-09T16:02:39.147553+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T16:02:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:39 vm09 bash[22983]: audit 2026-03-09T16:02:38.164026+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:39 vm09 bash[22983]: audit 2026-03-09T16:02:38.164026+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:02:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:39 vm09 bash[22983]: cluster 2026-03-09T16:02:38.763738+0000 mgr.y (mgr.14520) 353 : cluster [DBG] pgmap v565: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:39 vm09 bash[22983]: cluster 2026-03-09T16:02:38.763738+0000 mgr.y (mgr.14520) 353 : cluster [DBG] pgmap v565: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 904 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:39 vm09 bash[22983]: audit 2026-03-09T16:02:39.140283+0000 mon.a (mon.0) 2771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:02:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:39 vm09 bash[22983]: audit 2026-03-09T16:02:39.140283+0000 mon.a (mon.0) 2771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:02:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:39 vm09 bash[22983]: cluster 2026-03-09T16:02:39.147553+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T16:02:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:39 vm09 bash[22983]: cluster 2026-03-09T16:02:39.147553+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T16:02:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:41 vm09 bash[22983]: cluster 2026-03-09T16:02:40.207004+0000 mon.a (mon.0) 2773 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T16:02:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:41 vm09 bash[22983]: cluster 2026-03-09T16:02:40.207004+0000 mon.a (mon.0) 2773 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T16:02:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:41 vm09 bash[22983]: cluster 2026-03-09T16:02:40.764129+0000 mgr.y (mgr.14520) 354 : cluster [DBG] pgmap v568: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:41.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:41 vm09 bash[22983]: cluster 2026-03-09T16:02:40.764129+0000 mgr.y (mgr.14520) 354 : cluster [DBG] pgmap v568: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:41.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:41 vm01 bash[28152]: cluster 2026-03-09T16:02:40.207004+0000 mon.a (mon.0) 2773 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T16:02:41.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:41 vm01 bash[28152]: cluster 2026-03-09T16:02:40.207004+0000 mon.a (mon.0) 2773 : cluster [DBG] osdmap e390: 8 total, 
8 up, 8 in 2026-03-09T16:02:41.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:41 vm01 bash[28152]: cluster 2026-03-09T16:02:40.764129+0000 mgr.y (mgr.14520) 354 : cluster [DBG] pgmap v568: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:41.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:41 vm01 bash[28152]: cluster 2026-03-09T16:02:40.764129+0000 mgr.y (mgr.14520) 354 : cluster [DBG] pgmap v568: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:41.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:41 vm01 bash[20728]: cluster 2026-03-09T16:02:40.207004+0000 mon.a (mon.0) 2773 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T16:02:41.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:41 vm01 bash[20728]: cluster 2026-03-09T16:02:40.207004+0000 mon.a (mon.0) 2773 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T16:02:41.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:41 vm01 bash[20728]: cluster 2026-03-09T16:02:40.764129+0000 mgr.y (mgr.14520) 354 : cluster [DBG] pgmap v568: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:41.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:41 vm01 bash[20728]: cluster 2026-03-09T16:02:40.764129+0000 mgr.y (mgr.14520) 354 : cluster [DBG] pgmap v568: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:42 vm09 bash[22983]: cluster 2026-03-09T16:02:41.224175+0000 mon.a (mon.0) 2774 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T16:02:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:42 vm09 bash[22983]: cluster 2026-03-09T16:02:41.224175+0000 mon.a (mon.0) 2774 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T16:02:42.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:42 vm01 bash[28152]: cluster 2026-03-09T16:02:41.224175+0000 mon.a (mon.0) 2774 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T16:02:42.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:42 vm01 bash[28152]: cluster 2026-03-09T16:02:41.224175+0000 mon.a (mon.0) 2774 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T16:02:42.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:42 vm01 bash[20728]: cluster 2026-03-09T16:02:41.224175+0000 mon.a (mon.0) 2774 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T16:02:42.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:42 vm01 bash[20728]: cluster 2026-03-09T16:02:41.224175+0000 mon.a (mon.0) 2774 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T16:02:43.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:02:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:02:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:02:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:43 vm09 bash[22983]: cluster 2026-03-09T16:02:42.230636+0000 mon.a (mon.0) 2775 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T16:02:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:43 vm09 bash[22983]: cluster 2026-03-09T16:02:42.230636+0000 mon.a (mon.0) 2775 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T16:02:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:43 vm09 bash[22983]: cluster 2026-03-09T16:02:42.764513+0000 
mgr.y (mgr.14520) 355 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:43 vm09 bash[22983]: cluster 2026-03-09T16:02:42.764513+0000 mgr.y (mgr.14520) 355 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:43 vm01 bash[28152]: cluster 2026-03-09T16:02:42.230636+0000 mon.a (mon.0) 2775 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T16:02:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:43 vm01 bash[28152]: cluster 2026-03-09T16:02:42.230636+0000 mon.a (mon.0) 2775 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T16:02:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:43 vm01 bash[28152]: cluster 2026-03-09T16:02:42.764513+0000 mgr.y (mgr.14520) 355 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:43 vm01 bash[28152]: cluster 2026-03-09T16:02:42.764513+0000 mgr.y (mgr.14520) 355 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:43 vm01 bash[20728]: cluster 2026-03-09T16:02:42.230636+0000 mon.a (mon.0) 2775 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T16:02:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:43 vm01 bash[20728]: cluster 2026-03-09T16:02:42.230636+0000 mon.a (mon.0) 2775 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T16:02:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:43 vm01 bash[20728]: cluster 2026-03-09T16:02:42.764513+0000 mgr.y (mgr.14520) 355 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:43 vm01 bash[20728]: cluster 2026-03-09T16:02:42.764513+0000 mgr.y (mgr.14520) 355 : cluster [DBG] pgmap v571: 292 pgs: 292 active+clean; 8.3 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:02:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:44 vm09 bash[22983]: audit 2026-03-09T16:02:44.262872+0000 mon.a (mon.0) 2776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:44 vm09 bash[22983]: audit 2026-03-09T16:02:44.262872+0000 mon.a (mon.0) 2776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:44 vm01 bash[28152]: audit 2026-03-09T16:02:44.262872+0000 mon.a (mon.0) 2776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:44 vm01 bash[28152]: audit 2026-03-09T16:02:44.262872+0000 mon.a (mon.0) 2776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist 
ls", "format": "json"}]: dispatch 2026-03-09T16:02:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:44 vm01 bash[20728]: audit 2026-03-09T16:02:44.262872+0000 mon.a (mon.0) 2776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:44 vm01 bash[20728]: audit 2026-03-09T16:02:44.262872+0000 mon.a (mon.0) 2776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:02:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:45 vm09 bash[22983]: cluster 2026-03-09T16:02:44.765365+0000 mgr.y (mgr.14520) 356 : cluster [DBG] pgmap v572: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.8 KiB/s wr, 7 op/s 2026-03-09T16:02:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:45 vm09 bash[22983]: cluster 2026-03-09T16:02:44.765365+0000 mgr.y (mgr.14520) 356 : cluster [DBG] pgmap v572: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.8 KiB/s wr, 7 op/s 2026-03-09T16:02:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:45 vm01 bash[28152]: cluster 2026-03-09T16:02:44.765365+0000 mgr.y (mgr.14520) 356 : cluster [DBG] pgmap v572: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.8 KiB/s wr, 7 op/s 2026-03-09T16:02:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:45 vm01 bash[28152]: cluster 2026-03-09T16:02:44.765365+0000 mgr.y (mgr.14520) 356 : cluster [DBG] pgmap v572: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.8 KiB/s wr, 7 op/s 2026-03-09T16:02:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:45 vm01 bash[20728]: cluster 2026-03-09T16:02:44.765365+0000 mgr.y (mgr.14520) 356 : cluster [DBG] pgmap v572: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.8 KiB/s wr, 7 op/s 2026-03-09T16:02:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:45 vm01 bash[20728]: cluster 2026-03-09T16:02:44.765365+0000 mgr.y (mgr.14520) 356 : cluster [DBG] pgmap v572: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.8 KiB/s wr, 7 op/s 2026-03-09T16:02:46.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:02:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:02:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:47 vm09 bash[22983]: audit 2026-03-09T16:02:46.528791+0000 mgr.y (mgr.14520) 357 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:47 vm09 bash[22983]: audit 2026-03-09T16:02:46.528791+0000 mgr.y (mgr.14520) 357 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:47 vm09 bash[22983]: cluster 2026-03-09T16:02:46.765788+0000 mgr.y (mgr.14520) 358 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-09T16:02:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:47 vm09 bash[22983]: 
cluster 2026-03-09T16:02:46.765788+0000 mgr.y (mgr.14520) 358 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-09T16:02:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:47 vm01 bash[28152]: audit 2026-03-09T16:02:46.528791+0000 mgr.y (mgr.14520) 357 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:47 vm01 bash[28152]: audit 2026-03-09T16:02:46.528791+0000 mgr.y (mgr.14520) 357 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:47 vm01 bash[28152]: cluster 2026-03-09T16:02:46.765788+0000 mgr.y (mgr.14520) 358 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-09T16:02:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:47 vm01 bash[28152]: cluster 2026-03-09T16:02:46.765788+0000 mgr.y (mgr.14520) 358 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-09T16:02:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:47 vm01 bash[20728]: audit 2026-03-09T16:02:46.528791+0000 mgr.y (mgr.14520) 357 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:47 vm01 bash[20728]: audit 2026-03-09T16:02:46.528791+0000 mgr.y (mgr.14520) 357 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:47 vm01 bash[20728]: cluster 2026-03-09T16:02:46.765788+0000 mgr.y (mgr.14520) 358 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-09T16:02:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:47 vm01 bash[20728]: cluster 2026-03-09T16:02:46.765788+0000 mgr.y (mgr.14520) 358 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-09T16:02:50.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:49 vm09 bash[22983]: cluster 2026-03-09T16:02:48.766351+0000 mgr.y (mgr.14520) 359 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T16:02:50.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:49 vm09 bash[22983]: cluster 2026-03-09T16:02:48.766351+0000 mgr.y (mgr.14520) 359 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T16:02:50.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:49 vm09 bash[22983]: cluster 2026-03-09T16:02:48.873789+0000 mon.a (mon.0) 2777 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-09T16:02:50.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:49 vm09 bash[22983]: cluster 2026-03-09T16:02:48.873789+0000 mon.a (mon.0) 2777 : cluster [DBG] osdmap e393: 8 
total, 8 up, 8 in 2026-03-09T16:02:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:49 vm01 bash[28152]: cluster 2026-03-09T16:02:48.766351+0000 mgr.y (mgr.14520) 359 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T16:02:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:49 vm01 bash[28152]: cluster 2026-03-09T16:02:48.766351+0000 mgr.y (mgr.14520) 359 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T16:02:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:49 vm01 bash[28152]: cluster 2026-03-09T16:02:48.873789+0000 mon.a (mon.0) 2777 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-09T16:02:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:49 vm01 bash[28152]: cluster 2026-03-09T16:02:48.873789+0000 mon.a (mon.0) 2777 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-09T16:02:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:49 vm01 bash[20728]: cluster 2026-03-09T16:02:48.766351+0000 mgr.y (mgr.14520) 359 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T16:02:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:49 vm01 bash[20728]: cluster 2026-03-09T16:02:48.766351+0000 mgr.y (mgr.14520) 359 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-09T16:02:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:49 vm01 bash[20728]: cluster 2026-03-09T16:02:48.873789+0000 mon.a (mon.0) 2777 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-09T16:02:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:49 vm01 bash[20728]: cluster 2026-03-09T16:02:48.873789+0000 mon.a (mon.0) 2777 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-09T16:02:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:51 vm09 bash[22983]: cluster 2026-03-09T16:02:50.766931+0000 mgr.y (mgr.14520) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-09T16:02:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:51 vm09 bash[22983]: cluster 2026-03-09T16:02:50.766931+0000 mgr.y (mgr.14520) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-09T16:02:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:51 vm01 bash[28152]: cluster 2026-03-09T16:02:50.766931+0000 mgr.y (mgr.14520) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-09T16:02:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:51 vm01 bash[28152]: cluster 2026-03-09T16:02:50.766931+0000 mgr.y (mgr.14520) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-09T16:02:52.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:51 vm01 bash[20728]: cluster 2026-03-09T16:02:50.766931+0000 mgr.y (mgr.14520) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 5 
op/s 2026-03-09T16:02:52.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:51 vm01 bash[20728]: cluster 2026-03-09T16:02:50.766931+0000 mgr.y (mgr.14520) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-09T16:02:53.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:02:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:02:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:02:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:52 vm01 bash[28152]: audit 2026-03-09T16:02:52.242161+0000 mon.c (mon.2) 394 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:52 vm01 bash[28152]: audit 2026-03-09T16:02:52.242161+0000 mon.c (mon.2) 394 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:52 vm01 bash[28152]: audit 2026-03-09T16:02:52.242443+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:52 vm01 bash[28152]: audit 2026-03-09T16:02:52.242443+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:52 vm01 bash[28152]: audit 2026-03-09T16:02:52.242769+0000 mon.c (mon.2) 395 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:52 vm01 bash[28152]: audit 2026-03-09T16:02:52.242769+0000 mon.c (mon.2) 395 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:52 vm01 bash[28152]: audit 2026-03-09T16:02:52.243008+0000 mon.a (mon.0) 2779 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:52 vm01 bash[28152]: audit 2026-03-09T16:02:52.243008+0000 mon.a (mon.0) 2779 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:52 vm01 bash[20728]: audit 2026-03-09T16:02:52.242161+0000 mon.c (mon.2) 394 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:52 vm01 bash[20728]: audit 2026-03-09T16:02:52.242161+0000 mon.c (mon.2) 394 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:52 vm01 bash[20728]: audit 2026-03-09T16:02:52.242443+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:52 vm01 bash[20728]: audit 2026-03-09T16:02:52.242443+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:52 vm01 bash[20728]: audit 2026-03-09T16:02:52.242769+0000 mon.c (mon.2) 395 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:52 vm01 bash[20728]: audit 2026-03-09T16:02:52.242769+0000 mon.c (mon.2) 395 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:52 vm01 bash[20728]: audit 2026-03-09T16:02:52.243008+0000 mon.a (mon.0) 2779 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:52 vm01 bash[20728]: audit 2026-03-09T16:02:52.243008+0000 mon.a (mon.0) 2779 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:53.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:52 vm09 bash[22983]: audit 2026-03-09T16:02:52.242161+0000 mon.c (mon.2) 394 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:52 vm09 bash[22983]: audit 2026-03-09T16:02:52.242161+0000 mon.c (mon.2) 394 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:52 vm09 bash[22983]: audit 2026-03-09T16:02:52.242443+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:52 vm09 bash[22983]: audit 2026-03-09T16:02:52.242443+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:52 vm09 bash[22983]: audit 2026-03-09T16:02:52.242769+0000 mon.c (mon.2) 395 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:52 vm09 bash[22983]: audit 2026-03-09T16:02:52.242769+0000 mon.c (mon.2) 395 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:52 vm09 bash[22983]: audit 2026-03-09T16:02:52.243008+0000 mon.a (mon.0) 2779 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:52 vm09 bash[22983]: audit 2026-03-09T16:02:52.243008+0000 mon.a (mon.0) 2779 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-65"}]: dispatch 2026-03-09T16:02:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:53 vm09 bash[22983]: cluster 2026-03-09T16:02:52.767274+0000 mgr.y (mgr.14520) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:02:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:53 vm09 bash[22983]: cluster 2026-03-09T16:02:52.767274+0000 mgr.y (mgr.14520) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:02:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:53 vm09 bash[22983]: cluster 2026-03-09T16:02:52.986943+0000 mon.a (mon.0) 2780 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T16:02:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:53 vm09 bash[22983]: cluster 2026-03-09T16:02:52.986943+0000 mon.a (mon.0) 2780 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T16:02:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:53 vm01 bash[28152]: cluster 2026-03-09T16:02:52.767274+0000 mgr.y (mgr.14520) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:02:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:53 vm01 bash[28152]: cluster 2026-03-09T16:02:52.767274+0000 mgr.y (mgr.14520) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:02:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:53 vm01 bash[28152]: cluster 
2026-03-09T16:02:52.986943+0000 mon.a (mon.0) 2780 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T16:02:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:53 vm01 bash[28152]: cluster 2026-03-09T16:02:52.986943+0000 mon.a (mon.0) 2780 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T16:02:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:53 vm01 bash[20728]: cluster 2026-03-09T16:02:52.767274+0000 mgr.y (mgr.14520) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:02:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:53 vm01 bash[20728]: cluster 2026-03-09T16:02:52.767274+0000 mgr.y (mgr.14520) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:02:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:53 vm01 bash[20728]: cluster 2026-03-09T16:02:52.986943+0000 mon.a (mon.0) 2780 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T16:02:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:53 vm01 bash[20728]: cluster 2026-03-09T16:02:52.986943+0000 mon.a (mon.0) 2780 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-09T16:02:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:55 vm09 bash[22983]: cluster 2026-03-09T16:02:53.997378+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T16:02:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:55 vm09 bash[22983]: cluster 2026-03-09T16:02:53.997378+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T16:02:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:55 vm09 bash[22983]: audit 2026-03-09T16:02:54.001281+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:55 vm09 bash[22983]: audit 2026-03-09T16:02:54.001281+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:55 vm09 bash[22983]: audit 2026-03-09T16:02:54.010932+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:55.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:55 vm09 bash[22983]: audit 2026-03-09T16:02:54.010932+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:55 vm01 bash[28152]: cluster 2026-03-09T16:02:53.997378+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T16:02:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:55 vm01 bash[28152]: cluster 2026-03-09T16:02:53.997378+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T16:02:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:55 vm01 bash[28152]: audit 2026-03-09T16:02:54.001281+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:55 vm01 bash[28152]: audit 2026-03-09T16:02:54.001281+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:55 vm01 bash[28152]: audit 2026-03-09T16:02:54.010932+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:55 vm01 bash[28152]: audit 2026-03-09T16:02:54.010932+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:55 vm01 bash[20728]: cluster 2026-03-09T16:02:53.997378+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T16:02:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:55 vm01 bash[20728]: cluster 2026-03-09T16:02:53.997378+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-09T16:02:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:55 vm01 bash[20728]: audit 2026-03-09T16:02:54.001281+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:55 vm01 bash[20728]: audit 2026-03-09T16:02:54.001281+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:55 vm01 bash[20728]: audit 2026-03-09T16:02:54.010932+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:55 vm01 bash[20728]: audit 2026-03-09T16:02:54.010932+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:56 vm09 bash[22983]: cluster 2026-03-09T16:02:54.767911+0000 mgr.y (mgr.14520) 362 : cluster [DBG] pgmap v580: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:56 vm09 bash[22983]: cluster 2026-03-09T16:02:54.767911+0000 mgr.y (mgr.14520) 362 : cluster [DBG] pgmap v580: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:56 vm09 bash[22983]: cluster 2026-03-09T16:02:54.987397+0000 mon.a (mon.0) 2783 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:02:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:56 vm09 bash[22983]: cluster 2026-03-09T16:02:54.987397+0000 mon.a (mon.0) 2783 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:02:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:56 vm09 bash[22983]: audit 2026-03-09T16:02:54.998265+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:56 vm09 bash[22983]: audit 2026-03-09T16:02:54.998265+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:56 vm09 bash[22983]: cluster 2026-03-09T16:02:55.016389+0000 mon.a (mon.0) 2785 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T16:02:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:56 vm09 bash[22983]: cluster 2026-03-09T16:02:55.016389+0000 mon.a (mon.0) 2785 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T16:02:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:56 vm09 bash[22983]: audit 2026-03-09T16:02:55.026859+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:56 vm09 bash[22983]: audit 2026-03-09T16:02:55.026859+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:56.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:56 vm01 bash[28152]: cluster 2026-03-09T16:02:54.767911+0000 mgr.y (mgr.14520) 362 : cluster [DBG] pgmap v580: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:56 vm01 bash[28152]: cluster 2026-03-09T16:02:54.767911+0000 mgr.y (mgr.14520) 362 : cluster [DBG] pgmap v580: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:56 vm01 bash[28152]: cluster 2026-03-09T16:02:54.987397+0000 mon.a (mon.0) 2783 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:56 vm01 bash[28152]: cluster 2026-03-09T16:02:54.987397+0000 mon.a (mon.0) 2783 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:56 vm01 bash[28152]: audit 2026-03-09T16:02:54.998265+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:56 vm01 bash[28152]: audit 2026-03-09T16:02:54.998265+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:56 vm01 bash[28152]: cluster 2026-03-09T16:02:55.016389+0000 mon.a (mon.0) 2785 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:56 vm01 bash[28152]: cluster 2026-03-09T16:02:55.016389+0000 mon.a (mon.0) 2785 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:56 vm01 bash[28152]: audit 2026-03-09T16:02:55.026859+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:56 vm01 bash[28152]: audit 2026-03-09T16:02:55.026859+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:56 vm01 bash[20728]: cluster 2026-03-09T16:02:54.767911+0000 mgr.y (mgr.14520) 362 : cluster [DBG] pgmap v580: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:56 vm01 bash[20728]: cluster 2026-03-09T16:02:54.767911+0000 mgr.y (mgr.14520) 362 : cluster [DBG] pgmap v580: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:56 vm01 bash[20728]: cluster 2026-03-09T16:02:54.987397+0000 mon.a (mon.0) 2783 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:56 vm01 bash[20728]: cluster 2026-03-09T16:02:54.987397+0000 mon.a (mon.0) 2783 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:56 vm01 bash[20728]: audit 2026-03-09T16:02:54.998265+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:56.431 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:56 vm01 bash[20728]: audit 2026-03-09T16:02:54.998265+0000 mon.a (mon.0) 2784 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:02:56.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:56 vm01 bash[20728]: cluster 2026-03-09T16:02:55.016389+0000 mon.a (mon.0) 2785 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T16:02:56.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:56 vm01 bash[20728]: cluster 2026-03-09T16:02:55.016389+0000 mon.a (mon.0) 2785 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-09T16:02:56.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:56 vm01 bash[20728]: audit 2026-03-09T16:02:55.026859+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:56.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:56 vm01 bash[20728]: audit 2026-03-09T16:02:55.026859+0000 mon.c (mon.2) 397 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:02:56.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:02:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:02:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:57 vm09 bash[22983]: cluster 2026-03-09T16:02:56.006410+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T16:02:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:57 vm09 bash[22983]: cluster 2026-03-09T16:02:56.006410+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T16:02:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:57 vm09 bash[22983]: audit 2026-03-09T16:02:56.048916+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:57 vm09 bash[22983]: audit 2026-03-09T16:02:56.048916+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:57 vm09 bash[22983]: audit 2026-03-09T16:02:56.049310+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:57 vm09 bash[22983]: audit 2026-03-09T16:02:56.049310+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:57 vm09 bash[22983]: audit 2026-03-09T16:02:56.049617+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:57 vm09 bash[22983]: audit 2026-03-09T16:02:56.049617+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:57 vm09 bash[22983]: audit 2026-03-09T16:02:56.049820+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:57 vm09 bash[22983]: audit 2026-03-09T16:02:56.049820+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:57 vm01 bash[28152]: cluster 2026-03-09T16:02:56.006410+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T16:02:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:57 vm01 bash[28152]: cluster 2026-03-09T16:02:56.006410+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T16:02:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:57 vm01 bash[28152]: audit 2026-03-09T16:02:56.048916+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:57 vm01 bash[28152]: audit 2026-03-09T16:02:56.048916+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:57 vm01 bash[28152]: audit 2026-03-09T16:02:56.049310+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:57 vm01 bash[28152]: audit 2026-03-09T16:02:56.049310+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:57 vm01 bash[28152]: audit 2026-03-09T16:02:56.049617+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:57 vm01 bash[28152]: audit 2026-03-09T16:02:56.049617+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:57 vm01 bash[28152]: audit 2026-03-09T16:02:56.049820+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:57 vm01 bash[28152]: audit 2026-03-09T16:02:56.049820+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:57.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:57 vm01 bash[20728]: cluster 2026-03-09T16:02:56.006410+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T16:02:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:57 vm01 bash[20728]: cluster 2026-03-09T16:02:56.006410+0000 mon.a (mon.0) 2786 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-09T16:02:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:57 vm01 bash[20728]: audit 2026-03-09T16:02:56.048916+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:57 vm01 bash[20728]: audit 2026-03-09T16:02:56.048916+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:57 vm01 bash[20728]: audit 2026-03-09T16:02:56.049310+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:57 vm01 bash[20728]: audit 2026-03-09T16:02:56.049310+0000 mon.a (mon.0) 2787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:02:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:57 vm01 bash[20728]: audit 2026-03-09T16:02:56.049617+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:57 vm01 bash[20728]: audit 2026-03-09T16:02:56.049617+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:57 vm01 bash[20728]: audit 2026-03-09T16:02:56.049820+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:57 vm01 bash[20728]: audit 2026-03-09T16:02:56.049820+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-67"}]: dispatch 2026-03-09T16:02:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:58 vm09 bash[22983]: audit 2026-03-09T16:02:56.539400+0000 mgr.y (mgr.14520) 363 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:58 vm09 bash[22983]: audit 2026-03-09T16:02:56.539400+0000 mgr.y (mgr.14520) 363 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:58 vm09 bash[22983]: cluster 2026-03-09T16:02:56.768207+0000 mgr.y (mgr.14520) 364 : cluster [DBG] pgmap v583: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:58 vm09 bash[22983]: cluster 2026-03-09T16:02:56.768207+0000 mgr.y (mgr.14520) 364 : cluster [DBG] pgmap v583: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:58 vm09 bash[22983]: cluster 2026-03-09T16:02:57.041830+0000 mon.a (mon.0) 2789 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T16:02:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:58 vm09 bash[22983]: cluster 2026-03-09T16:02:57.041830+0000 mon.a (mon.0) 2789 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T16:02:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:58 vm01 bash[28152]: audit 2026-03-09T16:02:56.539400+0000 mgr.y (mgr.14520) 363 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:58 vm01 bash[28152]: audit 2026-03-09T16:02:56.539400+0000 mgr.y (mgr.14520) 363 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:58 vm01 bash[28152]: cluster 2026-03-09T16:02:56.768207+0000 mgr.y (mgr.14520) 364 : cluster [DBG] pgmap v583: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:58 vm01 bash[28152]: cluster 2026-03-09T16:02:56.768207+0000 mgr.y (mgr.14520) 364 : cluster [DBG] pgmap v583: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:58 vm01 bash[28152]: cluster 2026-03-09T16:02:57.041830+0000 mon.a (mon.0) 2789 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T16:02:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:58 vm01 bash[28152]: cluster 2026-03-09T16:02:57.041830+0000 mon.a (mon.0) 2789 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T16:02:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:58 vm01 bash[20728]: audit 2026-03-09T16:02:56.539400+0000 mgr.y (mgr.14520) 363 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-09T16:02:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:58 vm01 bash[20728]: audit 2026-03-09T16:02:56.539400+0000 mgr.y (mgr.14520) 363 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:02:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:58 vm01 bash[20728]: cluster 2026-03-09T16:02:56.768207+0000 mgr.y (mgr.14520) 364 : cluster [DBG] pgmap v583: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:58.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:58 vm01 bash[20728]: cluster 2026-03-09T16:02:56.768207+0000 mgr.y (mgr.14520) 364 : cluster [DBG] pgmap v583: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:02:58.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:58 vm01 bash[20728]: cluster 2026-03-09T16:02:57.041830+0000 mon.a (mon.0) 2789 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T16:02:58.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:58 vm01 bash[20728]: cluster 2026-03-09T16:02:57.041830+0000 mon.a (mon.0) 2789 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-09T16:02:59.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:59 vm09 bash[22983]: cluster 2026-03-09T16:02:58.038968+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-09T16:02:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:59 vm09 bash[22983]: cluster 2026-03-09T16:02:58.038968+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-09T16:02:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:59 vm09 bash[22983]: audit 2026-03-09T16:02:58.039771+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:59 vm09 bash[22983]: audit 2026-03-09T16:02:58.039771+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:59 vm09 bash[22983]: audit 2026-03-09T16:02:58.043014+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:02:59 vm09 bash[22983]: audit 2026-03-09T16:02:58.043014+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:59 vm01 bash[28152]: cluster 2026-03-09T16:02:58.038968+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:59 vm01 bash[28152]: cluster 2026-03-09T16:02:58.038968+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:59 vm01 bash[28152]: audit 2026-03-09T16:02:58.039771+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:59 vm01 bash[28152]: audit 2026-03-09T16:02:58.039771+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:59 vm01 bash[28152]: audit 2026-03-09T16:02:58.043014+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:02:59 vm01 bash[28152]: audit 2026-03-09T16:02:58.043014+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:59 vm01 bash[20728]: cluster 2026-03-09T16:02:58.038968+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:59 vm01 bash[20728]: cluster 2026-03-09T16:02:58.038968+0000 mon.a (mon.0) 2790 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:59 vm01 bash[20728]: audit 2026-03-09T16:02:58.039771+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:59 vm01 bash[20728]: audit 2026-03-09T16:02:58.039771+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:59 vm01 bash[20728]: audit 2026-03-09T16:02:58.043014+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:02:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:02:59 vm01 bash[20728]: audit 2026-03-09T16:02:58.043014+0000 mon.a (mon.0) 2791 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: cluster 2026-03-09T16:02:58.768595+0000 mgr.y (mgr.14520) 365 : cluster [DBG] pgmap v586: 292 pgs: 9 creating+peering, 23 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: cluster 2026-03-09T16:02:58.768595+0000 mgr.y (mgr.14520) 365 : cluster [DBG] pgmap v586: 292 pgs: 9 creating+peering, 23 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.044160+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.044160+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.058905+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.058905+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: cluster 2026-03-09T16:02:59.071480+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: cluster 2026-03-09T16:02:59.071480+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.103481+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.103481+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.103710+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.103710+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.104237+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.104237+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.104426+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.104426+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.269583+0000 mon.a (mon.0) 2796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:00 vm09 bash[22983]: audit 2026-03-09T16:02:59.269583+0000 mon.a (mon.0) 2796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: cluster 2026-03-09T16:02:58.768595+0000 mgr.y (mgr.14520) 365 : cluster [DBG] pgmap v586: 292 pgs: 9 creating+peering, 23 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: cluster 2026-03-09T16:02:58.768595+0000 mgr.y (mgr.14520) 365 : cluster [DBG] pgmap v586: 292 pgs: 9 creating+peering, 23 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.044160+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.044160+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.058905+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.058905+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: cluster 2026-03-09T16:02:59.071480+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: cluster 2026-03-09T16:02:59.071480+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.103481+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.103481+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.103710+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.103710+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.104237+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.104237+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.104426+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.104426+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.269583+0000 mon.a (mon.0) 2796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:00 vm01 bash[28152]: audit 2026-03-09T16:02:59.269583+0000 mon.a (mon.0) 2796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: cluster 2026-03-09T16:02:58.768595+0000 mgr.y (mgr.14520) 365 : cluster [DBG] pgmap v586: 292 pgs: 9 creating+peering, 23 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: cluster 2026-03-09T16:02:58.768595+0000 mgr.y (mgr.14520) 365 : cluster [DBG] pgmap v586: 292 pgs: 9 creating+peering, 23 unknown, 260 active+clean; 8.3 MiB data, 906 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.044160+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.044160+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.058905+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.058905+0000 mon.c (mon.2) 401 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: cluster 2026-03-09T16:02:59.071480+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: cluster 2026-03-09T16:02:59.071480+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.103481+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.103481+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.103710+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.103710+0000 mon.a (mon.0) 2794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.104237+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.104237+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.104426+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.104426+0000 mon.a (mon.0) 2795 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-69"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.269583+0000 mon.a (mon.0) 2796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:00 vm01 bash[20728]: audit 2026-03-09T16:02:59.269583+0000 mon.a (mon.0) 2796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:01 vm09 bash[22983]: cluster 2026-03-09T16:03:00.088062+0000 mon.a (mon.0) 2797 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T16:03:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:01 vm09 bash[22983]: cluster 2026-03-09T16:03:00.088062+0000 mon.a (mon.0) 2797 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T16:03:01.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:01 vm01 bash[28152]: cluster 2026-03-09T16:03:00.088062+0000 mon.a (mon.0) 2797 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T16:03:01.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:01 vm01 bash[28152]: cluster 2026-03-09T16:03:00.088062+0000 mon.a (mon.0) 2797 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T16:03:01.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:01 vm01 bash[20728]: cluster 2026-03-09T16:03:00.088062+0000 mon.a (mon.0) 2797 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T16:03:01.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:01 vm01 bash[20728]: cluster 2026-03-09T16:03:00.088062+0000 mon.a (mon.0) 2797 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:02 vm01 bash[28152]: cluster 2026-03-09T16:03:00.768929+0000 mgr.y (mgr.14520) 366 : cluster [DBG] pgmap v589: 260 pgs: 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:02 vm01 bash[28152]: cluster 2026-03-09T16:03:00.768929+0000 mgr.y (mgr.14520) 366 : cluster [DBG] pgmap v589: 260 pgs: 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:02 vm01 bash[28152]: cluster 2026-03-09T16:03:01.081098+0000 mon.a (mon.0) 2798 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:02 vm01 bash[28152]: cluster 2026-03-09T16:03:01.081098+0000 mon.a (mon.0) 2798 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:02 vm01 bash[28152]: cluster 2026-03-09T16:03:01.108124+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:02 vm01 bash[28152]: cluster 2026-03-09T16:03:01.108124+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T16:03:02.426 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:02 vm01 bash[28152]: audit 2026-03-09T16:03:01.110095+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:02 vm01 bash[28152]: audit 2026-03-09T16:03:01.110095+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:02 vm01 bash[28152]: audit 2026-03-09T16:03:01.119128+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:02 vm01 bash[28152]: audit 2026-03-09T16:03:01.119128+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:02 vm01 bash[20728]: cluster 2026-03-09T16:03:00.768929+0000 mgr.y (mgr.14520) 366 : cluster [DBG] pgmap v589: 260 pgs: 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:02 vm01 bash[20728]: cluster 2026-03-09T16:03:00.768929+0000 mgr.y (mgr.14520) 366 : cluster [DBG] pgmap v589: 260 pgs: 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:02 vm01 bash[20728]: cluster 2026-03-09T16:03:01.081098+0000 mon.a (mon.0) 2798 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:02 vm01 bash[20728]: cluster 2026-03-09T16:03:01.081098+0000 mon.a (mon.0) 2798 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:02.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:02 vm01 bash[20728]: cluster 2026-03-09T16:03:01.108124+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T16:03:02.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:02 vm01 bash[20728]: cluster 2026-03-09T16:03:01.108124+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T16:03:02.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:02 vm01 bash[20728]: audit 2026-03-09T16:03:01.110095+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:02.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:02 vm01 bash[20728]: audit 2026-03-09T16:03:01.110095+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:02.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:02 vm01 bash[20728]: audit 2026-03-09T16:03:01.119128+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:02.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:02 vm01 bash[20728]: audit 2026-03-09T16:03:01.119128+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:02 vm09 bash[22983]: cluster 2026-03-09T16:03:00.768929+0000 mgr.y (mgr.14520) 366 : cluster [DBG] pgmap v589: 260 pgs: 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:02 vm09 bash[22983]: cluster 2026-03-09T16:03:00.768929+0000 mgr.y (mgr.14520) 366 : cluster [DBG] pgmap v589: 260 pgs: 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:02 vm09 bash[22983]: cluster 2026-03-09T16:03:01.081098+0000 mon.a (mon.0) 2798 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:02 vm09 bash[22983]: cluster 2026-03-09T16:03:01.081098+0000 mon.a (mon.0) 2798 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:02 vm09 bash[22983]: cluster 2026-03-09T16:03:01.108124+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T16:03:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:02 vm09 bash[22983]: cluster 2026-03-09T16:03:01.108124+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-09T16:03:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:02 vm09 bash[22983]: audit 2026-03-09T16:03:01.110095+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:02 vm09 bash[22983]: audit 2026-03-09T16:03:01.110095+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:02 vm09 bash[22983]: audit 2026-03-09T16:03:01.119128+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:02 vm09 bash[22983]: audit 2026-03-09T16:03:01.119128+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:03.129 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:03:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:03:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: audit 2026-03-09T16:03:02.093504+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: audit 2026-03-09T16:03:02.093504+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: cluster 2026-03-09T16:03:02.098732+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: cluster 2026-03-09T16:03:02.098732+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: audit 2026-03-09T16:03:02.103115+0000 mon.c (mon.2) 405 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: audit 2026-03-09T16:03:02.103115+0000 mon.c (mon.2) 405 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: audit 2026-03-09T16:03:02.113749+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: audit 2026-03-09T16:03:02.113749+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: audit 2026-03-09T16:03:02.118520+0000 mon.a (mon.0) 2803 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: audit 2026-03-09T16:03:02.118520+0000 mon.a (mon.0) 2803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: cluster 2026-03-09T16:03:02.769291+0000 mgr.y (mgr.14520) 367 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: cluster 2026-03-09T16:03:02.769291+0000 mgr.y (mgr.14520) 367 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: audit 2026-03-09T16:03:03.096513+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: audit 2026-03-09T16:03:03.096513+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: cluster 2026-03-09T16:03:03.100411+0000 mon.a (mon.0) 2805 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T16:03:03.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:03 vm09 bash[22983]: cluster 2026-03-09T16:03:03.100411+0000 mon.a (mon.0) 2805 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T16:03:03.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: audit 2026-03-09T16:03:02.093504+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:03.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: audit 2026-03-09T16:03:02.093504+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:03.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: cluster 2026-03-09T16:03:02.098732+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T16:03:03.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: cluster 2026-03-09T16:03:02.098732+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T16:03:03.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: audit 2026-03-09T16:03:02.103115+0000 mon.c (mon.2) 405 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:03.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: audit 2026-03-09T16:03:02.103115+0000 mon.c (mon.2) 405 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:03.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: audit 2026-03-09T16:03:02.113749+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: audit 2026-03-09T16:03:02.113749+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: audit 2026-03-09T16:03:02.118520+0000 mon.a (mon.0) 2803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: audit 2026-03-09T16:03:02.118520+0000 mon.a (mon.0) 2803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: cluster 2026-03-09T16:03:02.769291+0000 mgr.y (mgr.14520) 367 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: cluster 2026-03-09T16:03:02.769291+0000 mgr.y (mgr.14520) 367 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: audit 2026-03-09T16:03:03.096513+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: audit 2026-03-09T16:03:03.096513+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: cluster 2026-03-09T16:03:03.100411+0000 mon.a (mon.0) 2805 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:03 vm01 bash[28152]: cluster 2026-03-09T16:03:03.100411+0000 mon.a (mon.0) 2805 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: audit 2026-03-09T16:03:02.093504+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: audit 2026-03-09T16:03:02.093504+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: cluster 2026-03-09T16:03:02.098732+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: cluster 2026-03-09T16:03:02.098732+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: audit 2026-03-09T16:03:02.103115+0000 mon.c (mon.2) 405 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: audit 2026-03-09T16:03:02.103115+0000 mon.c (mon.2) 405 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: audit 2026-03-09T16:03:02.113749+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: audit 2026-03-09T16:03:02.113749+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: audit 2026-03-09T16:03:02.118520+0000 mon.a (mon.0) 2803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: audit 2026-03-09T16:03:02.118520+0000 mon.a (mon.0) 2803 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: cluster 2026-03-09T16:03:02.769291+0000 mgr.y (mgr.14520) 367 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: cluster 2026-03-09T16:03:02.769291+0000 mgr.y (mgr.14520) 367 : cluster [DBG] pgmap v592: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: audit 2026-03-09T16:03:03.096513+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: audit 2026-03-09T16:03:03.096513+0000 mon.a (mon.0) 2804 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: cluster 2026-03-09T16:03:03.100411+0000 mon.a (mon.0) 2805 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T16:03:03.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:03 vm01 bash[20728]: cluster 2026-03-09T16:03:03.100411+0000 mon.a (mon.0) 2805 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-09T16:03:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:05 vm09 bash[22983]: cluster 2026-03-09T16:03:04.149706+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T16:03:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:05 vm09 bash[22983]: cluster 2026-03-09T16:03:04.149706+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T16:03:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:05 vm09 bash[22983]: cluster 2026-03-09T16:03:04.769819+0000 mgr.y (mgr.14520) 368 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 3.0 KiB/s wr, 5 op/s 2026-03-09T16:03:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:05 vm09 bash[22983]: cluster 2026-03-09T16:03:04.769819+0000 mgr.y (mgr.14520) 368 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 3.0 KiB/s wr, 5 op/s 2026-03-09T16:03:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:05 vm01 bash[28152]: cluster 2026-03-09T16:03:04.149706+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T16:03:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:05 vm01 bash[28152]: cluster 2026-03-09T16:03:04.149706+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T16:03:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:05 vm01 bash[28152]: cluster 2026-03-09T16:03:04.769819+0000 mgr.y (mgr.14520) 368 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s 
rd, 3.0 KiB/s wr, 5 op/s 2026-03-09T16:03:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:05 vm01 bash[28152]: cluster 2026-03-09T16:03:04.769819+0000 mgr.y (mgr.14520) 368 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 3.0 KiB/s wr, 5 op/s 2026-03-09T16:03:05.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:05 vm01 bash[20728]: cluster 2026-03-09T16:03:04.149706+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T16:03:05.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:05 vm01 bash[20728]: cluster 2026-03-09T16:03:04.149706+0000 mon.a (mon.0) 2806 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-09T16:03:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:05 vm01 bash[20728]: cluster 2026-03-09T16:03:04.769819+0000 mgr.y (mgr.14520) 368 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 3.0 KiB/s wr, 5 op/s 2026-03-09T16:03:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:05 vm01 bash[20728]: cluster 2026-03-09T16:03:04.769819+0000 mgr.y (mgr.14520) 368 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 3.0 KiB/s wr, 5 op/s 2026-03-09T16:03:06.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:03:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:03:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:07 vm09 bash[22983]: audit 2026-03-09T16:03:06.544457+0000 mgr.y (mgr.14520) 369 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:07 vm09 bash[22983]: audit 2026-03-09T16:03:06.544457+0000 mgr.y (mgr.14520) 369 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:07 vm09 bash[22983]: cluster 2026-03-09T16:03:06.770109+0000 mgr.y (mgr.14520) 370 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 2.1 KiB/s wr, 4 op/s 2026-03-09T16:03:08.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:07 vm09 bash[22983]: cluster 2026-03-09T16:03:06.770109+0000 mgr.y (mgr.14520) 370 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 2.1 KiB/s wr, 4 op/s 2026-03-09T16:03:08.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:07 vm01 bash[28152]: audit 2026-03-09T16:03:06.544457+0000 mgr.y (mgr.14520) 369 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:08.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:07 vm01 bash[28152]: audit 2026-03-09T16:03:06.544457+0000 mgr.y (mgr.14520) 369 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:08.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:07 vm01 bash[28152]: cluster 2026-03-09T16:03:06.770109+0000 mgr.y (mgr.14520) 370 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 2.1 KiB/s wr, 4 op/s 
2026-03-09T16:03:08.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:07 vm01 bash[28152]: cluster 2026-03-09T16:03:06.770109+0000 mgr.y (mgr.14520) 370 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 2.1 KiB/s wr, 4 op/s 2026-03-09T16:03:08.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:07 vm01 bash[20728]: audit 2026-03-09T16:03:06.544457+0000 mgr.y (mgr.14520) 369 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:08.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:07 vm01 bash[20728]: audit 2026-03-09T16:03:06.544457+0000 mgr.y (mgr.14520) 369 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:08.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:07 vm01 bash[20728]: cluster 2026-03-09T16:03:06.770109+0000 mgr.y (mgr.14520) 370 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 2.1 KiB/s wr, 4 op/s 2026-03-09T16:03:08.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:07 vm01 bash[20728]: cluster 2026-03-09T16:03:06.770109+0000 mgr.y (mgr.14520) 370 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 2.1 KiB/s wr, 4 op/s 2026-03-09T16:03:10.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:09 vm09 bash[22983]: cluster 2026-03-09T16:03:08.770526+0000 mgr.y (mgr.14520) 371 : cluster [DBG] pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 920 B/s rd, 1.8 KiB/s wr, 3 op/s 2026-03-09T16:03:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:09 vm09 bash[22983]: cluster 2026-03-09T16:03:08.770526+0000 mgr.y (mgr.14520) 371 : cluster [DBG] pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 920 B/s rd, 1.8 KiB/s wr, 3 op/s 2026-03-09T16:03:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:09 vm01 bash[28152]: cluster 2026-03-09T16:03:08.770526+0000 mgr.y (mgr.14520) 371 : cluster [DBG] pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 920 B/s rd, 1.8 KiB/s wr, 3 op/s 2026-03-09T16:03:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:09 vm01 bash[28152]: cluster 2026-03-09T16:03:08.770526+0000 mgr.y (mgr.14520) 371 : cluster [DBG] pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 920 B/s rd, 1.8 KiB/s wr, 3 op/s 2026-03-09T16:03:10.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:09 vm01 bash[20728]: cluster 2026-03-09T16:03:08.770526+0000 mgr.y (mgr.14520) 371 : cluster [DBG] pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 920 B/s rd, 1.8 KiB/s wr, 3 op/s 2026-03-09T16:03:10.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:09 vm01 bash[20728]: cluster 2026-03-09T16:03:08.770526+0000 mgr.y (mgr.14520) 371 : cluster [DBG] pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 920 B/s rd, 1.8 KiB/s wr, 3 op/s 2026-03-09T16:03:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:11 vm09 bash[22983]: cluster 2026-03-09T16:03:10.771208+0000 mgr.y (mgr.14520) 372 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s 
rd, 2.0 KiB/s wr, 4 op/s 2026-03-09T16:03:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:11 vm09 bash[22983]: cluster 2026-03-09T16:03:10.771208+0000 mgr.y (mgr.14520) 372 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-09T16:03:12.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:11 vm01 bash[28152]: cluster 2026-03-09T16:03:10.771208+0000 mgr.y (mgr.14520) 372 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-09T16:03:12.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:11 vm01 bash[28152]: cluster 2026-03-09T16:03:10.771208+0000 mgr.y (mgr.14520) 372 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-09T16:03:12.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:11 vm01 bash[20728]: cluster 2026-03-09T16:03:10.771208+0000 mgr.y (mgr.14520) 372 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-09T16:03:12.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:11 vm01 bash[20728]: cluster 2026-03-09T16:03:10.771208+0000 mgr.y (mgr.14520) 372 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-09T16:03:13.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:03:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:03:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:03:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:13 vm09 bash[22983]: cluster 2026-03-09T16:03:12.771480+0000 mgr.y (mgr.14520) 373 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T16:03:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:13 vm09 bash[22983]: cluster 2026-03-09T16:03:12.771480+0000 mgr.y (mgr.14520) 373 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T16:03:14.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:13 vm01 bash[28152]: cluster 2026-03-09T16:03:12.771480+0000 mgr.y (mgr.14520) 373 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T16:03:14.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:13 vm01 bash[28152]: cluster 2026-03-09T16:03:12.771480+0000 mgr.y (mgr.14520) 373 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T16:03:14.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:13 vm01 bash[20728]: cluster 2026-03-09T16:03:12.771480+0000 mgr.y (mgr.14520) 373 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T16:03:14.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:13 vm01 bash[20728]: cluster 2026-03-09T16:03:12.771480+0000 mgr.y (mgr.14520) 373 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 
2026-03-09T16:03:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:14 vm09 bash[22983]: audit 2026-03-09T16:03:14.275527+0000 mon.a (mon.0) 2807 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:14 vm09 bash[22983]: audit 2026-03-09T16:03:14.275527+0000 mon.a (mon.0) 2807 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:14 vm01 bash[28152]: audit 2026-03-09T16:03:14.275527+0000 mon.a (mon.0) 2807 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:14 vm01 bash[28152]: audit 2026-03-09T16:03:14.275527+0000 mon.a (mon.0) 2807 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:15.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:14 vm01 bash[20728]: audit 2026-03-09T16:03:14.275527+0000 mon.a (mon.0) 2807 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:15.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:14 vm01 bash[20728]: audit 2026-03-09T16:03:14.275527+0000 mon.a (mon.0) 2807 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:15 vm09 bash[22983]: cluster 2026-03-09T16:03:14.772314+0000 mgr.y (mgr.14520) 374 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 578 B/s wr, 2 op/s 2026-03-09T16:03:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:15 vm09 bash[22983]: cluster 2026-03-09T16:03:14.772314+0000 mgr.y (mgr.14520) 374 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 578 B/s wr, 2 op/s 2026-03-09T16:03:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:15 vm01 bash[28152]: cluster 2026-03-09T16:03:14.772314+0000 mgr.y (mgr.14520) 374 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 578 B/s wr, 2 op/s 2026-03-09T16:03:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:15 vm01 bash[28152]: cluster 2026-03-09T16:03:14.772314+0000 mgr.y (mgr.14520) 374 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 578 B/s wr, 2 op/s 2026-03-09T16:03:16.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:15 vm01 bash[20728]: cluster 2026-03-09T16:03:14.772314+0000 mgr.y (mgr.14520) 374 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 578 B/s wr, 2 op/s 2026-03-09T16:03:16.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:15 vm01 bash[20728]: cluster 2026-03-09T16:03:14.772314+0000 mgr.y (mgr.14520) 374 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 578 B/s 
wr, 2 op/s 2026-03-09T16:03:16.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:03:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:03:18.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:17 vm09 bash[22983]: audit 2026-03-09T16:03:16.553142+0000 mgr.y (mgr.14520) 375 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:17 vm09 bash[22983]: audit 2026-03-09T16:03:16.553142+0000 mgr.y (mgr.14520) 375 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:17 vm09 bash[22983]: cluster 2026-03-09T16:03:16.772701+0000 mgr.y (mgr.14520) 376 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s 2026-03-09T16:03:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:17 vm09 bash[22983]: cluster 2026-03-09T16:03:16.772701+0000 mgr.y (mgr.14520) 376 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s 2026-03-09T16:03:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:17 vm01 bash[28152]: audit 2026-03-09T16:03:16.553142+0000 mgr.y (mgr.14520) 375 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:17 vm01 bash[28152]: audit 2026-03-09T16:03:16.553142+0000 mgr.y (mgr.14520) 375 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:17 vm01 bash[28152]: cluster 2026-03-09T16:03:16.772701+0000 mgr.y (mgr.14520) 376 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s 2026-03-09T16:03:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:17 vm01 bash[28152]: cluster 2026-03-09T16:03:16.772701+0000 mgr.y (mgr.14520) 376 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s 2026-03-09T16:03:18.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:17 vm01 bash[20728]: audit 2026-03-09T16:03:16.553142+0000 mgr.y (mgr.14520) 375 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:18.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:17 vm01 bash[20728]: audit 2026-03-09T16:03:16.553142+0000 mgr.y (mgr.14520) 375 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:18.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:17 vm01 bash[20728]: cluster 2026-03-09T16:03:16.772701+0000 mgr.y (mgr.14520) 376 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s 2026-03-09T16:03:18.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:17 vm01 bash[20728]: cluster 2026-03-09T16:03:16.772701+0000 mgr.y (mgr.14520) 376 : cluster [DBG] pgmap v601: 
292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s 2026-03-09T16:03:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:19 vm09 bash[22983]: cluster 2026-03-09T16:03:18.773289+0000 mgr.y (mgr.14520) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T16:03:20.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:19 vm09 bash[22983]: cluster 2026-03-09T16:03:18.773289+0000 mgr.y (mgr.14520) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T16:03:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:19 vm01 bash[28152]: cluster 2026-03-09T16:03:18.773289+0000 mgr.y (mgr.14520) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T16:03:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:19 vm01 bash[28152]: cluster 2026-03-09T16:03:18.773289+0000 mgr.y (mgr.14520) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T16:03:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:19 vm01 bash[20728]: cluster 2026-03-09T16:03:18.773289+0000 mgr.y (mgr.14520) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T16:03:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:19 vm01 bash[20728]: cluster 2026-03-09T16:03:18.773289+0000 mgr.y (mgr.14520) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 2 op/s 2026-03-09T16:03:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:21 vm09 bash[22983]: cluster 2026-03-09T16:03:20.773946+0000 mgr.y (mgr.14520) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-09T16:03:22.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:21 vm09 bash[22983]: cluster 2026-03-09T16:03:20.773946+0000 mgr.y (mgr.14520) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-09T16:03:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:21 vm01 bash[28152]: cluster 2026-03-09T16:03:20.773946+0000 mgr.y (mgr.14520) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-09T16:03:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:21 vm01 bash[28152]: cluster 2026-03-09T16:03:20.773946+0000 mgr.y (mgr.14520) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-09T16:03:22.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:21 vm01 bash[20728]: cluster 2026-03-09T16:03:20.773946+0000 mgr.y (mgr.14520) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-09T16:03:22.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:21 vm01 bash[20728]: cluster 2026-03-09T16:03:20.773946+0000 mgr.y 
(mgr.14520) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-09T16:03:23.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:03:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:03:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:03:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:23 vm09 bash[22983]: cluster 2026-03-09T16:03:22.774262+0000 mgr.y (mgr.14520) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:23 vm09 bash[22983]: cluster 2026-03-09T16:03:22.774262+0000 mgr.y (mgr.14520) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:23 vm01 bash[20728]: cluster 2026-03-09T16:03:22.774262+0000 mgr.y (mgr.14520) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:23 vm01 bash[20728]: cluster 2026-03-09T16:03:22.774262+0000 mgr.y (mgr.14520) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:23 vm01 bash[28152]: cluster 2026-03-09T16:03:22.774262+0000 mgr.y (mgr.14520) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:23 vm01 bash[28152]: cluster 2026-03-09T16:03:22.774262+0000 mgr.y (mgr.14520) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:24 vm01 bash[28152]: audit 2026-03-09T16:03:24.196077+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:24 vm01 bash[28152]: audit 2026-03-09T16:03:24.196077+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:24 vm01 bash[28152]: audit 2026-03-09T16:03:24.196424+0000 mon.a (mon.0) 2808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:24 vm01 bash[28152]: audit 2026-03-09T16:03:24.196424+0000 mon.a (mon.0) 2808 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:24 vm01 bash[28152]: audit 2026-03-09T16:03:24.197448+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:24 vm01 bash[28152]: audit 2026-03-09T16:03:24.197448+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:24 vm01 bash[28152]: audit 2026-03-09T16:03:24.197736+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:24 vm01 bash[28152]: audit 2026-03-09T16:03:24.197736+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:24 vm01 bash[20728]: audit 2026-03-09T16:03:24.196077+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:24 vm01 bash[20728]: audit 2026-03-09T16:03:24.196077+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:24 vm01 bash[20728]: audit 2026-03-09T16:03:24.196424+0000 mon.a (mon.0) 2808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:24 vm01 bash[20728]: audit 2026-03-09T16:03:24.196424+0000 mon.a (mon.0) 2808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:24 vm01 bash[20728]: audit 2026-03-09T16:03:24.197448+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:24 vm01 bash[20728]: audit 2026-03-09T16:03:24.197448+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:24 vm01 bash[20728]: audit 2026-03-09T16:03:24.197736+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:25.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:24 vm01 bash[20728]: audit 2026-03-09T16:03:24.197736+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:24 vm09 bash[22983]: audit 2026-03-09T16:03:24.196077+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:24 vm09 bash[22983]: audit 2026-03-09T16:03:24.196077+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:24 vm09 bash[22983]: audit 2026-03-09T16:03:24.196424+0000 mon.a (mon.0) 2808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:24 vm09 bash[22983]: audit 2026-03-09T16:03:24.196424+0000 mon.a (mon.0) 2808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:24 vm09 bash[22983]: audit 2026-03-09T16:03:24.197448+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:24 vm09 bash[22983]: audit 2026-03-09T16:03:24.197448+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:24 vm09 bash[22983]: audit 2026-03-09T16:03:24.197736+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:24 vm09 bash[22983]: audit 2026-03-09T16:03:24.197736+0000 mon.a (mon.0) 2809 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-71"}]: dispatch 2026-03-09T16:03:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:25 vm01 bash[20728]: cluster 2026-03-09T16:03:24.775222+0000 mgr.y (mgr.14520) 380 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:25 vm01 bash[20728]: cluster 2026-03-09T16:03:24.775222+0000 mgr.y (mgr.14520) 380 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:25 vm01 bash[20728]: cluster 2026-03-09T16:03:24.904746+0000 mon.a (mon.0) 2810 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T16:03:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:25 vm01 bash[20728]: cluster 2026-03-09T16:03:24.904746+0000 mon.a (mon.0) 2810 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T16:03:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:25 vm01 bash[28152]: cluster 2026-03-09T16:03:24.775222+0000 mgr.y (mgr.14520) 380 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:25 vm01 bash[28152]: cluster 2026-03-09T16:03:24.775222+0000 mgr.y (mgr.14520) 380 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:25 vm01 bash[28152]: cluster 2026-03-09T16:03:24.904746+0000 mon.a (mon.0) 2810 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T16:03:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:25 vm01 bash[28152]: cluster 2026-03-09T16:03:24.904746+0000 mon.a (mon.0) 2810 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T16:03:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:25 vm09 bash[22983]: cluster 2026-03-09T16:03:24.775222+0000 mgr.y (mgr.14520) 380 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:25 vm09 bash[22983]: cluster 2026-03-09T16:03:24.775222+0000 mgr.y (mgr.14520) 380 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:03:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:25 vm09 bash[22983]: cluster 2026-03-09T16:03:24.904746+0000 mon.a (mon.0) 2810 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T16:03:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:25 vm09 bash[22983]: cluster 2026-03-09T16:03:24.904746+0000 mon.a (mon.0) 2810 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-09T16:03:26.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:03:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:03:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:26 vm09 bash[22983]: cluster 2026-03-09T16:03:25.917689+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 
2026-03-09T16:03:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:26 vm09 bash[22983]: cluster 2026-03-09T16:03:25.917689+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-09T16:03:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:26 vm09 bash[22983]: audit 2026-03-09T16:03:25.926145+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:26 vm09 bash[22983]: audit 2026-03-09T16:03:25.926145+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:26 vm09 bash[22983]: audit 2026-03-09T16:03:25.926597+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:26 vm09 bash[22983]: audit 2026-03-09T16:03:25.926597+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:27.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:26 vm01 bash[20728]: cluster 2026-03-09T16:03:25.917689+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-09T16:03:27.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:26 vm01 bash[20728]: cluster 2026-03-09T16:03:25.917689+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-09T16:03:27.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:26 vm01 bash[20728]: audit 2026-03-09T16:03:25.926145+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:27.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:26 vm01 bash[20728]: audit 2026-03-09T16:03:25.926145+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:27.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:26 vm01 bash[20728]: audit 2026-03-09T16:03:25.926597+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:27.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:26 vm01 bash[20728]: audit 2026-03-09T16:03:25.926597+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:26 vm01 bash[28152]: cluster 2026-03-09T16:03:25.917689+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-09T16:03:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:26 vm01 bash[28152]: cluster 2026-03-09T16:03:25.917689+0000 mon.a (mon.0) 2811 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-09T16:03:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:26 vm01 bash[28152]: audit 2026-03-09T16:03:25.926145+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:26 vm01 bash[28152]: audit 2026-03-09T16:03:25.926145+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:26 vm01 bash[28152]: audit 2026-03-09T16:03:25.926597+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:27.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:26 vm01 bash[28152]: audit 2026-03-09T16:03:25.926597+0000 mon.a (mon.0) 2812 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: audit 2026-03-09T16:03:26.563218+0000 mgr.y (mgr.14520) 381 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: audit 2026-03-09T16:03:26.563218+0000 mgr.y (mgr.14520) 381 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: cluster 2026-03-09T16:03:26.775571+0000 mgr.y (mgr.14520) 382 : cluster [DBG] pgmap v608: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: cluster 2026-03-09T16:03:26.775571+0000 mgr.y (mgr.14520) 382 : cluster [DBG] pgmap v608: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: cluster 2026-03-09T16:03:26.910197+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: cluster 2026-03-09T16:03:26.910197+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: audit 2026-03-09T16:03:26.912603+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: audit 2026-03-09T16:03:26.912603+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: cluster 2026-03-09T16:03:26.928233+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: cluster 2026-03-09T16:03:26.928233+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: audit 2026-03-09T16:03:26.931515+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: audit 2026-03-09T16:03:26.931515+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: cluster 2026-03-09T16:03:27.920208+0000 mon.a (mon.0) 2816 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T16:03:28.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:27 vm09 bash[22983]: cluster 2026-03-09T16:03:27.920208+0000 mon.a (mon.0) 2816 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T16:03:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: audit 2026-03-09T16:03:26.563218+0000 mgr.y (mgr.14520) 381 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: audit 2026-03-09T16:03:26.563218+0000 mgr.y (mgr.14520) 381 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: cluster 2026-03-09T16:03:26.775571+0000 mgr.y (mgr.14520) 382 : cluster [DBG] pgmap v608: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: cluster 2026-03-09T16:03:26.775571+0000 mgr.y (mgr.14520) 382 : cluster [DBG] pgmap v608: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: cluster 2026-03-09T16:03:26.910197+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: cluster 2026-03-09T16:03:26.910197+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: audit 2026-03-09T16:03:26.912603+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: audit 2026-03-09T16:03:26.912603+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: cluster 2026-03-09T16:03:26.928233+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-09T16:03:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: cluster 2026-03-09T16:03:26.928233+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: audit 2026-03-09T16:03:26.931515+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: audit 2026-03-09T16:03:26.931515+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: cluster 2026-03-09T16:03:27.920208+0000 mon.a (mon.0) 2816 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:27 vm01 bash[20728]: cluster 2026-03-09T16:03:27.920208+0000 mon.a (mon.0) 2816 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: audit 2026-03-09T16:03:26.563218+0000 mgr.y (mgr.14520) 381 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: audit 2026-03-09T16:03:26.563218+0000 mgr.y (mgr.14520) 381 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: cluster 2026-03-09T16:03:26.775571+0000 mgr.y (mgr.14520) 382 : cluster [DBG] pgmap v608: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: cluster 2026-03-09T16:03:26.775571+0000 mgr.y (mgr.14520) 382 : cluster [DBG] pgmap v608: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: cluster 2026-03-09T16:03:26.910197+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: cluster 2026-03-09T16:03:26.910197+0000 mon.a (mon.0) 2813 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: audit 2026-03-09T16:03:26.912603+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: audit 2026-03-09T16:03:26.912603+0000 mon.a (mon.0) 2814 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: cluster 2026-03-09T16:03:26.928233+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: cluster 2026-03-09T16:03:26.928233+0000 mon.a (mon.0) 2815 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: audit 2026-03-09T16:03:26.931515+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: audit 2026-03-09T16:03:26.931515+0000 mon.c (mon.2) 410 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: cluster 2026-03-09T16:03:27.920208+0000 mon.a (mon.0) 2816 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T16:03:28.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:27 vm01 bash[28152]: cluster 2026-03-09T16:03:27.920208+0000 mon.a (mon.0) 2816 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:29 vm01 bash[28152]: cluster 2026-03-09T16:03:28.776187+0000 mgr.y (mgr.14520) 383 : cluster [DBG] pgmap v611: 292 pgs: 26 unknown, 266 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:29 vm01 bash[28152]: cluster 2026-03-09T16:03:28.776187+0000 mgr.y (mgr.14520) 383 : cluster [DBG] pgmap v611: 292 pgs: 26 unknown, 266 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:29 vm01 bash[28152]: cluster 2026-03-09T16:03:28.923658+0000 mon.a (mon.0) 2817 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:29 vm01 bash[28152]: cluster 2026-03-09T16:03:28.923658+0000 mon.a (mon.0) 2817 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:29 vm01 bash[28152]: audit 2026-03-09T16:03:29.281399+0000 mon.a (mon.0) 2818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:29 vm01 bash[28152]: audit 2026-03-09T16:03:29.281399+0000 mon.a (mon.0) 2818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:29 vm01 bash[20728]: cluster 2026-03-09T16:03:28.776187+0000 mgr.y (mgr.14520) 383 : cluster [DBG] pgmap v611: 292 pgs: 26 unknown, 266 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:29 vm01 bash[20728]: cluster 2026-03-09T16:03:28.776187+0000 mgr.y (mgr.14520) 
383 : cluster [DBG] pgmap v611: 292 pgs: 26 unknown, 266 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:29 vm01 bash[20728]: cluster 2026-03-09T16:03:28.923658+0000 mon.a (mon.0) 2817 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:29 vm01 bash[20728]: cluster 2026-03-09T16:03:28.923658+0000 mon.a (mon.0) 2817 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:29 vm01 bash[20728]: audit 2026-03-09T16:03:29.281399+0000 mon.a (mon.0) 2818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:29 vm01 bash[20728]: audit 2026-03-09T16:03:29.281399+0000 mon.a (mon.0) 2818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:29 vm09 bash[22983]: cluster 2026-03-09T16:03:28.776187+0000 mgr.y (mgr.14520) 383 : cluster [DBG] pgmap v611: 292 pgs: 26 unknown, 266 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:29 vm09 bash[22983]: cluster 2026-03-09T16:03:28.776187+0000 mgr.y (mgr.14520) 383 : cluster [DBG] pgmap v611: 292 pgs: 26 unknown, 266 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:29 vm09 bash[22983]: cluster 2026-03-09T16:03:28.923658+0000 mon.a (mon.0) 2817 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T16:03:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:29 vm09 bash[22983]: cluster 2026-03-09T16:03:28.923658+0000 mon.a (mon.0) 2817 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-09T16:03:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:29 vm09 bash[22983]: audit 2026-03-09T16:03:29.281399+0000 mon.a (mon.0) 2818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:29 vm09 bash[22983]: audit 2026-03-09T16:03:29.281399+0000 mon.a (mon.0) 2818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:31 vm09 bash[22983]: cluster 2026-03-09T16:03:30.776850+0000 mgr.y (mgr.14520) 384 : cluster [DBG] pgmap v613: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-09T16:03:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:31 vm09 bash[22983]: cluster 2026-03-09T16:03:30.776850+0000 mgr.y (mgr.14520) 384 : cluster [DBG] pgmap v613: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-09T16:03:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:31 vm01 bash[28152]: cluster 2026-03-09T16:03:30.776850+0000 mgr.y (mgr.14520) 384 : cluster [DBG] pgmap v613: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 
KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-09T16:03:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:31 vm01 bash[28152]: cluster 2026-03-09T16:03:30.776850+0000 mgr.y (mgr.14520) 384 : cluster [DBG] pgmap v613: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-09T16:03:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:31 vm01 bash[20728]: cluster 2026-03-09T16:03:30.776850+0000 mgr.y (mgr.14520) 384 : cluster [DBG] pgmap v613: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-09T16:03:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:31 vm01 bash[20728]: cluster 2026-03-09T16:03:30.776850+0000 mgr.y (mgr.14520) 384 : cluster [DBG] pgmap v613: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-09T16:03:33.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:03:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:03:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:03:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:33 vm09 bash[22983]: cluster 2026-03-09T16:03:32.777258+0000 mgr.y (mgr.14520) 385 : cluster [DBG] pgmap v614: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:03:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:33 vm09 bash[22983]: cluster 2026-03-09T16:03:32.777258+0000 mgr.y (mgr.14520) 385 : cluster [DBG] pgmap v614: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:03:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:33 vm09 bash[22983]: cluster 2026-03-09T16:03:33.865681+0000 mon.a (mon.0) 2819 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:33 vm09 bash[22983]: cluster 2026-03-09T16:03:33.865681+0000 mon.a (mon.0) 2819 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:33 vm01 bash[28152]: cluster 2026-03-09T16:03:32.777258+0000 mgr.y (mgr.14520) 385 : cluster [DBG] pgmap v614: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:03:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:33 vm01 bash[28152]: cluster 2026-03-09T16:03:32.777258+0000 mgr.y (mgr.14520) 385 : cluster [DBG] pgmap v614: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:03:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:33 vm01 bash[28152]: cluster 2026-03-09T16:03:33.865681+0000 mon.a (mon.0) 2819 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:33 vm01 bash[28152]: cluster 2026-03-09T16:03:33.865681+0000 mon.a (mon.0) 2819 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:33 vm01 bash[20728]: cluster 2026-03-09T16:03:32.777258+0000 mgr.y (mgr.14520) 385 : 
cluster [DBG] pgmap v614: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:03:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:33 vm01 bash[20728]: cluster 2026-03-09T16:03:32.777258+0000 mgr.y (mgr.14520) 385 : cluster [DBG] pgmap v614: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:03:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:33 vm01 bash[20728]: cluster 2026-03-09T16:03:33.865681+0000 mon.a (mon.0) 2819 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:33 vm01 bash[20728]: cluster 2026-03-09T16:03:33.865681+0000 mon.a (mon.0) 2819 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:34 vm09 bash[22983]: audit 2026-03-09T16:03:34.438379+0000 mon.a (mon.0) 2820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:03:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:34 vm09 bash[22983]: audit 2026-03-09T16:03:34.438379+0000 mon.a (mon.0) 2820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:03:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:34 vm09 bash[22983]: audit 2026-03-09T16:03:34.755879+0000 mon.a (mon.0) 2821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:03:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:34 vm09 bash[22983]: audit 2026-03-09T16:03:34.755879+0000 mon.a (mon.0) 2821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:03:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:34 vm09 bash[22983]: audit 2026-03-09T16:03:34.756361+0000 mon.a (mon.0) 2822 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:03:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:34 vm09 bash[22983]: audit 2026-03-09T16:03:34.756361+0000 mon.a (mon.0) 2822 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:03:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:34 vm09 bash[22983]: audit 2026-03-09T16:03:34.761571+0000 mon.a (mon.0) 2823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:03:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:34 vm09 bash[22983]: audit 2026-03-09T16:03:34.761571+0000 mon.a (mon.0) 2823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:34 vm01 bash[28152]: audit 2026-03-09T16:03:34.438379+0000 mon.a (mon.0) 2820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:34 vm01 bash[28152]: 
audit 2026-03-09T16:03:34.438379+0000 mon.a (mon.0) 2820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:34 vm01 bash[28152]: audit 2026-03-09T16:03:34.755879+0000 mon.a (mon.0) 2821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:34 vm01 bash[28152]: audit 2026-03-09T16:03:34.755879+0000 mon.a (mon.0) 2821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:34 vm01 bash[28152]: audit 2026-03-09T16:03:34.756361+0000 mon.a (mon.0) 2822 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:34 vm01 bash[28152]: audit 2026-03-09T16:03:34.756361+0000 mon.a (mon.0) 2822 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:34 vm01 bash[28152]: audit 2026-03-09T16:03:34.761571+0000 mon.a (mon.0) 2823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:34 vm01 bash[28152]: audit 2026-03-09T16:03:34.761571+0000 mon.a (mon.0) 2823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:34 vm01 bash[20728]: audit 2026-03-09T16:03:34.438379+0000 mon.a (mon.0) 2820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:34 vm01 bash[20728]: audit 2026-03-09T16:03:34.438379+0000 mon.a (mon.0) 2820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:34 vm01 bash[20728]: audit 2026-03-09T16:03:34.755879+0000 mon.a (mon.0) 2821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:34 vm01 bash[20728]: audit 2026-03-09T16:03:34.755879+0000 mon.a (mon.0) 2821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:34 vm01 bash[20728]: audit 2026-03-09T16:03:34.756361+0000 mon.a (mon.0) 2822 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:34 vm01 bash[20728]: audit 2026-03-09T16:03:34.756361+0000 mon.a (mon.0) 2822 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:34 vm01 bash[20728]: audit 2026-03-09T16:03:34.761571+0000 mon.a (mon.0) 2823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:03:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:34 vm01 bash[20728]: audit 2026-03-09T16:03:34.761571+0000 mon.a (mon.0) 2823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:03:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:35 vm09 bash[22983]: cluster 2026-03-09T16:03:34.778051+0000 mgr.y (mgr.14520) 386 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 911 B/s wr, 4 op/s 2026-03-09T16:03:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:35 vm09 bash[22983]: cluster 2026-03-09T16:03:34.778051+0000 mgr.y (mgr.14520) 386 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 911 B/s wr, 4 op/s 2026-03-09T16:03:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:35 vm01 bash[28152]: cluster 2026-03-09T16:03:34.778051+0000 mgr.y (mgr.14520) 386 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 911 B/s wr, 4 op/s 2026-03-09T16:03:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:35 vm01 bash[28152]: cluster 2026-03-09T16:03:34.778051+0000 mgr.y (mgr.14520) 386 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 911 B/s wr, 4 op/s 2026-03-09T16:03:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:35 vm01 bash[20728]: cluster 2026-03-09T16:03:34.778051+0000 mgr.y (mgr.14520) 386 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 911 B/s wr, 4 op/s 2026-03-09T16:03:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:35 vm01 bash[20728]: cluster 2026-03-09T16:03:34.778051+0000 mgr.y (mgr.14520) 386 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 911 B/s wr, 4 op/s 2026-03-09T16:03:36.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:03:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:03:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:37 vm09 bash[22983]: audit 2026-03-09T16:03:36.571263+0000 mgr.y (mgr.14520) 387 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:37 vm09 bash[22983]: audit 2026-03-09T16:03:36.571263+0000 mgr.y (mgr.14520) 387 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:37 vm09 bash[22983]: cluster 2026-03-09T16:03:36.778493+0000 mgr.y (mgr.14520) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 809 B/s wr, 3 op/s 2026-03-09T16:03:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:37 vm09 bash[22983]: cluster 2026-03-09T16:03:36.778493+0000 mgr.y (mgr.14520) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB 
avail; 1.7 KiB/s rd, 809 B/s wr, 3 op/s 2026-03-09T16:03:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:37 vm01 bash[28152]: audit 2026-03-09T16:03:36.571263+0000 mgr.y (mgr.14520) 387 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:37 vm01 bash[28152]: audit 2026-03-09T16:03:36.571263+0000 mgr.y (mgr.14520) 387 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:37 vm01 bash[28152]: cluster 2026-03-09T16:03:36.778493+0000 mgr.y (mgr.14520) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 809 B/s wr, 3 op/s 2026-03-09T16:03:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:37 vm01 bash[28152]: cluster 2026-03-09T16:03:36.778493+0000 mgr.y (mgr.14520) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 809 B/s wr, 3 op/s 2026-03-09T16:03:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:37 vm01 bash[20728]: audit 2026-03-09T16:03:36.571263+0000 mgr.y (mgr.14520) 387 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:37 vm01 bash[20728]: audit 2026-03-09T16:03:36.571263+0000 mgr.y (mgr.14520) 387 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:37 vm01 bash[20728]: cluster 2026-03-09T16:03:36.778493+0000 mgr.y (mgr.14520) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 809 B/s wr, 3 op/s 2026-03-09T16:03:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:37 vm01 bash[20728]: cluster 2026-03-09T16:03:36.778493+0000 mgr.y (mgr.14520) 388 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 809 B/s wr, 3 op/s 2026-03-09T16:03:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:39 vm09 bash[22983]: cluster 2026-03-09T16:03:38.778948+0000 mgr.y (mgr.14520) 389 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 716 B/s wr, 3 op/s 2026-03-09T16:03:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:39 vm09 bash[22983]: cluster 2026-03-09T16:03:38.778948+0000 mgr.y (mgr.14520) 389 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 716 B/s wr, 3 op/s 2026-03-09T16:03:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:39 vm09 bash[22983]: audit 2026-03-09T16:03:38.984925+0000 mon.c (mon.2) 411 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:39 vm09 bash[22983]: audit 2026-03-09T16:03:38.984925+0000 mon.c (mon.2) 411 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:39 vm09 bash[22983]: audit 2026-03-09T16:03:38.985114+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:39 vm09 bash[22983]: audit 2026-03-09T16:03:38.985114+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:39 vm09 bash[22983]: audit 2026-03-09T16:03:38.985481+0000 mon.c (mon.2) 412 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:39 vm09 bash[22983]: audit 2026-03-09T16:03:38.985481+0000 mon.c (mon.2) 412 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:39 vm09 bash[22983]: audit 2026-03-09T16:03:38.985645+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:39 vm09 bash[22983]: audit 2026-03-09T16:03:38.985645+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:39 vm01 bash[28152]: cluster 2026-03-09T16:03:38.778948+0000 mgr.y (mgr.14520) 389 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 716 B/s wr, 3 op/s 2026-03-09T16:03:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:39 vm01 bash[28152]: cluster 2026-03-09T16:03:38.778948+0000 mgr.y (mgr.14520) 389 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 716 B/s wr, 3 op/s 2026-03-09T16:03:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:39 vm01 bash[28152]: audit 2026-03-09T16:03:38.984925+0000 mon.c (mon.2) 411 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:39 vm01 bash[28152]: audit 2026-03-09T16:03:38.984925+0000 mon.c (mon.2) 411 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:39 vm01 bash[28152]: audit 2026-03-09T16:03:38.985114+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:39 vm01 bash[28152]: audit 2026-03-09T16:03:38.985114+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:39 vm01 bash[28152]: audit 2026-03-09T16:03:38.985481+0000 mon.c (mon.2) 412 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:39 vm01 bash[28152]: audit 2026-03-09T16:03:38.985481+0000 mon.c (mon.2) 412 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:39 vm01 bash[28152]: audit 2026-03-09T16:03:38.985645+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:39 vm01 bash[28152]: audit 2026-03-09T16:03:38.985645+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:39 vm01 bash[20728]: cluster 2026-03-09T16:03:38.778948+0000 mgr.y (mgr.14520) 389 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 716 B/s wr, 3 op/s 2026-03-09T16:03:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:39 vm01 bash[20728]: cluster 2026-03-09T16:03:38.778948+0000 mgr.y (mgr.14520) 389 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 716 B/s wr, 3 op/s 2026-03-09T16:03:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:39 vm01 bash[20728]: audit 2026-03-09T16:03:38.984925+0000 mon.c (mon.2) 411 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:39 vm01 bash[20728]: audit 2026-03-09T16:03:38.984925+0000 mon.c (mon.2) 411 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:39 vm01 bash[20728]: audit 2026-03-09T16:03:38.985114+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:39 vm01 bash[20728]: audit 2026-03-09T16:03:38.985114+0000 mon.a (mon.0) 2824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:39 vm01 bash[20728]: audit 2026-03-09T16:03:38.985481+0000 mon.c (mon.2) 412 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:39 vm01 bash[20728]: audit 2026-03-09T16:03:38.985481+0000 mon.c (mon.2) 412 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:39 vm01 bash[20728]: audit 2026-03-09T16:03:38.985645+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:40.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:39 vm01 bash[20728]: audit 2026-03-09T16:03:38.985645+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-73"}]: dispatch 2026-03-09T16:03:41.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:40 vm09 bash[22983]: cluster 2026-03-09T16:03:39.979999+0000 mon.a (mon.0) 2826 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T16:03:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:40 vm09 bash[22983]: cluster 2026-03-09T16:03:39.979999+0000 mon.a (mon.0) 2826 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T16:03:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:40 vm01 bash[28152]: cluster 2026-03-09T16:03:39.979999+0000 mon.a (mon.0) 2826 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T16:03:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:40 vm01 bash[28152]: cluster 2026-03-09T16:03:39.979999+0000 mon.a (mon.0) 2826 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T16:03:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:40 vm01 bash[20728]: cluster 2026-03-09T16:03:39.979999+0000 mon.a (mon.0) 2826 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T16:03:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:40 vm01 bash[20728]: cluster 2026-03-09T16:03:39.979999+0000 mon.a (mon.0) 2826 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-09T16:03:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:42 vm09 bash[22983]: cluster 2026-03-09T16:03:40.779298+0000 mgr.y (mgr.14520) 390 : cluster [DBG] pgmap v619: 260 pgs: 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 818 B/s rd, 0 op/s 2026-03-09T16:03:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:42 vm09 bash[22983]: cluster 2026-03-09T16:03:40.779298+0000 mgr.y (mgr.14520) 390 : cluster [DBG] pgmap v619: 260 pgs: 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 818 B/s rd, 0 op/s 2026-03-09T16:03:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:42 vm09 bash[22983]: cluster 2026-03-09T16:03:41.006087+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T16:03:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:42 vm09 bash[22983]: cluster 2026-03-09T16:03:41.006087+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T16:03:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:42 vm09 bash[22983]: audit 2026-03-09T16:03:41.011792+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:42 vm09 bash[22983]: audit 2026-03-09T16:03:41.011792+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:42 vm09 bash[22983]: audit 2026-03-09T16:03:41.015428+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:42 vm09 bash[22983]: audit 2026-03-09T16:03:41.015428+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:42 vm01 bash[28152]: cluster 2026-03-09T16:03:40.779298+0000 mgr.y (mgr.14520) 390 : cluster [DBG] pgmap v619: 260 pgs: 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 818 B/s rd, 0 op/s 2026-03-09T16:03:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:42 vm01 bash[28152]: cluster 2026-03-09T16:03:40.779298+0000 mgr.y (mgr.14520) 390 : cluster [DBG] pgmap v619: 260 pgs: 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 818 B/s rd, 0 op/s 2026-03-09T16:03:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:42 vm01 bash[28152]: cluster 2026-03-09T16:03:41.006087+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T16:03:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:42 vm01 bash[28152]: cluster 2026-03-09T16:03:41.006087+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T16:03:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:42 vm01 bash[28152]: audit 2026-03-09T16:03:41.011792+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:42 vm01 bash[28152]: audit 2026-03-09T16:03:41.011792+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:42 vm01 bash[28152]: audit 2026-03-09T16:03:41.015428+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:42 vm01 bash[28152]: audit 2026-03-09T16:03:41.015428+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:42 vm01 bash[20728]: cluster 2026-03-09T16:03:40.779298+0000 mgr.y (mgr.14520) 390 : cluster [DBG] pgmap v619: 260 pgs: 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 818 B/s rd, 0 op/s 2026-03-09T16:03:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:42 vm01 bash[20728]: cluster 2026-03-09T16:03:40.779298+0000 mgr.y (mgr.14520) 390 : cluster [DBG] pgmap v619: 260 pgs: 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 818 B/s rd, 0 op/s 2026-03-09T16:03:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:42 vm01 bash[20728]: cluster 2026-03-09T16:03:41.006087+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T16:03:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:42 vm01 bash[20728]: cluster 2026-03-09T16:03:41.006087+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-09T16:03:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:42 vm01 bash[20728]: audit 2026-03-09T16:03:41.011792+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:42 vm01 bash[20728]: audit 2026-03-09T16:03:41.011792+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:42 vm01 bash[20728]: audit 2026-03-09T16:03:41.015428+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:42 vm01 bash[20728]: audit 2026-03-09T16:03:41.015428+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:43 vm01 bash[28152]: audit 2026-03-09T16:03:41.987790+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:43 vm01 bash[28152]: audit 2026-03-09T16:03:41.987790+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:43 vm01 bash[28152]: cluster 2026-03-09T16:03:41.990633+0000 mon.a (mon.0) 2830 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T16:03:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:43 vm01 bash[28152]: cluster 2026-03-09T16:03:41.990633+0000 mon.a (mon.0) 2830 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T16:03:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:43 vm01 bash[28152]: audit 2026-03-09T16:03:42.021555+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:43 vm01 bash[28152]: audit 2026-03-09T16:03:42.021555+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:43.177 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:03:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:03:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:03:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:43 vm01 bash[20728]: audit 2026-03-09T16:03:41.987790+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:43 vm01 bash[20728]: audit 2026-03-09T16:03:41.987790+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:43 vm01 bash[20728]: cluster 2026-03-09T16:03:41.990633+0000 mon.a (mon.0) 2830 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T16:03:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:43 vm01 bash[20728]: cluster 2026-03-09T16:03:41.990633+0000 mon.a (mon.0) 2830 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T16:03:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:43 vm01 bash[20728]: audit 2026-03-09T16:03:42.021555+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:43 vm01 bash[20728]: audit 2026-03-09T16:03:42.021555+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:43 vm09 bash[22983]: audit 2026-03-09T16:03:41.987790+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:43 vm09 bash[22983]: audit 2026-03-09T16:03:41.987790+0000 mon.a (mon.0) 2829 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:43 vm09 bash[22983]: cluster 2026-03-09T16:03:41.990633+0000 mon.a (mon.0) 2830 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T16:03:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:43 vm09 bash[22983]: cluster 2026-03-09T16:03:41.990633+0000 mon.a (mon.0) 2830 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in 2026-03-09T16:03:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:43 vm09 bash[22983]: audit 2026-03-09T16:03:42.021555+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:43.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:43 vm09 bash[22983]: audit 2026-03-09T16:03:42.021555+0000 mon.c (mon.2) 414 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:44 vm09 bash[22983]: cluster 2026-03-09T16:03:42.779659+0000 mgr.y (mgr.14520) 391 : cluster [DBG] pgmap v622: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:44 vm09 bash[22983]: cluster 2026-03-09T16:03:42.779659+0000 mgr.y (mgr.14520) 391 : cluster [DBG] pgmap v622: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:44 vm09 bash[22983]: cluster 2026-03-09T16:03:43.023164+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T16:03:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:44 vm09 bash[22983]: cluster 2026-03-09T16:03:43.023164+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T16:03:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:44 vm01 bash[28152]: cluster 2026-03-09T16:03:42.779659+0000 mgr.y (mgr.14520) 391 : cluster [DBG] pgmap v622: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:44 vm01 bash[28152]: cluster 2026-03-09T16:03:42.779659+0000 mgr.y (mgr.14520) 391 : cluster [DBG] pgmap v622: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:44 vm01 bash[28152]: cluster 2026-03-09T16:03:43.023164+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T16:03:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:44 vm01 bash[28152]: cluster 2026-03-09T16:03:43.023164+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T16:03:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:44 vm01 bash[20728]: cluster 
2026-03-09T16:03:42.779659+0000 mgr.y (mgr.14520) 391 : cluster [DBG] pgmap v622: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:44 vm01 bash[20728]: cluster 2026-03-09T16:03:42.779659+0000 mgr.y (mgr.14520) 391 : cluster [DBG] pgmap v622: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:44 vm01 bash[20728]: cluster 2026-03-09T16:03:43.023164+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T16:03:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:44 vm01 bash[20728]: cluster 2026-03-09T16:03:43.023164+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: cluster 2026-03-09T16:03:44.019449+0000 mon.a (mon.0) 2832 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: cluster 2026-03-09T16:03:44.019449+0000 mon.a (mon.0) 2832 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: audit 2026-03-09T16:03:44.090423+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: audit 2026-03-09T16:03:44.090423+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: audit 2026-03-09T16:03:44.091026+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: audit 2026-03-09T16:03:44.091026+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: audit 2026-03-09T16:03:44.091677+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: audit 2026-03-09T16:03:44.091677+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: audit 2026-03-09T16:03:44.092123+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: audit 2026-03-09T16:03:44.092123+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: audit 2026-03-09T16:03:44.287545+0000 mon.a (mon.0) 2835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:45 vm09 bash[22983]: audit 2026-03-09T16:03:44.287545+0000 mon.a (mon.0) 2835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: cluster 2026-03-09T16:03:44.019449+0000 mon.a (mon.0) 2832 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: cluster 2026-03-09T16:03:44.019449+0000 mon.a (mon.0) 2832 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: audit 2026-03-09T16:03:44.090423+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: audit 2026-03-09T16:03:44.090423+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: audit 2026-03-09T16:03:44.091026+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: audit 2026-03-09T16:03:44.091026+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: audit 2026-03-09T16:03:44.091677+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: audit 2026-03-09T16:03:44.091677+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: audit 2026-03-09T16:03:44.092123+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: audit 2026-03-09T16:03:44.092123+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: audit 2026-03-09T16:03:44.287545+0000 mon.a (mon.0) 2835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:45 vm01 bash[28152]: audit 2026-03-09T16:03:44.287545+0000 mon.a (mon.0) 2835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: cluster 2026-03-09T16:03:44.019449+0000 mon.a (mon.0) 2832 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T16:03:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: cluster 2026-03-09T16:03:44.019449+0000 mon.a (mon.0) 2832 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in 2026-03-09T16:03:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: audit 2026-03-09T16:03:44.090423+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: audit 2026-03-09T16:03:44.090423+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: audit 2026-03-09T16:03:44.091026+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: audit 2026-03-09T16:03:44.091026+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: audit 2026-03-09T16:03:44.091677+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: audit 2026-03-09T16:03:44.091677+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: audit 2026-03-09T16:03:44.092123+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: audit 2026-03-09T16:03:44.092123+0000 mon.a (mon.0) 2834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-75"}]: dispatch 2026-03-09T16:03:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: audit 2026-03-09T16:03:44.287545+0000 mon.a (mon.0) 2835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:45 vm01 bash[20728]: audit 2026-03-09T16:03:44.287545+0000 mon.a (mon.0) 2835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:03:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:46 vm09 bash[22983]: cluster 2026-03-09T16:03:44.780490+0000 mgr.y (mgr.14520) 392 : cluster [DBG] pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T16:03:46.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:46 vm09 bash[22983]: cluster 2026-03-09T16:03:44.780490+0000 mgr.y (mgr.14520) 392 : cluster [DBG] pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T16:03:46.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:46 vm09 bash[22983]: cluster 2026-03-09T16:03:45.054431+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T16:03:46.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:46 vm09 bash[22983]: cluster 2026-03-09T16:03:45.054431+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T16:03:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:46 vm01 bash[28152]: cluster 2026-03-09T16:03:44.780490+0000 mgr.y (mgr.14520) 392 : cluster [DBG] pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T16:03:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:46 vm01 bash[28152]: cluster 2026-03-09T16:03:44.780490+0000 mgr.y (mgr.14520) 392 : cluster [DBG] pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T16:03:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:46 vm01 bash[28152]: cluster 
2026-03-09T16:03:45.054431+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T16:03:46.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:46 vm01 bash[28152]: cluster 2026-03-09T16:03:45.054431+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T16:03:46.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:46 vm01 bash[20728]: cluster 2026-03-09T16:03:44.780490+0000 mgr.y (mgr.14520) 392 : cluster [DBG] pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T16:03:46.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:46 vm01 bash[20728]: cluster 2026-03-09T16:03:44.780490+0000 mgr.y (mgr.14520) 392 : cluster [DBG] pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s 2026-03-09T16:03:46.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:46 vm01 bash[20728]: cluster 2026-03-09T16:03:45.054431+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T16:03:46.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:46 vm01 bash[20728]: cluster 2026-03-09T16:03:45.054431+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in 2026-03-09T16:03:46.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:03:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:03:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:47 vm09 bash[22983]: cluster 2026-03-09T16:03:46.064060+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T16:03:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:47 vm09 bash[22983]: cluster 2026-03-09T16:03:46.064060+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T16:03:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:47 vm09 bash[22983]: audit 2026-03-09T16:03:46.070114+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:47 vm09 bash[22983]: audit 2026-03-09T16:03:46.070114+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:47 vm09 bash[22983]: audit 2026-03-09T16:03:46.073181+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:47 vm09 bash[22983]: audit 2026-03-09T16:03:46.073181+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:47 vm01 bash[28152]: cluster 2026-03-09T16:03:46.064060+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:47 vm01 bash[28152]: cluster 2026-03-09T16:03:46.064060+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:47 vm01 bash[28152]: audit 2026-03-09T16:03:46.070114+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:47 vm01 bash[28152]: audit 2026-03-09T16:03:46.070114+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:47 vm01 bash[28152]: audit 2026-03-09T16:03:46.073181+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:47 vm01 bash[28152]: audit 2026-03-09T16:03:46.073181+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:47 vm01 bash[20728]: cluster 2026-03-09T16:03:46.064060+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:47 vm01 bash[20728]: cluster 2026-03-09T16:03:46.064060+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:47 vm01 bash[20728]: audit 2026-03-09T16:03:46.070114+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:47 vm01 bash[20728]: audit 2026-03-09T16:03:46.070114+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:47 vm01 bash[20728]: audit 2026-03-09T16:03:46.073181+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:47 vm01 bash[20728]: audit 2026-03-09T16:03:46.073181+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: audit 2026-03-09T16:03:46.582010+0000 mgr.y (mgr.14520) 393 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: audit 2026-03-09T16:03:46.582010+0000 mgr.y (mgr.14520) 393 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: cluster 2026-03-09T16:03:46.780797+0000 mgr.y (mgr.14520) 394 : cluster [DBG] pgmap v628: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: cluster 2026-03-09T16:03:46.780797+0000 mgr.y (mgr.14520) 394 : cluster [DBG] pgmap v628: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: cluster 2026-03-09T16:03:47.061700+0000 mon.a (mon.0) 2839 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: cluster 2026-03-09T16:03:47.061700+0000 mon.a (mon.0) 2839 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: audit 2026-03-09T16:03:47.064322+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: audit 2026-03-09T16:03:47.064322+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: cluster 2026-03-09T16:03:47.069875+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: cluster 2026-03-09T16:03:47.069875+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: audit 2026-03-09T16:03:47.073938+0000 mon.c (mon.2) 418 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: audit 2026-03-09T16:03:47.073938+0000 mon.c (mon.2) 418 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: cluster 2026-03-09T16:03:48.075118+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T16:03:48.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:48 vm09 bash[22983]: cluster 2026-03-09T16:03:48.075118+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: audit 2026-03-09T16:03:46.582010+0000 mgr.y (mgr.14520) 393 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: audit 2026-03-09T16:03:46.582010+0000 mgr.y (mgr.14520) 393 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: cluster 2026-03-09T16:03:46.780797+0000 mgr.y (mgr.14520) 394 : cluster [DBG] pgmap v628: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: cluster 2026-03-09T16:03:46.780797+0000 mgr.y (mgr.14520) 394 : cluster [DBG] pgmap v628: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: cluster 2026-03-09T16:03:47.061700+0000 mon.a (mon.0) 2839 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: cluster 2026-03-09T16:03:47.061700+0000 mon.a (mon.0) 2839 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: audit 2026-03-09T16:03:47.064322+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: audit 2026-03-09T16:03:47.064322+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: cluster 2026-03-09T16:03:47.069875+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: cluster 2026-03-09T16:03:47.069875+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: audit 2026-03-09T16:03:47.073938+0000 mon.c (mon.2) 418 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: audit 2026-03-09T16:03:47.073938+0000 mon.c (mon.2) 418 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: cluster 2026-03-09T16:03:48.075118+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:48 vm01 bash[28152]: cluster 2026-03-09T16:03:48.075118+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: audit 2026-03-09T16:03:46.582010+0000 mgr.y (mgr.14520) 393 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: audit 2026-03-09T16:03:46.582010+0000 mgr.y (mgr.14520) 393 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: cluster 2026-03-09T16:03:46.780797+0000 mgr.y (mgr.14520) 394 : cluster [DBG] pgmap v628: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: cluster 2026-03-09T16:03:46.780797+0000 mgr.y (mgr.14520) 394 : cluster [DBG] pgmap v628: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: cluster 2026-03-09T16:03:47.061700+0000 mon.a (mon.0) 2839 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:48.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: cluster 2026-03-09T16:03:47.061700+0000 mon.a (mon.0) 2839 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: audit 2026-03-09T16:03:47.064322+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: audit 2026-03-09T16:03:47.064322+0000 mon.a (mon.0) 2840 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: cluster 2026-03-09T16:03:47.069875+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T16:03:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: cluster 2026-03-09T16:03:47.069875+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in 2026-03-09T16:03:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: audit 2026-03-09T16:03:47.073938+0000 mon.c (mon.2) 418 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: audit 2026-03-09T16:03:47.073938+0000 mon.c (mon.2) 418 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: cluster 2026-03-09T16:03:48.075118+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T16:03:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:48 vm01 bash[20728]: cluster 2026-03-09T16:03:48.075118+0000 mon.a (mon.0) 2842 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in 2026-03-09T16:03:49.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:49 vm09 bash[22983]: audit 2026-03-09T16:03:48.124831+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:49 vm09 bash[22983]: audit 2026-03-09T16:03:48.124831+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:49 vm09 bash[22983]: audit 2026-03-09T16:03:48.125301+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:49 vm09 bash[22983]: audit 2026-03-09T16:03:48.125301+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:49 vm09 bash[22983]: audit 2026-03-09T16:03:48.125714+0000 mon.c (mon.2) 420 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:49 vm09 bash[22983]: audit 2026-03-09T16:03:48.125714+0000 mon.c (mon.2) 420 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:49 vm09 bash[22983]: audit 2026-03-09T16:03:48.125982+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:49 vm09 bash[22983]: audit 2026-03-09T16:03:48.125982+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:49 vm09 bash[22983]: cluster 2026-03-09T16:03:48.781462+0000 mgr.y (mgr.14520) 395 : cluster [DBG] pgmap v631: 292 pgs: 21 unknown, 271 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:49 vm09 bash[22983]: cluster 2026-03-09T16:03:48.781462+0000 mgr.y (mgr.14520) 395 : cluster [DBG] pgmap v631: 292 pgs: 21 unknown, 271 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:49 vm01 bash[28152]: audit 2026-03-09T16:03:48.124831+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:49 vm01 bash[28152]: audit 2026-03-09T16:03:48.124831+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:49 vm01 bash[28152]: audit 2026-03-09T16:03:48.125301+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:49 vm01 bash[28152]: audit 2026-03-09T16:03:48.125301+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:49 vm01 bash[28152]: audit 2026-03-09T16:03:48.125714+0000 mon.c (mon.2) 420 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:49 vm01 bash[28152]: audit 2026-03-09T16:03:48.125714+0000 mon.c (mon.2) 420 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:49 vm01 bash[28152]: audit 2026-03-09T16:03:48.125982+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:49 vm01 bash[28152]: audit 2026-03-09T16:03:48.125982+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:49 vm01 bash[28152]: cluster 2026-03-09T16:03:48.781462+0000 mgr.y (mgr.14520) 395 : cluster [DBG] pgmap v631: 292 pgs: 21 unknown, 271 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:49 vm01 bash[28152]: cluster 2026-03-09T16:03:48.781462+0000 mgr.y (mgr.14520) 395 : cluster [DBG] pgmap v631: 292 pgs: 21 unknown, 271 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:49 vm01 bash[20728]: audit 2026-03-09T16:03:48.124831+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:49 vm01 bash[20728]: audit 2026-03-09T16:03:48.124831+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:49 vm01 bash[20728]: audit 2026-03-09T16:03:48.125301+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:49 vm01 bash[20728]: audit 2026-03-09T16:03:48.125301+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:03:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:49 vm01 bash[20728]: audit 2026-03-09T16:03:48.125714+0000 mon.c (mon.2) 420 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:49 vm01 bash[20728]: audit 2026-03-09T16:03:48.125714+0000 mon.c (mon.2) 420 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:49 vm01 bash[20728]: audit 2026-03-09T16:03:48.125982+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:49 vm01 bash[20728]: audit 2026-03-09T16:03:48.125982+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-77"}]: dispatch 2026-03-09T16:03:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:49 vm01 bash[20728]: cluster 2026-03-09T16:03:48.781462+0000 mgr.y (mgr.14520) 395 : cluster [DBG] pgmap v631: 292 pgs: 21 unknown, 271 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:49 vm01 bash[20728]: cluster 2026-03-09T16:03:48.781462+0000 mgr.y (mgr.14520) 395 : cluster [DBG] pgmap v631: 292 pgs: 21 unknown, 271 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:03:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:50 vm09 bash[22983]: cluster 2026-03-09T16:03:49.130641+0000 mon.a (mon.0) 2845 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in 2026-03-09T16:03:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:50 vm09 bash[22983]: cluster 2026-03-09T16:03:49.130641+0000 mon.a (mon.0) 2845 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in 2026-03-09T16:03:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:50 vm01 bash[28152]: cluster 2026-03-09T16:03:49.130641+0000 mon.a (mon.0) 2845 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in 2026-03-09T16:03:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:50 vm01 bash[28152]: cluster 2026-03-09T16:03:49.130641+0000 mon.a (mon.0) 2845 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in 2026-03-09T16:03:50.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:50 vm01 bash[20728]: cluster 2026-03-09T16:03:49.130641+0000 mon.a (mon.0) 2845 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in 2026-03-09T16:03:50.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:50 vm01 bash[20728]: cluster 2026-03-09T16:03:49.130641+0000 mon.a (mon.0) 2845 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in 2026-03-09T16:03:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:51 vm01 bash[28152]: cluster 2026-03-09T16:03:50.126951+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T16:03:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:51 vm01 bash[28152]: cluster 2026-03-09T16:03:50.126951+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T16:03:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:51 vm01 bash[28152]: audit 2026-03-09T16:03:50.143083+0000 mon.c 
(mon.2) 421 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:51 vm01 bash[28152]: audit 2026-03-09T16:03:50.143083+0000 mon.c (mon.2) 421 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:51 vm01 bash[28152]: audit 2026-03-09T16:03:50.145374+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:51 vm01 bash[28152]: audit 2026-03-09T16:03:50.145374+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:51 vm01 bash[28152]: cluster 2026-03-09T16:03:50.781899+0000 mgr.y (mgr.14520) 396 : cluster [DBG] pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:03:51.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:51 vm01 bash[28152]: cluster 2026-03-09T16:03:50.781899+0000 mgr.y (mgr.14520) 396 : cluster [DBG] pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:03:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:51 vm01 bash[20728]: cluster 2026-03-09T16:03:50.126951+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T16:03:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:51 vm01 bash[20728]: cluster 2026-03-09T16:03:50.126951+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T16:03:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:51 vm01 bash[20728]: audit 2026-03-09T16:03:50.143083+0000 mon.c (mon.2) 421 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:51 vm01 bash[20728]: audit 2026-03-09T16:03:50.143083+0000 mon.c (mon.2) 421 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:51 vm01 bash[20728]: audit 2026-03-09T16:03:50.145374+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:51 vm01 bash[20728]: audit 2026-03-09T16:03:50.145374+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:51 vm01 bash[20728]: cluster 2026-03-09T16:03:50.781899+0000 mgr.y (mgr.14520) 396 : cluster [DBG] pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:03:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:51 vm01 bash[20728]: cluster 2026-03-09T16:03:50.781899+0000 mgr.y (mgr.14520) 396 : cluster [DBG] pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:03:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:51 vm09 bash[22983]: cluster 2026-03-09T16:03:50.126951+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T16:03:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:51 vm09 bash[22983]: cluster 2026-03-09T16:03:50.126951+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in 2026-03-09T16:03:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:51 vm09 bash[22983]: audit 2026-03-09T16:03:50.143083+0000 mon.c (mon.2) 421 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:51 vm09 bash[22983]: audit 2026-03-09T16:03:50.143083+0000 mon.c (mon.2) 421 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:51 vm09 bash[22983]: audit 2026-03-09T16:03:50.145374+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:51 vm09 bash[22983]: audit 2026-03-09T16:03:50.145374+0000 mon.a (mon.0) 2847 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:03:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:51 vm09 bash[22983]: cluster 2026-03-09T16:03:50.781899+0000 mgr.y (mgr.14520) 396 : cluster [DBG] pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:03:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:51 vm09 bash[22983]: cluster 2026-03-09T16:03:50.781899+0000 mgr.y (mgr.14520) 396 : cluster [DBG] pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:03:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:52 vm01 bash[28152]: audit 2026-03-09T16:03:51.127607+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:52 vm01 bash[28152]: audit 2026-03-09T16:03:51.127607+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:52 vm01 bash[28152]: audit 2026-03-09T16:03:51.139449+0000 mon.c (mon.2) 422 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:52 vm01 bash[28152]: audit 2026-03-09T16:03:51.139449+0000 mon.c (mon.2) 422 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:52 vm01 bash[28152]: cluster 2026-03-09T16:03:51.143944+0000 mon.a (mon.0) 2849 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T16:03:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:52 vm01 bash[28152]: cluster 2026-03-09T16:03:51.143944+0000 mon.a (mon.0) 2849 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T16:03:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:52 vm01 bash[28152]: cluster 2026-03-09T16:03:52.133694+0000 mon.a (mon.0) 2850 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T16:03:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:52 vm01 bash[28152]: cluster 2026-03-09T16:03:52.133694+0000 mon.a (mon.0) 2850 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T16:03:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:52 vm01 bash[20728]: audit 2026-03-09T16:03:51.127607+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:52 vm01 bash[20728]: audit 2026-03-09T16:03:51.127607+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:52 vm01 bash[20728]: audit 2026-03-09T16:03:51.139449+0000 mon.c (mon.2) 422 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:52 vm01 bash[20728]: audit 2026-03-09T16:03:51.139449+0000 mon.c (mon.2) 422 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:52 vm01 bash[20728]: cluster 2026-03-09T16:03:51.143944+0000 mon.a (mon.0) 2849 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T16:03:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:52 vm01 bash[20728]: cluster 2026-03-09T16:03:51.143944+0000 mon.a (mon.0) 2849 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T16:03:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:52 vm01 bash[20728]: cluster 2026-03-09T16:03:52.133694+0000 mon.a (mon.0) 2850 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T16:03:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:52 vm01 bash[20728]: cluster 2026-03-09T16:03:52.133694+0000 mon.a (mon.0) 2850 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T16:03:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:52 vm09 bash[22983]: audit 2026-03-09T16:03:51.127607+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:52 vm09 bash[22983]: audit 2026-03-09T16:03:51.127607+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:03:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:52 vm09 bash[22983]: audit 2026-03-09T16:03:51.139449+0000 mon.c (mon.2) 422 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:52 vm09 bash[22983]: audit 2026-03-09T16:03:51.139449+0000 mon.c (mon.2) 422 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:03:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:52 vm09 bash[22983]: cluster 2026-03-09T16:03:51.143944+0000 mon.a (mon.0) 2849 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T16:03:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:52 vm09 bash[22983]: cluster 2026-03-09T16:03:51.143944+0000 mon.a (mon.0) 2849 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in 2026-03-09T16:03:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:52 vm09 bash[22983]: cluster 2026-03-09T16:03:52.133694+0000 mon.a (mon.0) 2850 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T16:03:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:52 vm09 bash[22983]: cluster 2026-03-09T16:03:52.133694+0000 mon.a (mon.0) 2850 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-09T16:03:53.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:03:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:03:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:53 vm01 bash[28152]: cluster 2026-03-09T16:03:52.782293+0000 mgr.y (mgr.14520) 397 : cluster [DBG] pgmap v637: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:53 vm01 bash[28152]: cluster 2026-03-09T16:03:52.782293+0000 mgr.y (mgr.14520) 397 : cluster [DBG] pgmap v637: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:53 vm01 bash[28152]: cluster 2026-03-09T16:03:53.139193+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:53 vm01 bash[28152]: cluster 2026-03-09T16:03:53.139193+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:53 vm01 bash[28152]: audit 2026-03-09T16:03:53.160666+0000 mon.c (mon.2) 423 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:53 vm01 bash[28152]: audit 2026-03-09T16:03:53.160666+0000 mon.c (mon.2) 423 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:53 vm01 bash[20728]: cluster 2026-03-09T16:03:52.782293+0000 mgr.y (mgr.14520) 397 : cluster [DBG] pgmap v637: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:53 vm01 bash[20728]: cluster 2026-03-09T16:03:52.782293+0000 mgr.y (mgr.14520) 397 : cluster [DBG] pgmap v637: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:53 vm01 bash[20728]: cluster 2026-03-09T16:03:53.139193+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:53 vm01 bash[20728]: cluster 2026-03-09T16:03:53.139193+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:53 vm01 bash[20728]: audit 2026-03-09T16:03:53.160666+0000 mon.c (mon.2) 423 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:53.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:53 vm01 bash[20728]: audit 2026-03-09T16:03:53.160666+0000 mon.c (mon.2) 423 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:53 vm09 bash[22983]: cluster 2026-03-09T16:03:52.782293+0000 mgr.y (mgr.14520) 397 : cluster [DBG] pgmap v637: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T16:03:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:53 vm09 bash[22983]: cluster 2026-03-09T16:03:52.782293+0000 mgr.y (mgr.14520) 397 : cluster [DBG] pgmap v637: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T16:03:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:53 vm09 bash[22983]: cluster 2026-03-09T16:03:53.139193+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T16:03:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:53 vm09 bash[22983]: cluster 2026-03-09T16:03:53.139193+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-09T16:03:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:53 vm09 bash[22983]: audit 2026-03-09T16:03:53.160666+0000 mon.c (mon.2) 423 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:53 vm09 bash[22983]: audit 2026-03-09T16:03:53.160666+0000 mon.c (mon.2) 423 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:54 vm01 bash[28152]: audit 2026-03-09T16:03:53.160886+0000 mgr.y (mgr.14520) 398 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:54 vm01 bash[28152]: audit 2026-03-09T16:03:53.160886+0000 mgr.y (mgr.14520) 398 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:54 vm01 bash[28152]: cluster 2026-03-09T16:03:53.167580+0000 mon.a (mon.0) 2852 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:54 vm01 bash[28152]: cluster 2026-03-09T16:03:53.167580+0000 mon.a (mon.0) 2852 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:54 vm01 bash[20728]: audit 2026-03-09T16:03:53.160886+0000 mgr.y (mgr.14520) 398 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:54 vm01 bash[20728]: audit 2026-03-09T16:03:53.160886+0000 mgr.y (mgr.14520) 398 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:54 vm01 bash[20728]: cluster 2026-03-09T16:03:53.167580+0000 mon.a (mon.0) 2852 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:54 vm01 bash[20728]: cluster 2026-03-09T16:03:53.167580+0000 mon.a (mon.0) 2852 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:54 vm09 bash[22983]: audit 2026-03-09T16:03:53.160886+0000 mgr.y (mgr.14520) 398 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:54 vm09 bash[22983]: audit 2026-03-09T16:03:53.160886+0000 mgr.y (mgr.14520) 398 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1b"}]: dispatch 2026-03-09T16:03:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:54 vm09 bash[22983]: cluster 2026-03-09T16:03:53.167580+0000 mon.a (mon.0) 2852 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:54 vm09 bash[22983]: cluster 2026-03-09T16:03:53.167580+0000 mon.a (mon.0) 2852 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:03:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:55 vm09 bash[22983]: cluster 2026-03-09T16:03:53.983860+0000 osd.1 (osd.1) 11 : cluster [DBG] 297.1b deep-scrub starts 2026-03-09T16:03:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:55 vm09 bash[22983]: cluster 2026-03-09T16:03:53.983860+0000 osd.1 (osd.1) 11 : cluster [DBG] 297.1b deep-scrub starts 2026-03-09T16:03:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:55 vm09 bash[22983]: cluster 2026-03-09T16:03:53.985312+0000 osd.1 (osd.1) 12 : cluster [DBG] 297.1b deep-scrub ok 2026-03-09T16:03:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:55 vm09 bash[22983]: cluster 2026-03-09T16:03:53.985312+0000 osd.1 (osd.1) 12 : cluster [DBG] 297.1b deep-scrub ok 2026-03-09T16:03:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:55 vm09 bash[22983]: cluster 2026-03-09T16:03:54.783097+0000 mgr.y (mgr.14520) 399 : cluster [DBG] pgmap v639: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T16:03:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:55 vm09 bash[22983]: cluster 2026-03-09T16:03:54.783097+0000 mgr.y (mgr.14520) 399 : cluster [DBG] pgmap v639: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T16:03:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:55 vm01 bash[28152]: cluster 2026-03-09T16:03:53.983860+0000 osd.1 (osd.1) 11 : cluster [DBG] 297.1b deep-scrub starts 2026-03-09T16:03:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:55 vm01 bash[28152]: cluster 2026-03-09T16:03:53.983860+0000 osd.1 (osd.1) 11 : cluster [DBG] 297.1b deep-scrub starts 2026-03-09T16:03:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:55 vm01 bash[28152]: cluster 2026-03-09T16:03:53.985312+0000 osd.1 (osd.1) 12 : cluster [DBG] 297.1b deep-scrub ok 2026-03-09T16:03:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:55 vm01 bash[28152]: cluster 2026-03-09T16:03:53.985312+0000 osd.1 (osd.1) 12 : cluster [DBG] 297.1b deep-scrub ok 2026-03-09T16:03:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:55 vm01 bash[28152]: cluster 2026-03-09T16:03:54.783097+0000 mgr.y (mgr.14520) 399 : cluster [DBG] pgmap v639: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T16:03:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:55 vm01 bash[28152]: cluster 2026-03-09T16:03:54.783097+0000 mgr.y (mgr.14520) 399 : cluster [DBG] pgmap v639: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T16:03:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:55 vm01 bash[20728]: cluster 2026-03-09T16:03:53.983860+0000 osd.1 (osd.1) 11 : cluster [DBG] 297.1b deep-scrub starts 2026-03-09T16:03:55.676 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:55 vm01 bash[20728]: cluster 2026-03-09T16:03:53.983860+0000 osd.1 (osd.1) 11 : cluster [DBG] 297.1b deep-scrub starts 2026-03-09T16:03:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:55 vm01 bash[20728]: cluster 2026-03-09T16:03:53.985312+0000 osd.1 (osd.1) 12 : cluster [DBG] 297.1b deep-scrub ok 2026-03-09T16:03:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:55 vm01 bash[20728]: cluster 2026-03-09T16:03:53.985312+0000 osd.1 (osd.1) 12 : cluster [DBG] 297.1b deep-scrub ok 2026-03-09T16:03:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:55 vm01 bash[20728]: cluster 2026-03-09T16:03:54.783097+0000 mgr.y (mgr.14520) 399 : cluster [DBG] pgmap v639: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T16:03:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:55 vm01 bash[20728]: cluster 2026-03-09T16:03:54.783097+0000 mgr.y (mgr.14520) 399 : cluster [DBG] pgmap v639: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T16:03:56.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:03:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:03:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:57 vm09 bash[22983]: audit 2026-03-09T16:03:56.583175+0000 mgr.y (mgr.14520) 400 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:57 vm09 bash[22983]: audit 2026-03-09T16:03:56.583175+0000 mgr.y (mgr.14520) 400 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:57 vm09 bash[22983]: cluster 2026-03-09T16:03:56.783435+0000 mgr.y (mgr.14520) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-09T16:03:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:03:57 vm09 bash[22983]: cluster 2026-03-09T16:03:56.783435+0000 mgr.y (mgr.14520) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-09T16:03:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:57 vm01 bash[28152]: audit 2026-03-09T16:03:56.583175+0000 mgr.y (mgr.14520) 400 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:57 vm01 bash[28152]: audit 2026-03-09T16:03:56.583175+0000 mgr.y (mgr.14520) 400 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:57 vm01 bash[28152]: cluster 2026-03-09T16:03:56.783435+0000 mgr.y (mgr.14520) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-09T16:03:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:03:57 vm01 bash[28152]: cluster 2026-03-09T16:03:56.783435+0000 mgr.y (mgr.14520) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 
active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-09T16:03:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:57 vm01 bash[20728]: audit 2026-03-09T16:03:56.583175+0000 mgr.y (mgr.14520) 400 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:57 vm01 bash[20728]: audit 2026-03-09T16:03:56.583175+0000 mgr.y (mgr.14520) 400 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:03:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:57 vm01 bash[20728]: cluster 2026-03-09T16:03:56.783435+0000 mgr.y (mgr.14520) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-09T16:03:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:03:57 vm01 bash[20728]: cluster 2026-03-09T16:03:56.783435+0000 mgr.y (mgr.14520) 401 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-09T16:04:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:00 vm09 bash[22983]: cluster 2026-03-09T16:03:58.783831+0000 mgr.y (mgr.14520) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 670 B/s rd, 670 B/s wr, 1 op/s 2026-03-09T16:04:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:00 vm09 bash[22983]: cluster 2026-03-09T16:03:58.783831+0000 mgr.y (mgr.14520) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 670 B/s rd, 670 B/s wr, 1 op/s 2026-03-09T16:04:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:00 vm09 bash[22983]: audit 2026-03-09T16:03:59.295279+0000 mon.a (mon.0) 2853 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:00.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:00 vm09 bash[22983]: audit 2026-03-09T16:03:59.295279+0000 mon.a (mon.0) 2853 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:00 vm01 bash[28152]: cluster 2026-03-09T16:03:58.783831+0000 mgr.y (mgr.14520) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 670 B/s rd, 670 B/s wr, 1 op/s 2026-03-09T16:04:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:00 vm01 bash[28152]: cluster 2026-03-09T16:03:58.783831+0000 mgr.y (mgr.14520) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 670 B/s rd, 670 B/s wr, 1 op/s 2026-03-09T16:04:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:00 vm01 bash[28152]: audit 2026-03-09T16:03:59.295279+0000 mon.a (mon.0) 2853 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:00 vm01 bash[28152]: audit 2026-03-09T16:03:59.295279+0000 mon.a (mon.0) 2853 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:00.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:00 vm01 bash[20728]: cluster 2026-03-09T16:03:58.783831+0000 mgr.y (mgr.14520) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 670 B/s rd, 670 B/s wr, 1 op/s 2026-03-09T16:04:00.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:00 vm01 bash[20728]: cluster 2026-03-09T16:03:58.783831+0000 mgr.y (mgr.14520) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 670 B/s rd, 670 B/s wr, 1 op/s 2026-03-09T16:04:00.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:00 vm01 bash[20728]: audit 2026-03-09T16:03:59.295279+0000 mon.a (mon.0) 2853 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:00.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:00 vm01 bash[20728]: audit 2026-03-09T16:03:59.295279+0000 mon.a (mon.0) 2853 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:02 vm09 bash[22983]: cluster 2026-03-09T16:04:00.784481+0000 mgr.y (mgr.14520) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 591 B/s wr, 2 op/s 2026-03-09T16:04:02.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:02 vm09 bash[22983]: cluster 2026-03-09T16:04:00.784481+0000 mgr.y (mgr.14520) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 591 B/s wr, 2 op/s 2026-03-09T16:04:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:02 vm01 bash[28152]: cluster 2026-03-09T16:04:00.784481+0000 mgr.y (mgr.14520) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 591 B/s wr, 2 op/s 2026-03-09T16:04:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:02 vm01 bash[28152]: cluster 2026-03-09T16:04:00.784481+0000 mgr.y (mgr.14520) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 591 B/s wr, 2 op/s 2026-03-09T16:04:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:02 vm01 bash[20728]: cluster 2026-03-09T16:04:00.784481+0000 mgr.y (mgr.14520) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 591 B/s wr, 2 op/s 2026-03-09T16:04:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:02 vm01 bash[20728]: cluster 2026-03-09T16:04:00.784481+0000 mgr.y (mgr.14520) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 591 B/s wr, 2 op/s 2026-03-09T16:04:03.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:04:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:04:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:04:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:04 vm09 bash[22983]: cluster 2026-03-09T16:04:02.784792+0000 mgr.y (mgr.14520) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s 
wr, 1 op/s 2026-03-09T16:04:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:04 vm09 bash[22983]: cluster 2026-03-09T16:04:02.784792+0000 mgr.y (mgr.14520) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:04:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:04 vm01 bash[28152]: cluster 2026-03-09T16:04:02.784792+0000 mgr.y (mgr.14520) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:04:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:04 vm01 bash[28152]: cluster 2026-03-09T16:04:02.784792+0000 mgr.y (mgr.14520) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:04:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:04 vm01 bash[20728]: cluster 2026-03-09T16:04:02.784792+0000 mgr.y (mgr.14520) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:04:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:04 vm01 bash[20728]: cluster 2026-03-09T16:04:02.784792+0000 mgr.y (mgr.14520) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:04:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:06 vm09 bash[22983]: cluster 2026-03-09T16:04:04.785615+0000 mgr.y (mgr.14520) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 439 B/s wr, 2 op/s 2026-03-09T16:04:06.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:06 vm09 bash[22983]: cluster 2026-03-09T16:04:04.785615+0000 mgr.y (mgr.14520) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 439 B/s wr, 2 op/s 2026-03-09T16:04:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:06 vm01 bash[28152]: cluster 2026-03-09T16:04:04.785615+0000 mgr.y (mgr.14520) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 439 B/s wr, 2 op/s 2026-03-09T16:04:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:06 vm01 bash[28152]: cluster 2026-03-09T16:04:04.785615+0000 mgr.y (mgr.14520) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 439 B/s wr, 2 op/s 2026-03-09T16:04:06.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:06 vm01 bash[20728]: cluster 2026-03-09T16:04:04.785615+0000 mgr.y (mgr.14520) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 439 B/s wr, 2 op/s 2026-03-09T16:04:06.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:06 vm01 bash[20728]: cluster 2026-03-09T16:04:04.785615+0000 mgr.y (mgr.14520) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 439 B/s wr, 2 op/s 2026-03-09T16:04:06.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:04:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:04:08.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:08 vm09 bash[22983]: 
audit 2026-03-09T16:04:06.593934+0000 mgr.y (mgr.14520) 406 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:08.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:08 vm09 bash[22983]: audit 2026-03-09T16:04:06.593934+0000 mgr.y (mgr.14520) 406 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:08.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:08 vm09 bash[22983]: cluster 2026-03-09T16:04:06.786006+0000 mgr.y (mgr.14520) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:08.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:08 vm09 bash[22983]: cluster 2026-03-09T16:04:06.786006+0000 mgr.y (mgr.14520) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:08 vm01 bash[28152]: audit 2026-03-09T16:04:06.593934+0000 mgr.y (mgr.14520) 406 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:08 vm01 bash[28152]: audit 2026-03-09T16:04:06.593934+0000 mgr.y (mgr.14520) 406 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:08 vm01 bash[28152]: cluster 2026-03-09T16:04:06.786006+0000 mgr.y (mgr.14520) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:08 vm01 bash[28152]: cluster 2026-03-09T16:04:06.786006+0000 mgr.y (mgr.14520) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:08 vm01 bash[20728]: audit 2026-03-09T16:04:06.593934+0000 mgr.y (mgr.14520) 406 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:08 vm01 bash[20728]: audit 2026-03-09T16:04:06.593934+0000 mgr.y (mgr.14520) 406 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:08 vm01 bash[20728]: cluster 2026-03-09T16:04:06.786006+0000 mgr.y (mgr.14520) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:08 vm01 bash[20728]: cluster 2026-03-09T16:04:06.786006+0000 mgr.y (mgr.14520) 407 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:10 vm09 bash[22983]: cluster 2026-03-09T16:04:08.786594+0000 mgr.y (mgr.14520) 408 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 
MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:10.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:10 vm09 bash[22983]: cluster 2026-03-09T16:04:08.786594+0000 mgr.y (mgr.14520) 408 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:10.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:10 vm01 bash[28152]: cluster 2026-03-09T16:04:08.786594+0000 mgr.y (mgr.14520) 408 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:10.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:10 vm01 bash[28152]: cluster 2026-03-09T16:04:08.786594+0000 mgr.y (mgr.14520) 408 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:10.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:10 vm01 bash[20728]: cluster 2026-03-09T16:04:08.786594+0000 mgr.y (mgr.14520) 408 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:10.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:10 vm01 bash[20728]: cluster 2026-03-09T16:04:08.786594+0000 mgr.y (mgr.14520) 408 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:12 vm09 bash[22983]: cluster 2026-03-09T16:04:10.787339+0000 mgr.y (mgr.14520) 409 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:12 vm09 bash[22983]: cluster 2026-03-09T16:04:10.787339+0000 mgr.y (mgr.14520) 409 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:12 vm01 bash[28152]: cluster 2026-03-09T16:04:10.787339+0000 mgr.y (mgr.14520) 409 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:12 vm01 bash[28152]: cluster 2026-03-09T16:04:10.787339+0000 mgr.y (mgr.14520) 409 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:12 vm01 bash[20728]: cluster 2026-03-09T16:04:10.787339+0000 mgr.y (mgr.14520) 409 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:12 vm01 bash[20728]: cluster 2026-03-09T16:04:10.787339+0000 mgr.y (mgr.14520) 409 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:13.164 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:04:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:04:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:04:13.426 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 16:04:13 vm01 bash[36842]: debug 
2026-03-09T16:04:13.162+0000 7f43c941b640 -1 snap_mapper.add_oid found existing snaps mapped on 297:d93e8d4f:test-rados-api-vm01-59821-80::foo:2, removing 2026-03-09T16:04:13.426 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 16:04:13 vm01 bash[42882]: debug 2026-03-09T16:04:13.162+0000 7fc7e0585640 -1 snap_mapper.add_oid found existing snaps mapped on 297:d93e8d4f:test-rados-api-vm01-59821-80::foo:2, removing 2026-03-09T16:04:13.632 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:04:13 vm09 bash[37995]: debug 2026-03-09T16:04:13.160+0000 7f58ecf59640 -1 snap_mapper.add_oid found existing snaps mapped on 297:d93e8d4f:test-rados-api-vm01-59821-80::foo:2, removing 2026-03-09T16:04:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:14 vm09 bash[22983]: cluster 2026-03-09T16:04:12.787626+0000 mgr.y (mgr.14520) 410 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:14 vm09 bash[22983]: cluster 2026-03-09T16:04:12.787626+0000 mgr.y (mgr.14520) 410 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:14 vm09 bash[22983]: audit 2026-03-09T16:04:13.170026+0000 mon.c (mon.2) 424 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:14 vm09 bash[22983]: audit 2026-03-09T16:04:13.170026+0000 mon.c (mon.2) 424 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:14 vm09 bash[22983]: audit 2026-03-09T16:04:13.170269+0000 mon.a (mon.0) 2854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:14 vm09 bash[22983]: audit 2026-03-09T16:04:13.170269+0000 mon.a (mon.0) 2854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:14 vm09 bash[22983]: audit 2026-03-09T16:04:13.170890+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:14 vm09 bash[22983]: audit 2026-03-09T16:04:13.170890+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:14 vm09 bash[22983]: audit 2026-03-09T16:04:13.171103+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:14 vm09 bash[22983]: audit 2026-03-09T16:04:13.171103+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:14 vm01 bash[28152]: cluster 2026-03-09T16:04:12.787626+0000 mgr.y (mgr.14520) 410 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:14 vm01 bash[28152]: cluster 2026-03-09T16:04:12.787626+0000 mgr.y (mgr.14520) 410 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:14 vm01 bash[28152]: audit 2026-03-09T16:04:13.170026+0000 mon.c (mon.2) 424 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:14 vm01 bash[28152]: audit 2026-03-09T16:04:13.170026+0000 mon.c (mon.2) 424 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:14 vm01 bash[28152]: audit 2026-03-09T16:04:13.170269+0000 mon.a (mon.0) 2854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:14 vm01 bash[28152]: audit 2026-03-09T16:04:13.170269+0000 mon.a (mon.0) 2854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:14 vm01 bash[28152]: audit 2026-03-09T16:04:13.170890+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:14 vm01 bash[28152]: audit 2026-03-09T16:04:13.170890+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:14 vm01 bash[28152]: audit 2026-03-09T16:04:13.171103+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:14 vm01 bash[28152]: audit 2026-03-09T16:04:13.171103+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:14 vm01 bash[20728]: cluster 2026-03-09T16:04:12.787626+0000 mgr.y (mgr.14520) 410 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:14 vm01 bash[20728]: cluster 2026-03-09T16:04:12.787626+0000 mgr.y (mgr.14520) 410 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:14 vm01 bash[20728]: audit 2026-03-09T16:04:13.170026+0000 mon.c (mon.2) 424 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:14 vm01 bash[20728]: audit 2026-03-09T16:04:13.170026+0000 mon.c (mon.2) 424 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:14 vm01 bash[20728]: audit 2026-03-09T16:04:13.170269+0000 mon.a (mon.0) 2854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:14 vm01 bash[20728]: audit 2026-03-09T16:04:13.170269+0000 mon.a (mon.0) 2854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:14 vm01 bash[20728]: audit 2026-03-09T16:04:13.170890+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:14 vm01 bash[20728]: audit 2026-03-09T16:04:13.170890+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:14 vm01 bash[20728]: audit 2026-03-09T16:04:13.171103+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:14 vm01 bash[20728]: audit 2026-03-09T16:04:13.171103+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-79"}]: dispatch 2026-03-09T16:04:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:15 vm09 bash[22983]: cluster 2026-03-09T16:04:14.087818+0000 mon.a (mon.0) 2856 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T16:04:15.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:15 vm09 bash[22983]: cluster 2026-03-09T16:04:14.087818+0000 mon.a (mon.0) 2856 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T16:04:15.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:15 vm09 bash[22983]: audit 2026-03-09T16:04:14.312718+0000 mon.a (mon.0) 2857 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:15.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:15 vm09 bash[22983]: audit 2026-03-09T16:04:14.312718+0000 mon.a (mon.0) 2857 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:15 vm01 bash[20728]: cluster 2026-03-09T16:04:14.087818+0000 mon.a (mon.0) 2856 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T16:04:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:15 vm01 bash[20728]: cluster 2026-03-09T16:04:14.087818+0000 mon.a (mon.0) 2856 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T16:04:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:15 vm01 bash[20728]: audit 2026-03-09T16:04:14.312718+0000 mon.a (mon.0) 2857 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:15 vm01 bash[20728]: audit 2026-03-09T16:04:14.312718+0000 mon.a (mon.0) 2857 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:15 vm01 bash[28152]: cluster 2026-03-09T16:04:14.087818+0000 mon.a (mon.0) 2856 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T16:04:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:15 vm01 bash[28152]: cluster 2026-03-09T16:04:14.087818+0000 mon.a (mon.0) 2856 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-09T16:04:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:15 vm01 bash[28152]: audit 2026-03-09T16:04:14.312718+0000 mon.a (mon.0) 2857 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:15 vm01 bash[28152]: audit 2026-03-09T16:04:14.312718+0000 mon.a (mon.0) 2857 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
16:04:16 vm09 bash[22983]: cluster 2026-03-09T16:04:14.788149+0000 mgr.y (mgr.14520) 411 : cluster [DBG] pgmap v650: 260 pgs: 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:16 vm09 bash[22983]: cluster 2026-03-09T16:04:14.788149+0000 mgr.y (mgr.14520) 411 : cluster [DBG] pgmap v650: 260 pgs: 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:16 vm09 bash[22983]: cluster 2026-03-09T16:04:15.103846+0000 mon.a (mon.0) 2858 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T16:04:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:16 vm09 bash[22983]: cluster 2026-03-09T16:04:15.103846+0000 mon.a (mon.0) 2858 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T16:04:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:16 vm09 bash[22983]: audit 2026-03-09T16:04:15.121539+0000 mon.c (mon.2) 426 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:16 vm09 bash[22983]: audit 2026-03-09T16:04:15.121539+0000 mon.c (mon.2) 426 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:16 vm09 bash[22983]: audit 2026-03-09T16:04:15.121809+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:16 vm09 bash[22983]: audit 2026-03-09T16:04:15.121809+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:16 vm01 bash[20728]: cluster 2026-03-09T16:04:14.788149+0000 mgr.y (mgr.14520) 411 : cluster [DBG] pgmap v650: 260 pgs: 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:16 vm01 bash[20728]: cluster 2026-03-09T16:04:14.788149+0000 mgr.y (mgr.14520) 411 : cluster [DBG] pgmap v650: 260 pgs: 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:16 vm01 bash[20728]: cluster 2026-03-09T16:04:15.103846+0000 mon.a (mon.0) 2858 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:16 vm01 bash[20728]: cluster 2026-03-09T16:04:15.103846+0000 mon.a (mon.0) 2858 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:16 vm01 bash[20728]: audit 2026-03-09T16:04:15.121539+0000 mon.c (mon.2) 426 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:16 vm01 bash[20728]: audit 2026-03-09T16:04:15.121539+0000 mon.c (mon.2) 426 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:16 vm01 bash[20728]: audit 2026-03-09T16:04:15.121809+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:16 vm01 bash[20728]: audit 2026-03-09T16:04:15.121809+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:16 vm01 bash[28152]: cluster 2026-03-09T16:04:14.788149+0000 mgr.y (mgr.14520) 411 : cluster [DBG] pgmap v650: 260 pgs: 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:16 vm01 bash[28152]: cluster 2026-03-09T16:04:14.788149+0000 mgr.y (mgr.14520) 411 : cluster [DBG] pgmap v650: 260 pgs: 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:16 vm01 bash[28152]: cluster 2026-03-09T16:04:15.103846+0000 mon.a (mon.0) 2858 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:16 vm01 bash[28152]: cluster 2026-03-09T16:04:15.103846+0000 mon.a (mon.0) 2858 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:16 vm01 bash[28152]: audit 2026-03-09T16:04:15.121539+0000 mon.c (mon.2) 426 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:16 vm01 bash[28152]: audit 2026-03-09T16:04:15.121539+0000 mon.c (mon.2) 426 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:16 vm01 bash[28152]: audit 2026-03-09T16:04:15.121809+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:16 vm01 bash[28152]: audit 2026-03-09T16:04:15.121809+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:16.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:04:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:04:17.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: audit 2026-03-09T16:04:16.105040+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: audit 2026-03-09T16:04:16.105040+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: cluster 2026-03-09T16:04:16.108217+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: cluster 2026-03-09T16:04:16.108217+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: audit 2026-03-09T16:04:16.151389+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: audit 2026-03-09T16:04:16.151389+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: audit 2026-03-09T16:04:16.154739+0000 mon.c (mon.2) 428 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: audit 2026-03-09T16:04:16.154739+0000 mon.c (mon.2) 428 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: audit 2026-03-09T16:04:16.154973+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: audit 2026-03-09T16:04:16.154973+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: audit 2026-03-09T16:04:16.604625+0000 mgr.y (mgr.14520) 412 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: audit 2026-03-09T16:04:16.604625+0000 mgr.y (mgr.14520) 412 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: cluster 2026-03-09T16:04:16.788476+0000 mgr.y (mgr.14520) 413 : cluster [DBG] pgmap v653: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:17.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:17 vm09 bash[22983]: cluster 2026-03-09T16:04:16.788476+0000 mgr.y (mgr.14520) 413 : cluster [DBG] pgmap v653: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: audit 2026-03-09T16:04:16.105040+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: audit 2026-03-09T16:04:16.105040+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: cluster 2026-03-09T16:04:16.108217+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: cluster 2026-03-09T16:04:16.108217+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: audit 2026-03-09T16:04:16.151389+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: audit 2026-03-09T16:04:16.151389+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: audit 2026-03-09T16:04:16.154739+0000 mon.c (mon.2) 428 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: audit 2026-03-09T16:04:16.154739+0000 mon.c (mon.2) 428 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: audit 2026-03-09T16:04:16.154973+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: audit 2026-03-09T16:04:16.154973+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: audit 2026-03-09T16:04:16.604625+0000 mgr.y (mgr.14520) 412 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: audit 2026-03-09T16:04:16.604625+0000 mgr.y (mgr.14520) 412 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: cluster 2026-03-09T16:04:16.788476+0000 mgr.y (mgr.14520) 413 : cluster [DBG] pgmap v653: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:17 vm01 bash[28152]: cluster 2026-03-09T16:04:16.788476+0000 mgr.y (mgr.14520) 413 : cluster [DBG] pgmap v653: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: audit 2026-03-09T16:04:16.105040+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: audit 2026-03-09T16:04:16.105040+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: cluster 2026-03-09T16:04:16.108217+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: cluster 2026-03-09T16:04:16.108217+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: audit 2026-03-09T16:04:16.151389+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: audit 2026-03-09T16:04:16.151389+0000 mon.c (mon.2) 427 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: audit 2026-03-09T16:04:16.154739+0000 mon.c (mon.2) 428 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: audit 2026-03-09T16:04:16.154739+0000 mon.c (mon.2) 428 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: audit 2026-03-09T16:04:16.154973+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: audit 2026-03-09T16:04:16.154973+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: audit 2026-03-09T16:04:16.604625+0000 mgr.y (mgr.14520) 412 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: audit 2026-03-09T16:04:16.604625+0000 mgr.y (mgr.14520) 412 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: cluster 2026-03-09T16:04:16.788476+0000 mgr.y (mgr.14520) 413 : cluster [DBG] pgmap v653: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:17 vm01 bash[20728]: cluster 2026-03-09T16:04:16.788476+0000 mgr.y (mgr.14520) 413 : cluster [DBG] pgmap v653: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:04:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:18 vm09 bash[22983]: audit 2026-03-09T16:04:17.114251+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:18 vm09 bash[22983]: audit 2026-03-09T16:04:17.114251+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:18 vm09 bash[22983]: audit 2026-03-09T16:04:17.125898+0000 mon.c (mon.2) 429 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:18 vm09 bash[22983]: audit 2026-03-09T16:04:17.125898+0000 mon.c (mon.2) 429 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:18 vm09 bash[22983]: cluster 2026-03-09T16:04:17.128619+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T16:04:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:18 vm09 bash[22983]: cluster 2026-03-09T16:04:17.128619+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T16:04:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:18 vm09 bash[22983]: audit 2026-03-09T16:04:17.129789+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:18 vm09 bash[22983]: audit 2026-03-09T16:04:17.129789+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:18 vm01 bash[28152]: audit 2026-03-09T16:04:17.114251+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:18 vm01 bash[28152]: audit 2026-03-09T16:04:17.114251+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:18 vm01 bash[28152]: audit 2026-03-09T16:04:17.125898+0000 mon.c (mon.2) 429 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:18 vm01 bash[28152]: audit 2026-03-09T16:04:17.125898+0000 mon.c (mon.2) 429 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:18 vm01 bash[28152]: cluster 2026-03-09T16:04:17.128619+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:18 vm01 bash[28152]: cluster 2026-03-09T16:04:17.128619+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:18 vm01 bash[28152]: audit 2026-03-09T16:04:17.129789+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:18 vm01 bash[28152]: audit 2026-03-09T16:04:17.129789+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:18 vm01 bash[20728]: audit 2026-03-09T16:04:17.114251+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:18 vm01 bash[20728]: audit 2026-03-09T16:04:17.114251+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:18 vm01 bash[20728]: audit 2026-03-09T16:04:17.125898+0000 mon.c (mon.2) 429 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:18 vm01 bash[20728]: audit 2026-03-09T16:04:17.125898+0000 mon.c (mon.2) 429 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:18 vm01 bash[20728]: cluster 2026-03-09T16:04:17.128619+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:18 vm01 bash[20728]: cluster 2026-03-09T16:04:17.128619+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:18 vm01 bash[20728]: audit 2026-03-09T16:04:17.129789+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:18 vm01 bash[20728]: audit 2026-03-09T16:04:17.129789+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:19.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:18.117668+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:19.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:18.117668+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:19.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: cluster 2026-03-09T16:04:18.125285+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T16:04:19.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: cluster 2026-03-09T16:04:18.125285+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T16:04:19.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:18.126349+0000 mon.c (mon.2) 430 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:18.126349+0000 mon.c (mon.2) 430 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:18.127074+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:18.127074+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: cluster 2026-03-09T16:04:18.789004+0000 mgr.y (mgr.14520) 414 : cluster [DBG] pgmap v656: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: cluster 2026-03-09T16:04:18.789004+0000 mgr.y (mgr.14520) 414 : cluster [DBG] pgmap v656: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:19.121862+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:19.121862+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: cluster 2026-03-09T16:04:19.125606+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: cluster 2026-03-09T16:04:19.125606+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:19.132741+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:19.132741+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:19.134818+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:19 vm01 bash[28152]: audit 2026-03-09T16:04:19.134818+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:18.117668+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:18.117668+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: cluster 2026-03-09T16:04:18.125285+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: cluster 2026-03-09T16:04:18.125285+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:18.126349+0000 mon.c (mon.2) 430 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:18.126349+0000 mon.c (mon.2) 430 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:18.127074+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:18.127074+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: cluster 2026-03-09T16:04:18.789004+0000 mgr.y (mgr.14520) 414 : cluster [DBG] pgmap v656: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: cluster 2026-03-09T16:04:18.789004+0000 mgr.y (mgr.14520) 414 : cluster [DBG] pgmap v656: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:19.121862+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:19.121862+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: cluster 2026-03-09T16:04:19.125606+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: cluster 2026-03-09T16:04:19.125606+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:19.132741+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:19.132741+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:19.134818+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:19.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:19 vm01 bash[20728]: audit 2026-03-09T16:04:19.134818+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:18.117668+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:18.117668+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: cluster 2026-03-09T16:04:18.125285+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T16:04:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: cluster 2026-03-09T16:04:18.125285+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:18.126349+0000 mon.c (mon.2) 430 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:18.126349+0000 mon.c (mon.2) 430 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:18.127074+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:18.127074+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: cluster 2026-03-09T16:04:18.789004+0000 mgr.y (mgr.14520) 414 : cluster [DBG] pgmap v656: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: cluster 2026-03-09T16:04:18.789004+0000 mgr.y (mgr.14520) 414 : cluster [DBG] pgmap v656: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:19.121862+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:19.121862+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: cluster 2026-03-09T16:04:19.125606+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: cluster 2026-03-09T16:04:19.125606+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:19.132741+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:19.132741+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:19.134818+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:19 vm09 bash[22983]: audit 2026-03-09T16:04:19.134818+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:21.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:21 vm09 bash[22983]: audit 2026-03-09T16:04:20.125250+0000 mon.a (mon.0) 2872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:21 vm09 bash[22983]: audit 2026-03-09T16:04:20.125250+0000 mon.a (mon.0) 2872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:21 vm09 bash[22983]: cluster 2026-03-09T16:04:20.128732+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T16:04:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:21 vm09 bash[22983]: cluster 2026-03-09T16:04:20.128732+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T16:04:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:21 vm09 bash[22983]: audit 2026-03-09T16:04:20.160257+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:21 vm09 bash[22983]: audit 2026-03-09T16:04:20.160257+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:21 vm09 bash[22983]: audit 2026-03-09T16:04:20.160574+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:21 vm09 bash[22983]: audit 2026-03-09T16:04:20.160574+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:21 vm09 bash[22983]: cluster 2026-03-09T16:04:20.789310+0000 mgr.y (mgr.14520) 415 : cluster [DBG] pgmap v659: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 86 KiB/s rd, 0 B/s wr, 143 op/s 2026-03-09T16:04:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:21 vm09 bash[22983]: cluster 2026-03-09T16:04:20.789310+0000 mgr.y (mgr.14520) 415 : cluster [DBG] pgmap v659: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 86 KiB/s rd, 0 B/s wr, 143 op/s 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:21 vm01 bash[28152]: audit 2026-03-09T16:04:20.125250+0000 mon.a (mon.0) 2872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:21 vm01 bash[28152]: audit 2026-03-09T16:04:20.125250+0000 mon.a (mon.0) 2872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:21 vm01 bash[28152]: cluster 2026-03-09T16:04:20.128732+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:21 vm01 bash[28152]: cluster 2026-03-09T16:04:20.128732+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:21 vm01 bash[28152]: audit 2026-03-09T16:04:20.160257+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:21 vm01 bash[28152]: audit 2026-03-09T16:04:20.160257+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:21 vm01 bash[28152]: audit 2026-03-09T16:04:20.160574+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:21 vm01 bash[28152]: audit 2026-03-09T16:04:20.160574+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:21 vm01 bash[28152]: cluster 2026-03-09T16:04:20.789310+0000 mgr.y (mgr.14520) 415 : cluster [DBG] pgmap v659: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 86 KiB/s rd, 0 B/s wr, 143 op/s 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:21 vm01 bash[28152]: cluster 2026-03-09T16:04:20.789310+0000 mgr.y (mgr.14520) 415 : cluster [DBG] pgmap v659: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 86 KiB/s rd, 0 B/s wr, 143 op/s 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:21 vm01 bash[20728]: audit 2026-03-09T16:04:20.125250+0000 mon.a (mon.0) 2872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:21 vm01 bash[20728]: audit 2026-03-09T16:04:20.125250+0000 mon.a (mon.0) 2872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:21 vm01 bash[20728]: cluster 2026-03-09T16:04:20.128732+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:21 vm01 bash[20728]: cluster 2026-03-09T16:04:20.128732+0000 mon.a (mon.0) 2873 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:21 vm01 bash[20728]: audit 2026-03-09T16:04:20.160257+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:21 vm01 bash[20728]: audit 2026-03-09T16:04:20.160257+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:21 vm01 bash[20728]: audit 2026-03-09T16:04:20.160574+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:21 vm01 bash[20728]: audit 2026-03-09T16:04:20.160574+0000 mon.a (mon.0) 2874 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T16:04:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:21 vm01 bash[20728]: cluster 2026-03-09T16:04:20.789310+0000 mgr.y (mgr.14520) 415 : cluster [DBG] pgmap v659: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 86 KiB/s rd, 0 B/s wr, 143 op/s 2026-03-09T16:04:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:21 vm01 bash[20728]: cluster 2026-03-09T16:04:20.789310+0000 mgr.y (mgr.14520) 415 : cluster [DBG] pgmap v659: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 86 KiB/s rd, 0 B/s wr, 143 op/s 2026-03-09T16:04:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:22 vm09 bash[22983]: audit 2026-03-09T16:04:21.134165+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T16:04:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:22 vm09 bash[22983]: audit 2026-03-09T16:04:21.134165+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T16:04:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:22 vm09 bash[22983]: cluster 2026-03-09T16:04:21.140629+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T16:04:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:22 vm09 bash[22983]: cluster 2026-03-09T16:04:21.140629+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T16:04:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:22 vm09 bash[22983]: audit 2026-03-09T16:04:21.164091+0000 mon.c (mon.2) 433 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:22 vm09 bash[22983]: audit 2026-03-09T16:04:21.164091+0000 mon.c (mon.2) 433 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:22 vm09 bash[22983]: audit 2026-03-09T16:04:21.164504+0000 mon.a (mon.0) 2877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:22 vm09 bash[22983]: audit 2026-03-09T16:04:21.164504+0000 mon.a (mon.0) 2877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:22.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:22 vm01 bash[28152]: audit 2026-03-09T16:04:21.134165+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T16:04:22.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:22 vm01 bash[28152]: audit 2026-03-09T16:04:21.134165+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T16:04:22.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:22 vm01 bash[28152]: cluster 2026-03-09T16:04:21.140629+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T16:04:22.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:22 vm01 bash[28152]: cluster 2026-03-09T16:04:21.140629+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T16:04:22.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:22 vm01 bash[28152]: audit 2026-03-09T16:04:21.164091+0000 mon.c (mon.2) 433 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:22.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:22 vm01 bash[28152]: audit 2026-03-09T16:04:21.164091+0000 mon.c (mon.2) 433 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:22.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:22 vm01 bash[28152]: audit 2026-03-09T16:04:21.164504+0000 mon.a (mon.0) 2877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:22.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:22 vm01 bash[28152]: audit 2026-03-09T16:04:21.164504+0000 mon.a (mon.0) 2877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:22 vm01 bash[20728]: audit 2026-03-09T16:04:21.134165+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T16:04:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:22 vm01 bash[20728]: audit 2026-03-09T16:04:21.134165+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T16:04:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:22 vm01 bash[20728]: cluster 2026-03-09T16:04:21.140629+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T16:04:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:22 vm01 bash[20728]: cluster 2026-03-09T16:04:21.140629+0000 mon.a (mon.0) 2876 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-09T16:04:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:22 vm01 bash[20728]: audit 2026-03-09T16:04:21.164091+0000 mon.c (mon.2) 433 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:22 vm01 bash[20728]: audit 2026-03-09T16:04:21.164091+0000 mon.c (mon.2) 433 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:22 vm01 bash[20728]: audit 2026-03-09T16:04:21.164504+0000 mon.a (mon.0) 2877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:22 vm01 bash[20728]: audit 2026-03-09T16:04:21.164504+0000 mon.a (mon.0) 2877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T16:04:23.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:04:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:04:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:04:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:23 vm09 bash[22983]: audit 2026-03-09T16:04:22.204196+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T16:04:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:23 vm09 bash[22983]: audit 2026-03-09T16:04:22.204196+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T16:04:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:23 vm09 bash[22983]: cluster 2026-03-09T16:04:22.206455+0000 mon.a (mon.0) 2879 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T16:04:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:23 vm09 bash[22983]: cluster 2026-03-09T16:04:22.206455+0000 mon.a (mon.0) 2879 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T16:04:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:23 vm09 bash[22983]: audit 2026-03-09T16:04:22.227479+0000 mon.c (mon.2) 434 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:23 vm09 bash[22983]: audit 2026-03-09T16:04:22.227479+0000 mon.c (mon.2) 434 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:23 vm09 bash[22983]: audit 2026-03-09T16:04:22.227771+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:23 vm09 bash[22983]: audit 2026-03-09T16:04:22.227771+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:23 vm09 bash[22983]: cluster 2026-03-09T16:04:22.789594+0000 mgr.y (mgr.14520) 416 : cluster [DBG] pgmap v662: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s 2026-03-09T16:04:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:23 vm09 bash[22983]: cluster 2026-03-09T16:04:22.789594+0000 mgr.y (mgr.14520) 416 : cluster [DBG] pgmap v662: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s 2026-03-09T16:04:23.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:23 vm01 bash[28152]: audit 2026-03-09T16:04:22.204196+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T16:04:23.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:23 vm01 bash[28152]: audit 2026-03-09T16:04:22.204196+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T16:04:23.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:23 vm01 bash[28152]: cluster 2026-03-09T16:04:22.206455+0000 mon.a (mon.0) 2879 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T16:04:23.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:23 vm01 bash[28152]: cluster 2026-03-09T16:04:22.206455+0000 mon.a (mon.0) 2879 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T16:04:23.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:23 vm01 bash[28152]: audit 2026-03-09T16:04:22.227479+0000 mon.c (mon.2) 434 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:23 vm01 bash[28152]: audit 2026-03-09T16:04:22.227479+0000 mon.c (mon.2) 434 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:23 vm01 bash[28152]: audit 2026-03-09T16:04:22.227771+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:23 vm01 bash[28152]: audit 2026-03-09T16:04:22.227771+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:23 vm01 bash[28152]: cluster 2026-03-09T16:04:22.789594+0000 mgr.y (mgr.14520) 416 : cluster [DBG] pgmap v662: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:23 vm01 bash[28152]: cluster 2026-03-09T16:04:22.789594+0000 mgr.y (mgr.14520) 416 : cluster [DBG] pgmap v662: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:23 vm01 bash[20728]: audit 2026-03-09T16:04:22.204196+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:23 vm01 bash[20728]: audit 2026-03-09T16:04:22.204196+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:23 vm01 bash[20728]: cluster 2026-03-09T16:04:22.206455+0000 mon.a (mon.0) 2879 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:23 vm01 bash[20728]: cluster 2026-03-09T16:04:22.206455+0000 mon.a (mon.0) 2879 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:23 vm01 bash[20728]: audit 2026-03-09T16:04:22.227479+0000 mon.c (mon.2) 434 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:23 vm01 bash[20728]: audit 2026-03-09T16:04:22.227479+0000 mon.c (mon.2) 434 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:23 vm01 bash[20728]: audit 2026-03-09T16:04:22.227771+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:23 vm01 bash[20728]: audit 2026-03-09T16:04:22.227771+0000 mon.a (mon.0) 2880 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:23 vm01 bash[20728]: cluster 2026-03-09T16:04:22.789594+0000 mgr.y (mgr.14520) 416 : cluster [DBG] pgmap v662: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s 2026-03-09T16:04:23.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:23 vm01 bash[20728]: cluster 2026-03-09T16:04:22.789594+0000 mgr.y (mgr.14520) 416 : cluster [DBG] pgmap v662: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: audit 2026-03-09T16:04:23.226296+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: audit 2026-03-09T16:04:23.226296+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: cluster 2026-03-09T16:04:23.250740+0000 mon.a (mon.0) 2882 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: cluster 2026-03-09T16:04:23.250740+0000 mon.a (mon.0) 2882 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: audit 2026-03-09T16:04:23.285894+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: audit 2026-03-09T16:04:23.285894+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: audit 2026-03-09T16:04:23.286238+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: audit 2026-03-09T16:04:23.286238+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: audit 2026-03-09T16:04:23.286604+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: audit 2026-03-09T16:04:23.286604+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: audit 2026-03-09T16:04:23.286830+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:24 vm09 bash[22983]: audit 2026-03-09T16:04:23.286830+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:24.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: audit 2026-03-09T16:04:23.226296+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:24.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: audit 2026-03-09T16:04:23.226296+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:24.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: cluster 2026-03-09T16:04:23.250740+0000 mon.a (mon.0) 2882 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T16:04:24.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: cluster 2026-03-09T16:04:23.250740+0000 mon.a (mon.0) 2882 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T16:04:24.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: audit 2026-03-09T16:04:23.285894+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: audit 2026-03-09T16:04:23.285894+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: audit 2026-03-09T16:04:23.286238+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: audit 2026-03-09T16:04:23.286238+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: audit 2026-03-09T16:04:23.286604+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: audit 2026-03-09T16:04:23.286604+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: audit 2026-03-09T16:04:23.286830+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:24 vm01 bash[28152]: audit 2026-03-09T16:04:23.286830+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: audit 2026-03-09T16:04:23.226296+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: audit 2026-03-09T16:04:23.226296+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: cluster 2026-03-09T16:04:23.250740+0000 mon.a (mon.0) 2882 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: cluster 2026-03-09T16:04:23.250740+0000 mon.a (mon.0) 2882 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: audit 2026-03-09T16:04:23.285894+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: audit 2026-03-09T16:04:23.285894+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: audit 2026-03-09T16:04:23.286238+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: audit 2026-03-09T16:04:23.286238+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: audit 2026-03-09T16:04:23.286604+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: audit 2026-03-09T16:04:23.286604+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: audit 2026-03-09T16:04:23.286830+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:24.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:24 vm01 bash[20728]: audit 2026-03-09T16:04:23.286830+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-81"}]: dispatch 2026-03-09T16:04:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:25 vm09 bash[22983]: cluster 2026-03-09T16:04:24.235038+0000 mon.a (mon.0) 2885 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T16:04:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:25 vm09 bash[22983]: cluster 2026-03-09T16:04:24.235038+0000 mon.a (mon.0) 2885 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T16:04:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:25 vm09 bash[22983]: cluster 2026-03-09T16:04:24.789969+0000 mgr.y (mgr.14520) 417 : cluster [DBG] pgmap v665: 260 pgs: 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 2026-03-09T16:04:25.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:25 vm09 bash[22983]: cluster 2026-03-09T16:04:24.789969+0000 mgr.y (mgr.14520) 417 : cluster [DBG] pgmap v665: 260 pgs: 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 2026-03-09T16:04:25.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:25 vm01 bash[28152]: cluster 2026-03-09T16:04:24.235038+0000 mon.a (mon.0) 2885 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T16:04:25.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:25 vm01 bash[28152]: cluster 2026-03-09T16:04:24.235038+0000 mon.a (mon.0) 2885 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T16:04:25.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:25 vm01 bash[28152]: cluster 2026-03-09T16:04:24.789969+0000 mgr.y (mgr.14520) 417 : cluster [DBG] pgmap v665: 260 pgs: 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 
2026-03-09T16:04:25.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:25 vm01 bash[28152]: cluster 2026-03-09T16:04:24.789969+0000 mgr.y (mgr.14520) 417 : cluster [DBG] pgmap v665: 260 pgs: 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 2026-03-09T16:04:25.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:25 vm01 bash[20728]: cluster 2026-03-09T16:04:24.235038+0000 mon.a (mon.0) 2885 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T16:04:25.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:25 vm01 bash[20728]: cluster 2026-03-09T16:04:24.235038+0000 mon.a (mon.0) 2885 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-09T16:04:25.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:25 vm01 bash[20728]: cluster 2026-03-09T16:04:24.789969+0000 mgr.y (mgr.14520) 417 : cluster [DBG] pgmap v665: 260 pgs: 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 2026-03-09T16:04:25.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:25 vm01 bash[20728]: cluster 2026-03-09T16:04:24.789969+0000 mgr.y (mgr.14520) 417 : cluster [DBG] pgmap v665: 260 pgs: 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 2026-03-09T16:04:26.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:26 vm09 bash[22983]: cluster 2026-03-09T16:04:25.283413+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T16:04:26.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:26 vm09 bash[22983]: cluster 2026-03-09T16:04:25.283413+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T16:04:26.610 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:26 vm09 bash[22983]: audit 2026-03-09T16:04:25.287749+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.610 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:26 vm09 bash[22983]: audit 2026-03-09T16:04:25.287749+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.610 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:26 vm09 bash[22983]: audit 2026-03-09T16:04:25.288025+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.610 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:26 vm09 bash[22983]: audit 2026-03-09T16:04:25.288025+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:26 vm01 bash[28152]: cluster 2026-03-09T16:04:25.283413+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T16:04:26.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:26 vm01 bash[28152]: cluster 2026-03-09T16:04:25.283413+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T16:04:26.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:26 vm01 bash[28152]: audit 2026-03-09T16:04:25.287749+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:26 vm01 bash[28152]: audit 2026-03-09T16:04:25.287749+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:26 vm01 bash[28152]: audit 2026-03-09T16:04:25.288025+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:26 vm01 bash[28152]: audit 2026-03-09T16:04:25.288025+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:26 vm01 bash[20728]: cluster 2026-03-09T16:04:25.283413+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T16:04:26.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:26 vm01 bash[20728]: cluster 2026-03-09T16:04:25.283413+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-09T16:04:26.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:26 vm01 bash[20728]: audit 2026-03-09T16:04:25.287749+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:26 vm01 bash[20728]: audit 2026-03-09T16:04:25.287749+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:26 vm01 bash[20728]: audit 2026-03-09T16:04:25.288025+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:26 vm01 bash[20728]: audit 2026-03-09T16:04:25.288025+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:26.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:04:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: audit 2026-03-09T16:04:26.276360+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: audit 2026-03-09T16:04:26.276360+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: audit 2026-03-09T16:04:26.290831+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: audit 2026-03-09T16:04:26.290831+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: audit 2026-03-09T16:04:26.292936+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: audit 2026-03-09T16:04:26.292936+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: cluster 2026-03-09T16:04:26.293260+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: cluster 2026-03-09T16:04:26.293260+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: audit 2026-03-09T16:04:26.309870+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: audit 2026-03-09T16:04:26.309870+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: audit 2026-03-09T16:04:26.613125+0000 mgr.y (mgr.14520) 418 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: audit 2026-03-09T16:04:26.613125+0000 mgr.y (mgr.14520) 418 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: cluster 2026-03-09T16:04:26.790395+0000 mgr.y (mgr.14520) 419 : cluster [DBG] pgmap v668: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 2026-03-09T16:04:27.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:27 vm09 bash[22983]: cluster 2026-03-09T16:04:26.790395+0000 mgr.y (mgr.14520) 419 : cluster [DBG] pgmap v668: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 2026-03-09T16:04:27.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: audit 2026-03-09T16:04:26.276360+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:27.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: audit 2026-03-09T16:04:26.276360+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:27.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: audit 2026-03-09T16:04:26.290831+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:27.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: audit 2026-03-09T16:04:26.290831+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:27.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: audit 2026-03-09T16:04:26.292936+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: audit 2026-03-09T16:04:26.292936+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: cluster 2026-03-09T16:04:26.293260+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: cluster 2026-03-09T16:04:26.293260+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: audit 2026-03-09T16:04:26.309870+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: audit 2026-03-09T16:04:26.309870+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: audit 2026-03-09T16:04:26.613125+0000 mgr.y (mgr.14520) 418 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: audit 2026-03-09T16:04:26.613125+0000 mgr.y (mgr.14520) 418 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: cluster 2026-03-09T16:04:26.790395+0000 mgr.y (mgr.14520) 419 : cluster [DBG] pgmap v668: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:27 vm01 bash[28152]: cluster 2026-03-09T16:04:26.790395+0000 mgr.y (mgr.14520) 419 : cluster [DBG] pgmap v668: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: audit 2026-03-09T16:04:26.276360+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: audit 2026-03-09T16:04:26.276360+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: audit 2026-03-09T16:04:26.290831+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: audit 2026-03-09T16:04:26.290831+0000 mon.c (mon.2) 438 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: audit 2026-03-09T16:04:26.292936+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: audit 2026-03-09T16:04:26.292936+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: cluster 2026-03-09T16:04:26.293260+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: cluster 2026-03-09T16:04:26.293260+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: audit 2026-03-09T16:04:26.309870+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: audit 2026-03-09T16:04:26.309870+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: audit 2026-03-09T16:04:26.613125+0000 mgr.y (mgr.14520) 418 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: audit 2026-03-09T16:04:26.613125+0000 mgr.y (mgr.14520) 418 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: cluster 2026-03-09T16:04:26.790395+0000 mgr.y (mgr.14520) 419 : cluster [DBG] pgmap v668: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 2026-03-09T16:04:27.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:27 vm01 bash[20728]: cluster 2026-03-09T16:04:26.790395+0000 mgr.y (mgr.14520) 419 : cluster [DBG] pgmap v668: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 76 op/s 2026-03-09T16:04:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:28 vm09 bash[22983]: audit 2026-03-09T16:04:27.347247+0000 mon.a (mon.0) 2891 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:28 vm09 bash[22983]: audit 2026-03-09T16:04:27.347247+0000 mon.a (mon.0) 2891 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:28 vm09 bash[22983]: cluster 2026-03-09T16:04:27.357712+0000 mon.a (mon.0) 2892 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T16:04:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:28 vm09 bash[22983]: cluster 2026-03-09T16:04:27.357712+0000 mon.a (mon.0) 2892 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T16:04:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:28 vm09 bash[22983]: audit 2026-03-09T16:04:27.369538+0000 mon.c (mon.2) 440 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:28 vm09 bash[22983]: audit 2026-03-09T16:04:27.369538+0000 mon.c (mon.2) 440 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:28 vm09 bash[22983]: audit 2026-03-09T16:04:27.369903+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:28.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:28 vm09 bash[22983]: audit 2026-03-09T16:04:27.369903+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:28.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:28 vm01 bash[28152]: audit 2026-03-09T16:04:27.347247+0000 mon.a (mon.0) 2891 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:28.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:28 vm01 bash[28152]: audit 2026-03-09T16:04:27.347247+0000 mon.a (mon.0) 2891 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:28.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:28 vm01 bash[28152]: cluster 2026-03-09T16:04:27.357712+0000 mon.a (mon.0) 2892 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T16:04:28.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:28 vm01 bash[28152]: cluster 2026-03-09T16:04:27.357712+0000 mon.a (mon.0) 2892 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T16:04:28.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:28 vm01 bash[28152]: audit 2026-03-09T16:04:27.369538+0000 mon.c (mon.2) 440 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:28.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:28 vm01 bash[28152]: audit 2026-03-09T16:04:27.369538+0000 mon.c (mon.2) 440 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:28.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:28 vm01 bash[28152]: audit 2026-03-09T16:04:27.369903+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:28.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:28 vm01 bash[28152]: audit 2026-03-09T16:04:27.369903+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:28 vm01 bash[20728]: audit 2026-03-09T16:04:27.347247+0000 mon.a (mon.0) 2891 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:28 vm01 bash[20728]: audit 2026-03-09T16:04:27.347247+0000 mon.a (mon.0) 2891 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:28 vm01 bash[20728]: cluster 2026-03-09T16:04:27.357712+0000 mon.a (mon.0) 2892 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T16:04:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:28 vm01 bash[20728]: cluster 2026-03-09T16:04:27.357712+0000 mon.a (mon.0) 2892 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-09T16:04:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:28 vm01 bash[20728]: audit 2026-03-09T16:04:27.369538+0000 mon.c (mon.2) 440 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:28 vm01 bash[20728]: audit 2026-03-09T16:04:27.369538+0000 mon.c (mon.2) 440 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:28 vm01 bash[20728]: audit 2026-03-09T16:04:27.369903+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:28.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:28 vm01 bash[20728]: audit 2026-03-09T16:04:27.369903+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: audit 2026-03-09T16:04:28.350617+0000 mon.a (mon.0) 2894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: audit 2026-03-09T16:04:28.350617+0000 mon.a (mon.0) 2894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: cluster 2026-03-09T16:04:28.353280+0000 mon.a (mon.0) 2895 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: cluster 2026-03-09T16:04:28.353280+0000 mon.a (mon.0) 2895 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: audit 2026-03-09T16:04:28.359299+0000 mon.c (mon.2) 441 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: audit 2026-03-09T16:04:28.359299+0000 mon.c (mon.2) 441 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: audit 2026-03-09T16:04:28.365422+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: audit 2026-03-09T16:04:28.365422+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: cluster 2026-03-09T16:04:28.790822+0000 mgr.y (mgr.14520) 420 : cluster [DBG] pgmap v671: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: cluster 2026-03-09T16:04:28.790822+0000 mgr.y (mgr.14520) 420 : cluster [DBG] pgmap v671: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: audit 2026-03-09T16:04:29.319126+0000 mon.a (mon.0) 2897 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:29.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:29 vm09 bash[22983]: audit 2026-03-09T16:04:29.319126+0000 mon.a (mon.0) 2897 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: audit 2026-03-09T16:04:28.350617+0000 mon.a (mon.0) 2894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: audit 2026-03-09T16:04:28.350617+0000 mon.a (mon.0) 2894 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: cluster 2026-03-09T16:04:28.353280+0000 mon.a (mon.0) 2895 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: cluster 2026-03-09T16:04:28.353280+0000 mon.a (mon.0) 2895 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: audit 2026-03-09T16:04:28.359299+0000 mon.c (mon.2) 441 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: audit 2026-03-09T16:04:28.359299+0000 mon.c (mon.2) 441 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: audit 2026-03-09T16:04:28.365422+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: audit 2026-03-09T16:04:28.365422+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: cluster 2026-03-09T16:04:28.790822+0000 mgr.y (mgr.14520) 420 : cluster [DBG] pgmap v671: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: cluster 2026-03-09T16:04:28.790822+0000 mgr.y (mgr.14520) 420 : cluster [DBG] pgmap v671: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: audit 2026-03-09T16:04:29.319126+0000 mon.a (mon.0) 2897 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:29 vm01 bash[28152]: audit 2026-03-09T16:04:29.319126+0000 mon.a (mon.0) 2897 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: audit 2026-03-09T16:04:28.350617+0000 mon.a (mon.0) 2894 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: audit 2026-03-09T16:04:28.350617+0000 mon.a (mon.0) 2894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: cluster 2026-03-09T16:04:28.353280+0000 mon.a (mon.0) 2895 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: cluster 2026-03-09T16:04:28.353280+0000 mon.a (mon.0) 2895 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: audit 2026-03-09T16:04:28.359299+0000 mon.c (mon.2) 441 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: audit 2026-03-09T16:04:28.359299+0000 mon.c (mon.2) 441 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: audit 2026-03-09T16:04:28.365422+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: audit 2026-03-09T16:04:28.365422+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:29.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: cluster 2026-03-09T16:04:28.790822+0000 mgr.y (mgr.14520) 420 : cluster [DBG] pgmap v671: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:29.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: cluster 2026-03-09T16:04:28.790822+0000 mgr.y (mgr.14520) 420 : cluster [DBG] pgmap v671: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:29.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: audit 2026-03-09T16:04:29.319126+0000 mon.a (mon.0) 2897 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:29.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:29 vm01 bash[20728]: audit 2026-03-09T16:04:29.319126+0000 mon.a (mon.0) 2897 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:30 vm09 bash[22983]: audit 2026-03-09T16:04:29.370994+0000 mon.a (mon.0) 2898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:30 vm09 bash[22983]: audit 2026-03-09T16:04:29.370994+0000 mon.a (mon.0) 2898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:30 vm09 bash[22983]: audit 2026-03-09T16:04:29.379188+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:30 vm09 bash[22983]: audit 2026-03-09T16:04:29.379188+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:30 vm09 bash[22983]: cluster 2026-03-09T16:04:29.387770+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T16:04:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:30 vm09 bash[22983]: cluster 2026-03-09T16:04:29.387770+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T16:04:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:30 vm09 bash[22983]: audit 2026-03-09T16:04:29.388909+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:30 vm09 bash[22983]: audit 2026-03-09T16:04:29.388909+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:30.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:30 vm01 bash[28152]: audit 2026-03-09T16:04:29.370994+0000 mon.a (mon.0) 2898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:30.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:30 vm01 bash[28152]: audit 2026-03-09T16:04:29.370994+0000 mon.a (mon.0) 2898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:30.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:30 vm01 bash[28152]: audit 2026-03-09T16:04:29.379188+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:30.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:30 vm01 bash[28152]: audit 2026-03-09T16:04:29.379188+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:30.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:30 vm01 bash[28152]: cluster 2026-03-09T16:04:29.387770+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T16:04:30.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:30 vm01 bash[28152]: cluster 2026-03-09T16:04:29.387770+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T16:04:30.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:30 vm01 bash[28152]: audit 2026-03-09T16:04:29.388909+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:30.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:30 vm01 bash[28152]: audit 2026-03-09T16:04:29.388909+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:30.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:30 vm01 bash[20728]: audit 2026-03-09T16:04:29.370994+0000 mon.a (mon.0) 2898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:30.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:30 vm01 bash[20728]: audit 2026-03-09T16:04:29.370994+0000 mon.a (mon.0) 2898 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:30.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:30 vm01 bash[20728]: audit 2026-03-09T16:04:29.379188+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:30.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:30 vm01 bash[20728]: audit 2026-03-09T16:04:29.379188+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:30.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:30 vm01 bash[20728]: cluster 2026-03-09T16:04:29.387770+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T16:04:30.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:30 vm01 bash[20728]: cluster 2026-03-09T16:04:29.387770+0000 mon.a (mon.0) 2899 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-09T16:04:30.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:30 vm01 bash[20728]: audit 2026-03-09T16:04:29.388909+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:30.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:30 vm01 bash[20728]: audit 2026-03-09T16:04:29.388909+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:31.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:31 vm01 bash[28152]: audit 2026-03-09T16:04:30.383123+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:31.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:31 vm01 bash[28152]: audit 2026-03-09T16:04:30.383123+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:31.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:31 vm01 bash[28152]: cluster 2026-03-09T16:04:30.386295+0000 mon.a (mon.0) 2902 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T16:04:31.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:31 vm01 bash[28152]: cluster 2026-03-09T16:04:30.386295+0000 mon.a (mon.0) 2902 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T16:04:31.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:31 vm01 bash[28152]: cluster 2026-03-09T16:04:30.791146+0000 mgr.y (mgr.14520) 421 : cluster [DBG] pgmap v674: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:31.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:31 vm01 bash[28152]: cluster 2026-03-09T16:04:30.791146+0000 mgr.y (mgr.14520) 421 : cluster [DBG] pgmap v674: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:31.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:31 vm01 bash[20728]: audit 2026-03-09T16:04:30.383123+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:31.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:31 vm01 bash[20728]: audit 2026-03-09T16:04:30.383123+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:31.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:31 vm01 bash[20728]: cluster 2026-03-09T16:04:30.386295+0000 mon.a (mon.0) 2902 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T16:04:31.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:31 vm01 bash[20728]: cluster 2026-03-09T16:04:30.386295+0000 mon.a (mon.0) 2902 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T16:04:31.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:31 vm01 bash[20728]: cluster 2026-03-09T16:04:30.791146+0000 mgr.y (mgr.14520) 421 : cluster [DBG] pgmap v674: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:31.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:31 vm01 bash[20728]: cluster 2026-03-09T16:04:30.791146+0000 mgr.y (mgr.14520) 421 : cluster [DBG] pgmap v674: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:31 vm09 bash[22983]: audit 2026-03-09T16:04:30.383123+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:31 vm09 bash[22983]: audit 2026-03-09T16:04:30.383123+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:31 vm09 bash[22983]: cluster 2026-03-09T16:04:30.386295+0000 mon.a (mon.0) 2902 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T16:04:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:31 vm09 bash[22983]: cluster 2026-03-09T16:04:30.386295+0000 mon.a (mon.0) 2902 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-09T16:04:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:31 vm09 bash[22983]: cluster 2026-03-09T16:04:30.791146+0000 mgr.y (mgr.14520) 421 : cluster [DBG] pgmap v674: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:31 vm09 bash[22983]: cluster 2026-03-09T16:04:30.791146+0000 mgr.y (mgr.14520) 421 : cluster [DBG] pgmap v674: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:32.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:32 vm01 bash[28152]: cluster 2026-03-09T16:04:31.400178+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T16:04:32.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:32 vm01 bash[28152]: cluster 2026-03-09T16:04:31.400178+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T16:04:32.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:32 vm01 bash[20728]: cluster 2026-03-09T16:04:31.400178+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T16:04:32.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:32 vm01 bash[20728]: cluster 2026-03-09T16:04:31.400178+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T16:04:32.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:32 vm09 bash[22983]: cluster 2026-03-09T16:04:31.400178+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T16:04:32.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:32 vm09 bash[22983]: cluster 2026-03-09T16:04:31.400178+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-09T16:04:33.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:04:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:04:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: cluster 2026-03-09T16:04:32.410134+0000 mon.a (mon.0) 2904 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: cluster 2026-03-09T16:04:32.410134+0000 mon.a (mon.0) 2904 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: audit 2026-03-09T16:04:32.455204+0000 mon.c (mon.2) 443 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: audit 2026-03-09T16:04:32.455204+0000 mon.c (mon.2) 443 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: audit 2026-03-09T16:04:32.455509+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: audit 2026-03-09T16:04:32.455509+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: audit 2026-03-09T16:04:32.455955+0000 mon.c (mon.2) 444 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: audit 2026-03-09T16:04:32.455955+0000 mon.c (mon.2) 444 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: audit 2026-03-09T16:04:32.456153+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: audit 2026-03-09T16:04:32.456153+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: cluster 2026-03-09T16:04:32.791493+0000 mgr.y (mgr.14520) 422 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:33.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:33 vm01 bash[28152]: cluster 2026-03-09T16:04:32.791493+0000 mgr.y (mgr.14520) 422 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: cluster 2026-03-09T16:04:32.410134+0000 mon.a (mon.0) 2904 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: cluster 2026-03-09T16:04:32.410134+0000 mon.a (mon.0) 2904 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: audit 2026-03-09T16:04:32.455204+0000 mon.c (mon.2) 443 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: audit 2026-03-09T16:04:32.455204+0000 mon.c (mon.2) 443 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: audit 2026-03-09T16:04:32.455509+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: audit 2026-03-09T16:04:32.455509+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: audit 2026-03-09T16:04:32.455955+0000 mon.c (mon.2) 444 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: audit 2026-03-09T16:04:32.455955+0000 mon.c (mon.2) 444 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: audit 2026-03-09T16:04:32.456153+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: audit 2026-03-09T16:04:32.456153+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: cluster 2026-03-09T16:04:32.791493+0000 mgr.y (mgr.14520) 422 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:33.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:33 vm01 bash[20728]: cluster 2026-03-09T16:04:32.791493+0000 mgr.y (mgr.14520) 422 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:33.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: cluster 2026-03-09T16:04:32.410134+0000 mon.a (mon.0) 2904 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-09T16:04:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: cluster 2026-03-09T16:04:32.410134+0000 mon.a (mon.0) 2904 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-09T16:04:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: audit 2026-03-09T16:04:32.455204+0000 mon.c (mon.2) 443 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: audit 2026-03-09T16:04:32.455204+0000 mon.c (mon.2) 443 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: audit 2026-03-09T16:04:32.455509+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: audit 2026-03-09T16:04:32.455509+0000 mon.a (mon.0) 2905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: audit 2026-03-09T16:04:32.455955+0000 mon.c (mon.2) 444 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: audit 2026-03-09T16:04:32.455955+0000 mon.c (mon.2) 444 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: audit 2026-03-09T16:04:32.456153+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: audit 2026-03-09T16:04:32.456153+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-83"}]: dispatch 2026-03-09T16:04:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: cluster 2026-03-09T16:04:32.791493+0000 mgr.y (mgr.14520) 422 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:33.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:33 vm09 bash[22983]: cluster 2026-03-09T16:04:32.791493+0000 mgr.y (mgr.14520) 422 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:34.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:34 vm01 bash[28152]: cluster 2026-03-09T16:04:33.434287+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T16:04:34.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:34 vm01 bash[28152]: cluster 2026-03-09T16:04:33.434287+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T16:04:34.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:34 vm01 bash[20728]: cluster 2026-03-09T16:04:33.434287+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T16:04:34.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:34 vm01 bash[20728]: cluster 2026-03-09T16:04:33.434287+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T16:04:34.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:34 vm09 bash[22983]: cluster 2026-03-09T16:04:33.434287+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T16:04:34.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:34 vm09 bash[22983]: cluster 2026-03-09T16:04:33.434287+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: cluster 2026-03-09T16:04:34.427711+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: cluster 2026-03-09T16:04:34.427711+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:34.443431+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:34.443431+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:34.443697+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:34.443697+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: cluster 2026-03-09T16:04:34.791873+0000 mgr.y (mgr.14520) 423 : cluster [DBG] pgmap v680: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: cluster 2026-03-09T16:04:34.791873+0000 mgr.y (mgr.14520) 423 : cluster [DBG] pgmap v680: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:34.804569+0000 mon.a (mon.0) 2910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:34.804569+0000 mon.a (mon.0) 2910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:35.139251+0000 mon.a (mon.0) 2911 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:35.139251+0000 mon.a (mon.0) 2911 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:35.139839+0000 mon.a (mon.0) 2912 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:35.139839+0000 mon.a (mon.0) 2912 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:04:35.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:35.144757+0000 mon.a (mon.0) 2913 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:04:35.883 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:35 vm09 bash[22983]: audit 2026-03-09T16:04:35.144757+0000 mon.a (mon.0) 2913 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: cluster 2026-03-09T16:04:34.427711+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: cluster 2026-03-09T16:04:34.427711+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:34.443431+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:34.443431+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:34.443697+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:34.443697+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: cluster 2026-03-09T16:04:34.791873+0000 mgr.y (mgr.14520) 423 : cluster [DBG] pgmap v680: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: cluster 2026-03-09T16:04:34.791873+0000 mgr.y (mgr.14520) 423 : cluster [DBG] pgmap v680: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:34.804569+0000 mon.a (mon.0) 2910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:34.804569+0000 mon.a (mon.0) 2910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:35.139251+0000 mon.a (mon.0) 2911 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:35.139251+0000 mon.a (mon.0) 2911 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:35.139839+0000 mon.a (mon.0) 2912 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:35.139839+0000 mon.a (mon.0) 2912 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:35.144757+0000 mon.a (mon.0) 2913 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:35 vm01 bash[28152]: audit 2026-03-09T16:04:35.144757+0000 mon.a (mon.0) 2913 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:04:35.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: cluster 2026-03-09T16:04:34.427711+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: cluster 2026-03-09T16:04:34.427711+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:34.443431+0000 mon.c (mon.2) 
445 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:34.443431+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:34.443697+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:34.443697+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: cluster 2026-03-09T16:04:34.791873+0000 mgr.y (mgr.14520) 423 : cluster [DBG] pgmap v680: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: cluster 2026-03-09T16:04:34.791873+0000 mgr.y (mgr.14520) 423 : cluster [DBG] pgmap v680: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:34.804569+0000 mon.a (mon.0) 2910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:34.804569+0000 mon.a (mon.0) 2910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:35.139251+0000 mon.a (mon.0) 2911 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:35.139251+0000 mon.a (mon.0) 2911 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:35.139839+0000 mon.a (mon.0) 2912 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:35.139839+0000 mon.a (mon.0) 2912 : audit [INF] 
from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:35.144757+0000 mon.a (mon.0) 2913 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:04:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:35 vm01 bash[20728]: audit 2026-03-09T16:04:35.144757+0000 mon.a (mon.0) 2913 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:04:36.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:04:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:04:36.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: cluster 2026-03-09T16:04:35.422989+0000 mon.a (mon.0) 2914 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:36.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: cluster 2026-03-09T16:04:35.422989+0000 mon.a (mon.0) 2914 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: audit 2026-03-09T16:04:35.433082+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: audit 2026-03-09T16:04:35.433082+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: cluster 2026-03-09T16:04:35.439702+0000 mon.a (mon.0) 2916 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T16:04:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: cluster 2026-03-09T16:04:35.439702+0000 mon.a (mon.0) 2916 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T16:04:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: audit 2026-03-09T16:04:35.468058+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: audit 2026-03-09T16:04:35.468058+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: audit 2026-03-09T16:04:35.471614+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: audit 2026-03-09T16:04:35.471614+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: audit 2026-03-09T16:04:35.471845+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:36.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:36 vm09 bash[22983]: audit 2026-03-09T16:04:35.471845+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: cluster 2026-03-09T16:04:35.422989+0000 mon.a (mon.0) 2914 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: cluster 2026-03-09T16:04:35.422989+0000 mon.a (mon.0) 2914 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: audit 2026-03-09T16:04:35.433082+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: audit 2026-03-09T16:04:35.433082+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: cluster 2026-03-09T16:04:35.439702+0000 mon.a (mon.0) 2916 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: cluster 2026-03-09T16:04:35.439702+0000 mon.a (mon.0) 2916 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: audit 2026-03-09T16:04:35.468058+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: audit 2026-03-09T16:04:35.468058+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: audit 2026-03-09T16:04:35.471614+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: audit 2026-03-09T16:04:35.471614+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: audit 2026-03-09T16:04:35.471845+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:36 vm01 bash[28152]: audit 2026-03-09T16:04:35.471845+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: cluster 2026-03-09T16:04:35.422989+0000 mon.a (mon.0) 2914 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: cluster 2026-03-09T16:04:35.422989+0000 mon.a (mon.0) 2914 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: audit 2026-03-09T16:04:35.433082+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: audit 2026-03-09T16:04:35.433082+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: cluster 2026-03-09T16:04:35.439702+0000 mon.a (mon.0) 2916 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: cluster 2026-03-09T16:04:35.439702+0000 mon.a (mon.0) 2916 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: audit 2026-03-09T16:04:35.468058+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: audit 2026-03-09T16:04:35.468058+0000 mon.c (mon.2) 446 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: audit 2026-03-09T16:04:35.471614+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: audit 2026-03-09T16:04:35.471614+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: audit 2026-03-09T16:04:35.471845+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:36.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:36 vm01 bash[20728]: audit 2026-03-09T16:04:35.471845+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: audit 2026-03-09T16:04:36.444310+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: audit 2026-03-09T16:04:36.444310+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: cluster 2026-03-09T16:04:36.449013+0000 mon.a (mon.0) 2919 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: cluster 2026-03-09T16:04:36.449013+0000 mon.a (mon.0) 2919 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: audit 2026-03-09T16:04:36.449654+0000 mon.c (mon.2) 448 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: audit 2026-03-09T16:04:36.449654+0000 mon.c (mon.2) 448 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: audit 2026-03-09T16:04:36.450214+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: audit 2026-03-09T16:04:36.450214+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: audit 2026-03-09T16:04:36.623810+0000 mgr.y (mgr.14520) 424 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: audit 2026-03-09T16:04:36.623810+0000 mgr.y (mgr.14520) 424 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: cluster 2026-03-09T16:04:36.792288+0000 mgr.y (mgr.14520) 425 : cluster [DBG] pgmap v683: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:37 vm09 bash[22983]: cluster 2026-03-09T16:04:36.792288+0000 mgr.y (mgr.14520) 425 : cluster [DBG] pgmap v683: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: audit 2026-03-09T16:04:36.444310+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: audit 2026-03-09T16:04:36.444310+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: cluster 2026-03-09T16:04:36.449013+0000 mon.a (mon.0) 2919 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: cluster 2026-03-09T16:04:36.449013+0000 mon.a (mon.0) 2919 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: audit 2026-03-09T16:04:36.449654+0000 mon.c (mon.2) 448 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: audit 2026-03-09T16:04:36.449654+0000 mon.c (mon.2) 448 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: audit 2026-03-09T16:04:36.450214+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: audit 2026-03-09T16:04:36.450214+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: audit 2026-03-09T16:04:36.623810+0000 mgr.y (mgr.14520) 424 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: audit 2026-03-09T16:04:36.623810+0000 mgr.y (mgr.14520) 424 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: cluster 2026-03-09T16:04:36.792288+0000 mgr.y (mgr.14520) 425 : cluster [DBG] pgmap v683: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:37 vm01 bash[28152]: cluster 2026-03-09T16:04:36.792288+0000 mgr.y (mgr.14520) 425 : cluster [DBG] pgmap v683: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: audit 2026-03-09T16:04:36.444310+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: audit 2026-03-09T16:04:36.444310+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: cluster 2026-03-09T16:04:36.449013+0000 mon.a (mon.0) 2919 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: cluster 2026-03-09T16:04:36.449013+0000 mon.a (mon.0) 2919 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: audit 2026-03-09T16:04:36.449654+0000 mon.c (mon.2) 448 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: audit 2026-03-09T16:04:36.449654+0000 mon.c (mon.2) 448 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: audit 2026-03-09T16:04:36.450214+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: audit 2026-03-09T16:04:36.450214+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: audit 2026-03-09T16:04:36.623810+0000 mgr.y (mgr.14520) 424 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: audit 2026-03-09T16:04:36.623810+0000 mgr.y (mgr.14520) 424 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: cluster 2026-03-09T16:04:36.792288+0000 mgr.y (mgr.14520) 425 : cluster [DBG] pgmap v683: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:37.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:37 vm01 bash[20728]: cluster 2026-03-09T16:04:36.792288+0000 mgr.y (mgr.14520) 425 : cluster [DBG] pgmap v683: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T16:04:38.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:38 vm09 bash[22983]: audit 2026-03-09T16:04:37.451740+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:38.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:38 vm09 bash[22983]: audit 2026-03-09T16:04:37.451740+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:38.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:38 vm09 bash[22983]: audit 2026-03-09T16:04:37.459759+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:38.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:38 vm09 bash[22983]: audit 2026-03-09T16:04:37.459759+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:38.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:38 vm09 bash[22983]: cluster 2026-03-09T16:04:37.477837+0000 mon.a (mon.0) 2922 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T16:04:38.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:38 vm09 bash[22983]: cluster 2026-03-09T16:04:37.477837+0000 mon.a (mon.0) 2922 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T16:04:38.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:38 vm09 bash[22983]: audit 2026-03-09T16:04:37.478630+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:38.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:38 vm09 bash[22983]: audit 2026-03-09T16:04:37.478630+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:38 vm01 bash[28152]: audit 2026-03-09T16:04:37.451740+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:38 vm01 bash[28152]: audit 2026-03-09T16:04:37.451740+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:38 vm01 bash[28152]: audit 2026-03-09T16:04:37.459759+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:38 vm01 bash[28152]: audit 2026-03-09T16:04:37.459759+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:38 vm01 bash[28152]: cluster 2026-03-09T16:04:37.477837+0000 mon.a (mon.0) 2922 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:38 vm01 bash[28152]: cluster 2026-03-09T16:04:37.477837+0000 mon.a (mon.0) 2922 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:38 vm01 bash[28152]: audit 2026-03-09T16:04:37.478630+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:38 vm01 bash[28152]: audit 2026-03-09T16:04:37.478630+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:38 vm01 bash[20728]: audit 2026-03-09T16:04:37.451740+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:38 vm01 bash[20728]: audit 2026-03-09T16:04:37.451740+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_tier","val": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:38 vm01 bash[20728]: audit 2026-03-09T16:04:37.459759+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:38 vm01 bash[20728]: audit 2026-03-09T16:04:37.459759+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:38 vm01 bash[20728]: cluster 2026-03-09T16:04:37.477837+0000 mon.a (mon.0) 2922 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:38 vm01 bash[20728]: cluster 2026-03-09T16:04:37.477837+0000 mon.a (mon.0) 2922 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:38 vm01 bash[20728]: audit 2026-03-09T16:04:37.478630+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:38.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:38 vm01 bash[20728]: audit 2026-03-09T16:04:37.478630+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:39 vm09 bash[22983]: audit 2026-03-09T16:04:38.455373+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:39 vm09 bash[22983]: audit 2026-03-09T16:04:38.455373+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:39 vm09 bash[22983]: audit 2026-03-09T16:04:38.459745+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:39 vm09 bash[22983]: audit 2026-03-09T16:04:38.459745+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:39 vm09 bash[22983]: cluster 2026-03-09T16:04:38.464652+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T16:04:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:39 vm09 bash[22983]: cluster 2026-03-09T16:04:38.464652+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T16:04:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:39 vm09 bash[22983]: audit 2026-03-09T16:04:38.467405+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:39 vm09 bash[22983]: audit 2026-03-09T16:04:38.467405+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:39 vm09 bash[22983]: cluster 2026-03-09T16:04:38.792795+0000 mgr.y (mgr.14520) 426 : cluster [DBG] pgmap v686: 292 pgs: 19 unknown, 273 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:39.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:39 vm09 bash[22983]: cluster 2026-03-09T16:04:38.792795+0000 mgr.y (mgr.14520) 426 : cluster [DBG] pgmap v686: 292 pgs: 19 unknown, 273 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:39 vm01 bash[28152]: audit 2026-03-09T16:04:38.455373+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:39 vm01 bash[28152]: audit 2026-03-09T16:04:38.455373+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:39 vm01 bash[28152]: audit 2026-03-09T16:04:38.459745+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:39 vm01 bash[28152]: audit 2026-03-09T16:04:38.459745+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:39 vm01 bash[28152]: cluster 2026-03-09T16:04:38.464652+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:39 vm01 bash[28152]: cluster 2026-03-09T16:04:38.464652+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:39 vm01 bash[28152]: audit 2026-03-09T16:04:38.467405+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:39 vm01 bash[28152]: audit 2026-03-09T16:04:38.467405+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:39 vm01 bash[28152]: cluster 2026-03-09T16:04:38.792795+0000 mgr.y (mgr.14520) 426 : cluster [DBG] pgmap v686: 292 pgs: 19 unknown, 273 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:39 vm01 bash[28152]: cluster 2026-03-09T16:04:38.792795+0000 mgr.y (mgr.14520) 426 : cluster [DBG] pgmap v686: 292 pgs: 19 unknown, 273 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:39 vm01 bash[20728]: audit 2026-03-09T16:04:38.455373+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:39 vm01 bash[20728]: audit 2026-03-09T16:04:38.455373+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:39 vm01 bash[20728]: audit 2026-03-09T16:04:38.459745+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:39 vm01 bash[20728]: audit 2026-03-09T16:04:38.459745+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:39 vm01 bash[20728]: cluster 2026-03-09T16:04:38.464652+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:39 vm01 bash[20728]: cluster 2026-03-09T16:04:38.464652+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:39 vm01 bash[20728]: audit 2026-03-09T16:04:38.467405+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:39 vm01 bash[20728]: audit 2026-03-09T16:04:38.467405+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:39 vm01 bash[20728]: cluster 2026-03-09T16:04:38.792795+0000 mgr.y (mgr.14520) 426 : cluster [DBG] pgmap v686: 292 pgs: 19 unknown, 273 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:39.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:39 vm01 bash[20728]: cluster 2026-03-09T16:04:38.792795+0000 mgr.y (mgr.14520) 426 : cluster [DBG] pgmap v686: 292 pgs: 19 unknown, 273 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:40.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:40 vm01 bash[28152]: audit 2026-03-09T16:04:39.613515+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:40.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:40 vm01 bash[28152]: audit 2026-03-09T16:04:39.613515+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:40.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:40 vm01 bash[28152]: cluster 2026-03-09T16:04:39.618548+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T16:04:40.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:40 vm01 bash[28152]: cluster 2026-03-09T16:04:39.618548+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T16:04:40.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:40 vm01 bash[20728]: audit 2026-03-09T16:04:39.613515+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:40.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:40 vm01 bash[20728]: audit 2026-03-09T16:04:39.613515+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:40.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:40 vm01 bash[20728]: cluster 2026-03-09T16:04:39.618548+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T16:04:40.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:40 vm01 bash[20728]: cluster 2026-03-09T16:04:39.618548+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T16:04:41.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:40 vm09 bash[22983]: audit 2026-03-09T16:04:39.613515+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:41.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:40 vm09 bash[22983]: audit 2026-03-09T16:04:39.613515+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:41.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:40 vm09 bash[22983]: cluster 2026-03-09T16:04:39.618548+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T16:04:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:40 vm09 bash[22983]: cluster 2026-03-09T16:04:39.618548+0000 mon.a (mon.0) 2928 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in 2026-03-09T16:04:41.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:41 vm01 bash[28152]: cluster 2026-03-09T16:04:40.642958+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T16:04:41.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:41 vm01 bash[28152]: cluster 2026-03-09T16:04:40.642958+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T16:04:41.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:41 vm01 bash[28152]: cluster 2026-03-09T16:04:40.793224+0000 mgr.y (mgr.14520) 427 : cluster [DBG] pgmap v689: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:41.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:41 vm01 bash[28152]: cluster 2026-03-09T16:04:40.793224+0000 mgr.y (mgr.14520) 427 : cluster [DBG] pgmap v689: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:41.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:41 vm01 bash[20728]: cluster 2026-03-09T16:04:40.642958+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T16:04:41.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:41 vm01 bash[20728]: cluster 2026-03-09T16:04:40.642958+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T16:04:41.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:41 vm01 bash[20728]: cluster 2026-03-09T16:04:40.793224+0000 mgr.y (mgr.14520) 427 : cluster [DBG] pgmap v689: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:41.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:41 vm01 bash[20728]: cluster 2026-03-09T16:04:40.793224+0000 mgr.y (mgr.14520) 427 : cluster [DBG] pgmap v689: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:42.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:41 vm09 bash[22983]: cluster 2026-03-09T16:04:40.642958+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T16:04:42.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:41 vm09 bash[22983]: cluster 2026-03-09T16:04:40.642958+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in 2026-03-09T16:04:42.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:41 vm09 bash[22983]: cluster 2026-03-09T16:04:40.793224+0000 mgr.y (mgr.14520) 427 : cluster [DBG] pgmap v689: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:42.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:41 vm09 bash[22983]: cluster 2026-03-09T16:04:40.793224+0000 mgr.y (mgr.14520) 427 : cluster [DBG] pgmap v689: 292 pgs: 292 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:42.926 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:42 vm01 bash[28152]: cluster 2026-03-09T16:04:41.672620+0000 mon.a (mon.0) 2930 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T16:04:42.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:42 vm01 bash[28152]: cluster 2026-03-09T16:04:41.672620+0000 mon.a (mon.0) 2930 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T16:04:42.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:42 vm01 bash[28152]: audit 2026-03-09T16:04:41.719686+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:42.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:42 vm01 bash[28152]: audit 2026-03-09T16:04:41.719686+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:42.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:42 vm01 bash[28152]: audit 2026-03-09T16:04:41.720523+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:42.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:42 vm01 bash[28152]: audit 2026-03-09T16:04:41.720523+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:42.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:42 vm01 bash[28152]: audit 2026-03-09T16:04:41.721397+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:42.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:42 vm01 bash[28152]: audit 2026-03-09T16:04:41.721397+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:42.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:42 vm01 bash[28152]: audit 2026-03-09T16:04:41.721940+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:42.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:42 vm01 bash[28152]: audit 2026-03-09T16:04:41.721940+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:42.927 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:04:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:04:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:04:42.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:42 vm01 bash[20728]: cluster 2026-03-09T16:04:41.672620+0000 mon.a (mon.0) 2930 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T16:04:42.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:42 vm01 bash[20728]: cluster 2026-03-09T16:04:41.672620+0000 mon.a (mon.0) 2930 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T16:04:42.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:42 vm01 bash[20728]: audit 2026-03-09T16:04:41.719686+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:42.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:42 vm01 bash[20728]: audit 2026-03-09T16:04:41.719686+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:42.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:42 vm01 bash[20728]: audit 2026-03-09T16:04:41.720523+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:42.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:42 vm01 bash[20728]: audit 2026-03-09T16:04:41.720523+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:42.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:42 vm01 bash[20728]: audit 2026-03-09T16:04:41.721397+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:42.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:42 vm01 bash[20728]: audit 2026-03-09T16:04:41.721397+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:42.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:42 vm01 bash[20728]: audit 2026-03-09T16:04:41.721940+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:42.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:42 vm01 bash[20728]: audit 2026-03-09T16:04:41.721940+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:43.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:42 vm09 bash[22983]: cluster 2026-03-09T16:04:41.672620+0000 mon.a (mon.0) 2930 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T16:04:43.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:42 vm09 bash[22983]: cluster 2026-03-09T16:04:41.672620+0000 mon.a (mon.0) 2930 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in 2026-03-09T16:04:43.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:42 vm09 bash[22983]: audit 2026-03-09T16:04:41.719686+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:42 vm09 bash[22983]: audit 2026-03-09T16:04:41.719686+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:42 vm09 bash[22983]: audit 2026-03-09T16:04:41.720523+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:42 vm09 bash[22983]: audit 2026-03-09T16:04:41.720523+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:42 vm09 bash[22983]: audit 2026-03-09T16:04:41.721397+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:42 vm09 bash[22983]: audit 2026-03-09T16:04:41.721397+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:42 vm09 bash[22983]: audit 2026-03-09T16:04:41.721940+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:42 vm09 bash[22983]: audit 2026-03-09T16:04:41.721940+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-85"}]: dispatch 2026-03-09T16:04:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:43 vm09 bash[22983]: cluster 2026-03-09T16:04:42.683693+0000 mon.a (mon.0) 2933 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T16:04:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:43 vm09 bash[22983]: cluster 2026-03-09T16:04:42.683693+0000 mon.a (mon.0) 2933 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T16:04:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:43 vm09 bash[22983]: cluster 2026-03-09T16:04:42.793631+0000 mgr.y (mgr.14520) 428 : cluster [DBG] pgmap v692: 260 pgs: 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:43 vm09 bash[22983]: cluster 2026-03-09T16:04:42.793631+0000 mgr.y (mgr.14520) 428 : cluster [DBG] pgmap v692: 260 pgs: 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:43 vm01 bash[28152]: cluster 2026-03-09T16:04:42.683693+0000 mon.a (mon.0) 2933 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T16:04:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:43 vm01 bash[28152]: cluster 2026-03-09T16:04:42.683693+0000 mon.a (mon.0) 2933 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T16:04:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:43 vm01 bash[28152]: cluster 2026-03-09T16:04:42.793631+0000 mgr.y (mgr.14520) 428 : cluster [DBG] pgmap v692: 260 pgs: 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:43 vm01 bash[28152]: cluster 2026-03-09T16:04:42.793631+0000 mgr.y (mgr.14520) 428 : cluster [DBG] pgmap v692: 260 pgs: 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:44.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:43 vm01 bash[20728]: cluster 2026-03-09T16:04:42.683693+0000 mon.a (mon.0) 2933 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T16:04:44.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:43 vm01 bash[20728]: cluster 2026-03-09T16:04:42.683693+0000 mon.a (mon.0) 2933 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-09T16:04:44.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:43 vm01 bash[20728]: cluster 2026-03-09T16:04:42.793631+0000 mgr.y (mgr.14520) 428 : cluster [DBG] pgmap v692: 260 pgs: 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:44.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:43 vm01 bash[20728]: cluster 2026-03-09T16:04:42.793631+0000 mgr.y (mgr.14520) 428 : cluster [DBG] pgmap v692: 260 pgs: 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: OK ] LibRadosTwoPoolsPP.ProxyRead (18413 ms) 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.CachePin 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.CachePin (22508 ms) 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] 
LibRadosTwoPoolsPP.SetRedirectRead 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.SetRedirectRead (2984 ms) 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestPromoteRead 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestPromoteRead (3116 ms) 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRefRead 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRefRead (3299 ms) 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestUnset 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestUnset (3030 ms) 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestDedupRefRead 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestDedupRefRead (4216 ms) 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount (40348 ms) 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount2 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount2 (16845 ms) 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestTestSnapCreate 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestTestSnapCreate (4055 ms) 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote (3045 ms) 2026-03-09T16:04:44.731 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification (24818 ms) 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapIncCount 
2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapIncCount (15078 ms) 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvict 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvict (5079 ms) 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictPromote 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictPromote (4067 ms) 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: waiting for scrubs... 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: done waiting 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch (24973 ms) 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.DedupFlushRead 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.DedupFlushRead (10142 ms) 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushSnap 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushSnap (9195 ms) 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushDupCount 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushDupCount (9242 ms) 2026-03-09T16:04:44.732 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringFlush 2026-03-09T16:04:45.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:44 vm09 bash[22983]: cluster 2026-03-09T16:04:43.689415+0000 mon.a (mon.0) 2934 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T16:04:45.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:44 vm09 bash[22983]: cluster 2026-03-09T16:04:43.689415+0000 mon.a (mon.0) 2934 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T16:04:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:44 vm09 bash[22983]: audit 2026-03-09T16:04:43.692130+0000 mon.c (mon.2) 453 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:44 vm09 bash[22983]: audit 2026-03-09T16:04:43.692130+0000 mon.c (mon.2) 453 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:44 vm09 bash[22983]: audit 2026-03-09T16:04:43.696568+0000 mon.a (mon.0) 2935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:44 vm09 bash[22983]: audit 2026-03-09T16:04:43.696568+0000 mon.a (mon.0) 2935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:44 vm09 bash[22983]: cluster 2026-03-09T16:04:43.875367+0000 mon.a (mon.0) 2936 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:44 vm09 bash[22983]: cluster 2026-03-09T16:04:43.875367+0000 mon.a (mon.0) 2936 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:44 vm09 bash[22983]: audit 2026-03-09T16:04:44.327618+0000 mon.a (mon.0) 2937 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:44 vm09 bash[22983]: audit 2026-03-09T16:04:44.327618+0000 mon.a (mon.0) 2937 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:44 vm01 bash[28152]: cluster 2026-03-09T16:04:43.689415+0000 mon.a (mon.0) 2934 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:44 vm01 bash[28152]: cluster 2026-03-09T16:04:43.689415+0000 mon.a (mon.0) 2934 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:44 vm01 bash[28152]: audit 2026-03-09T16:04:43.692130+0000 mon.c (mon.2) 453 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:44 vm01 bash[28152]: audit 2026-03-09T16:04:43.692130+0000 mon.c (mon.2) 453 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:44 vm01 bash[28152]: audit 2026-03-09T16:04:43.696568+0000 mon.a (mon.0) 2935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:44 vm01 bash[28152]: audit 2026-03-09T16:04:43.696568+0000 mon.a (mon.0) 2935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:44 vm01 bash[28152]: cluster 2026-03-09T16:04:43.875367+0000 mon.a (mon.0) 2936 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:44 vm01 bash[28152]: cluster 2026-03-09T16:04:43.875367+0000 mon.a (mon.0) 2936 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:44 vm01 bash[28152]: audit 2026-03-09T16:04:44.327618+0000 mon.a (mon.0) 2937 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:44 vm01 bash[28152]: audit 2026-03-09T16:04:44.327618+0000 mon.a (mon.0) 2937 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:44 vm01 bash[20728]: cluster 2026-03-09T16:04:43.689415+0000 mon.a (mon.0) 2934 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:44 vm01 bash[20728]: cluster 2026-03-09T16:04:43.689415+0000 mon.a (mon.0) 2934 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:44 vm01 bash[20728]: audit 2026-03-09T16:04:43.692130+0000 mon.c (mon.2) 453 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:44 vm01 bash[20728]: audit 2026-03-09T16:04:43.692130+0000 mon.c (mon.2) 453 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:44 vm01 bash[20728]: audit 2026-03-09T16:04:43.696568+0000 mon.a (mon.0) 2935 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:44 vm01 bash[20728]: audit 2026-03-09T16:04:43.696568+0000 mon.a (mon.0) 2935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:44 vm01 bash[20728]: cluster 2026-03-09T16:04:43.875367+0000 mon.a (mon.0) 2936 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:44 vm01 bash[20728]: cluster 2026-03-09T16:04:43.875367+0000 mon.a (mon.0) 2936 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:44 vm01 bash[20728]: audit 2026-03-09T16:04:44.327618+0000 mon.a (mon.0) 2937 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:44 vm01 bash[20728]: audit 2026-03-09T16:04:44.327618+0000 mon.a (mon.0) 2937 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: audit 2026-03-09T16:04:44.686789+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: audit 2026-03-09T16:04:44.686789+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: cluster 2026-03-09T16:04:44.701780+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: cluster 2026-03-09T16:04:44.701780+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: audit 2026-03-09T16:04:44.715294+0000 mon.c (mon.2) 454 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: audit 2026-03-09T16:04:44.715294+0000 mon.c (mon.2) 454 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: cluster 2026-03-09T16:04:44.793972+0000 mgr.y (mgr.14520) 429 : cluster [DBG] pgmap v695: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: cluster 2026-03-09T16:04:44.793972+0000 mgr.y (mgr.14520) 429 : cluster [DBG] pgmap v695: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: cluster 2026-03-09T16:04:45.694613+0000 mon.a (mon.0) 2940 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: cluster 2026-03-09T16:04:45.694613+0000 mon.a (mon.0) 2940 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: audit 2026-03-09T16:04:45.703108+0000 mon.c (mon.2) 455 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: audit 2026-03-09T16:04:45.703108+0000 mon.c (mon.2) 455 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: audit 2026-03-09T16:04:45.713520+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:45 vm09 bash[22983]: audit 2026-03-09T16:04:45.713520+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: audit 2026-03-09T16:04:44.686789+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: audit 2026-03-09T16:04:44.686789+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: cluster 2026-03-09T16:04:44.701780+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: cluster 2026-03-09T16:04:44.701780+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: audit 2026-03-09T16:04:44.715294+0000 mon.c (mon.2) 454 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: audit 2026-03-09T16:04:44.715294+0000 mon.c (mon.2) 454 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: cluster 2026-03-09T16:04:44.793972+0000 mgr.y (mgr.14520) 429 : cluster [DBG] pgmap v695: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: cluster 2026-03-09T16:04:44.793972+0000 mgr.y (mgr.14520) 429 : cluster [DBG] pgmap v695: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: cluster 2026-03-09T16:04:45.694613+0000 mon.a (mon.0) 2940 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: cluster 2026-03-09T16:04:45.694613+0000 mon.a (mon.0) 2940 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: audit 2026-03-09T16:04:45.703108+0000 mon.c (mon.2) 455 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: audit 2026-03-09T16:04:45.703108+0000 mon.c (mon.2) 455 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: audit 2026-03-09T16:04:45.713520+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:45 vm01 bash[28152]: audit 2026-03-09T16:04:45.713520+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: audit 2026-03-09T16:04:44.686789+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: audit 2026-03-09T16:04:44.686789+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: cluster 2026-03-09T16:04:44.701780+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: cluster 2026-03-09T16:04:44.701780+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: audit 2026-03-09T16:04:44.715294+0000 mon.c (mon.2) 454 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: audit 2026-03-09T16:04:44.715294+0000 mon.c (mon.2) 454 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: cluster 2026-03-09T16:04:44.793972+0000 mgr.y (mgr.14520) 429 : cluster [DBG] pgmap v695: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: cluster 2026-03-09T16:04:44.793972+0000 mgr.y (mgr.14520) 429 : cluster [DBG] pgmap v695: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: cluster 2026-03-09T16:04:45.694613+0000 mon.a (mon.0) 2940 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: cluster 2026-03-09T16:04:45.694613+0000 mon.a (mon.0) 2940 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: audit 2026-03-09T16:04:45.703108+0000 mon.c (mon.2) 455 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: audit 2026-03-09T16:04:45.703108+0000 mon.c (mon.2) 455 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: audit 2026-03-09T16:04:45.713520+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:45 vm01 bash[20728]: audit 2026-03-09T16:04:45.713520+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:46.883 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:04:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: audit 2026-03-09T16:04:46.634532+0000 mgr.y (mgr.14520) 430 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: audit 2026-03-09T16:04:46.634532+0000 mgr.y (mgr.14520) 430 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: audit 2026-03-09T16:04:46.695385+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: audit 2026-03-09T16:04:46.695385+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: audit 2026-03-09T16:04:46.701750+0000 mon.c (mon.2) 456 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: audit 2026-03-09T16:04:46.701750+0000 mon.c (mon.2) 456 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: cluster 2026-03-09T16:04:46.711503+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: cluster 2026-03-09T16:04:46.711503+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: audit 2026-03-09T16:04:46.721806+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: audit 2026-03-09T16:04:46.721806+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: cluster 2026-03-09T16:04:46.794322+0000 mgr.y (mgr.14520) 431 : cluster [DBG] pgmap v698: 324 pgs: 32 unknown, 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:47 vm09 bash[22983]: cluster 2026-03-09T16:04:46.794322+0000 mgr.y (mgr.14520) 431 : cluster [DBG] pgmap v698: 324 pgs: 32 unknown, 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: audit 2026-03-09T16:04:46.634532+0000 mgr.y (mgr.14520) 430 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: audit 2026-03-09T16:04:46.634532+0000 mgr.y (mgr.14520) 430 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: audit 2026-03-09T16:04:46.695385+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: audit 2026-03-09T16:04:46.695385+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: audit 2026-03-09T16:04:46.701750+0000 mon.c (mon.2) 456 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: audit 2026-03-09T16:04:46.701750+0000 mon.c (mon.2) 456 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: cluster 2026-03-09T16:04:46.711503+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: cluster 2026-03-09T16:04:46.711503+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: audit 2026-03-09T16:04:46.721806+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: audit 2026-03-09T16:04:46.721806+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: cluster 2026-03-09T16:04:46.794322+0000 mgr.y (mgr.14520) 431 : cluster [DBG] pgmap v698: 324 pgs: 32 unknown, 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:47 vm01 bash[28152]: cluster 2026-03-09T16:04:46.794322+0000 mgr.y (mgr.14520) 431 : cluster [DBG] pgmap v698: 324 pgs: 32 unknown, 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: audit 2026-03-09T16:04:46.634532+0000 mgr.y (mgr.14520) 430 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: audit 2026-03-09T16:04:46.634532+0000 mgr.y (mgr.14520) 430 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: audit 2026-03-09T16:04:46.695385+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: audit 2026-03-09T16:04:46.695385+0000 mon.a (mon.0) 2942 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: audit 2026-03-09T16:04:46.701750+0000 mon.c (mon.2) 456 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: audit 2026-03-09T16:04:46.701750+0000 mon.c (mon.2) 456 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: cluster 2026-03-09T16:04:46.711503+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T16:04:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: cluster 2026-03-09T16:04:46.711503+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-09T16:04:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: audit 2026-03-09T16:04:46.721806+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: audit 2026-03-09T16:04:46.721806+0000 mon.a (mon.0) 2944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]: dispatch 2026-03-09T16:04:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: cluster 2026-03-09T16:04:46.794322+0000 mgr.y (mgr.14520) 431 : cluster [DBG] pgmap v698: 324 pgs: 32 unknown, 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:48.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:47 vm01 bash[20728]: cluster 2026-03-09T16:04:46.794322+0000 mgr.y (mgr.14520) 431 : cluster [DBG] pgmap v698: 324 pgs: 32 unknown, 32 creating+peering, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:48 vm01 bash[28152]: audit 2026-03-09T16:04:47.848704+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]': finished 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:48 vm01 bash[28152]: audit 2026-03-09T16:04:47.848704+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]': finished 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:48 vm01 bash[28152]: audit 2026-03-09T16:04:47.854047+0000 mon.c (mon.2) 457 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:48 vm01 bash[28152]: audit 2026-03-09T16:04:47.854047+0000 mon.c (mon.2) 457 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:48 vm01 bash[28152]: cluster 2026-03-09T16:04:47.854945+0000 mon.a (mon.0) 2946 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:48 vm01 bash[28152]: cluster 2026-03-09T16:04:47.854945+0000 mon.a (mon.0) 2946 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:48 vm01 bash[28152]: audit 2026-03-09T16:04:47.860377+0000 mon.a (mon.0) 2947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:48 vm01 bash[28152]: audit 2026-03-09T16:04:47.860377+0000 mon.a (mon.0) 2947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:48 vm01 bash[20728]: audit 2026-03-09T16:04:47.848704+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]': finished 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:48 vm01 bash[20728]: audit 2026-03-09T16:04:47.848704+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]': finished 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:48 vm01 bash[20728]: audit 2026-03-09T16:04:47.854047+0000 mon.c (mon.2) 457 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:49.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:48 vm01 bash[20728]: audit 2026-03-09T16:04:47.854047+0000 mon.c (mon.2) 457 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:49.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:48 vm01 bash[20728]: cluster 2026-03-09T16:04:47.854945+0000 mon.a (mon.0) 2946 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T16:04:49.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:48 vm01 bash[20728]: cluster 2026-03-09T16:04:47.854945+0000 mon.a (mon.0) 2946 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T16:04:49.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:48 vm01 bash[20728]: audit 2026-03-09T16:04:47.860377+0000 mon.a (mon.0) 2947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:49.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:48 vm01 bash[20728]: audit 2026-03-09T16:04:47.860377+0000 mon.a (mon.0) 2947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:49.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:48 vm09 bash[22983]: audit 2026-03-09T16:04:47.848704+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]': finished 2026-03-09T16:04:49.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:48 vm09 bash[22983]: audit 2026-03-09T16:04:47.848704+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_tier","val": "test-rados-api-vm01-59821-89-test-flush"}]': finished 2026-03-09T16:04:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:48 vm09 bash[22983]: audit 2026-03-09T16:04:47.854047+0000 mon.c (mon.2) 457 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:48 vm09 bash[22983]: audit 2026-03-09T16:04:47.854047+0000 mon.c (mon.2) 457 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:48 vm09 bash[22983]: cluster 2026-03-09T16:04:47.854945+0000 mon.a (mon.0) 2946 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T16:04:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:48 vm09 bash[22983]: cluster 2026-03-09T16:04:47.854945+0000 mon.a (mon.0) 2946 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-09T16:04:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:48 vm09 bash[22983]: audit 2026-03-09T16:04:47.860377+0000 mon.a (mon.0) 2947 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:49.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:48 vm09 bash[22983]: audit 2026-03-09T16:04:47.860377+0000 mon.a (mon.0) 2947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: cluster 2026-03-09T16:04:48.794976+0000 mgr.y (mgr.14520) 432 : cluster [DBG] pgmap v700: 324 pgs: 21 unknown, 20 creating+peering, 283 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: cluster 2026-03-09T16:04:48.794976+0000 mgr.y (mgr.14520) 432 : cluster [DBG] pgmap v700: 324 pgs: 21 unknown, 20 creating+peering, 283 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: cluster 2026-03-09T16:04:48.896228+0000 mon.a (mon.0) 2948 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: cluster 2026-03-09T16:04:48.896228+0000 mon.a (mon.0) 2948 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: audit 2026-03-09T16:04:48.899007+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: audit 2026-03-09T16:04:48.899007+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: cluster 2026-03-09T16:04:48.908071+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: cluster 2026-03-09T16:04:48.908071+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: audit 2026-03-09T16:04:48.911619+0000 mon.c (mon.2) 458 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: audit 2026-03-09T16:04:48.911619+0000 mon.c (mon.2) 458 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: audit 2026-03-09T16:04:48.928029+0000 mon.a (mon.0) 2951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:49 vm01 bash[28152]: audit 2026-03-09T16:04:48.928029+0000 mon.a (mon.0) 2951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: cluster 2026-03-09T16:04:48.794976+0000 mgr.y (mgr.14520) 432 : cluster [DBG] pgmap v700: 324 pgs: 21 unknown, 20 creating+peering, 283 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: cluster 2026-03-09T16:04:48.794976+0000 mgr.y (mgr.14520) 432 : cluster [DBG] pgmap v700: 324 pgs: 21 unknown, 20 creating+peering, 283 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: cluster 2026-03-09T16:04:48.896228+0000 mon.a (mon.0) 2948 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: cluster 2026-03-09T16:04:48.896228+0000 mon.a (mon.0) 2948 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: audit 2026-03-09T16:04:48.899007+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: audit 2026-03-09T16:04:48.899007+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: cluster 2026-03-09T16:04:48.908071+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T16:04:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: cluster 2026-03-09T16:04:48.908071+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T16:04:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: audit 2026-03-09T16:04:48.911619+0000 mon.c (mon.2) 458 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: audit 2026-03-09T16:04:48.911619+0000 mon.c (mon.2) 458 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: audit 2026-03-09T16:04:48.928029+0000 mon.a (mon.0) 2951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:50.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:49 vm01 bash[20728]: audit 2026-03-09T16:04:48.928029+0000 mon.a (mon.0) 2951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: cluster 2026-03-09T16:04:48.794976+0000 mgr.y (mgr.14520) 432 : cluster [DBG] pgmap v700: 324 pgs: 21 unknown, 20 creating+peering, 283 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: cluster 2026-03-09T16:04:48.794976+0000 mgr.y (mgr.14520) 432 : cluster [DBG] pgmap v700: 324 pgs: 21 unknown, 20 creating+peering, 283 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:04:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: cluster 2026-03-09T16:04:48.896228+0000 mon.a (mon.0) 2948 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: cluster 2026-03-09T16:04:48.896228+0000 mon.a (mon.0) 2948 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: audit 2026-03-09T16:04:48.899007+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: audit 2026-03-09T16:04:48.899007+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:04:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: cluster 2026-03-09T16:04:48.908071+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T16:04:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: cluster 2026-03-09T16:04:48.908071+0000 mon.a (mon.0) 2950 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-09T16:04:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: audit 2026-03-09T16:04:48.911619+0000 mon.c (mon.2) 458 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: audit 2026-03-09T16:04:48.911619+0000 mon.c (mon.2) 458 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: audit 2026-03-09T16:04:48.928029+0000 mon.a (mon.0) 2951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:49 vm09 bash[22983]: audit 2026-03-09T16:04:48.928029+0000 mon.a (mon.0) 2951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:04:51.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:51 vm09 bash[22983]: audit 2026-03-09T16:04:49.923943+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:51 vm09 bash[22983]: audit 2026-03-09T16:04:49.923943+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:51 vm09 bash[22983]: cluster 2026-03-09T16:04:49.926920+0000 mon.a (mon.0) 2953 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T16:04:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:51 vm09 bash[22983]: cluster 2026-03-09T16:04:49.926920+0000 mon.a (mon.0) 2953 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T16:04:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:51 vm01 bash[28152]: audit 2026-03-09T16:04:49.923943+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:51 vm01 bash[28152]: audit 2026-03-09T16:04:49.923943+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:51 vm01 bash[28152]: cluster 2026-03-09T16:04:49.926920+0000 mon.a (mon.0) 2953 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T16:04:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:51 vm01 bash[28152]: cluster 2026-03-09T16:04:49.926920+0000 mon.a (mon.0) 2953 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T16:04:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:51 vm01 bash[20728]: audit 2026-03-09T16:04:49.923943+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:51 vm01 bash[20728]: audit 2026-03-09T16:04:49.923943+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:04:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:51 vm01 bash[20728]: cluster 2026-03-09T16:04:49.926920+0000 mon.a (mon.0) 2953 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T16:04:51.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:51 vm01 bash[20728]: cluster 2026-03-09T16:04:49.926920+0000 mon.a (mon.0) 2953 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: cluster 2026-03-09T16:04:50.795353+0000 mgr.y (mgr.14520) 433 : cluster [DBG] pgmap v703: 324 pgs: 324 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: cluster 2026-03-09T16:04:50.795353+0000 mgr.y (mgr.14520) 433 : cluster [DBG] pgmap v703: 324 pgs: 324 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: cluster 2026-03-09T16:04:51.103468+0000 mon.a (mon.0) 2954 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: cluster 2026-03-09T16:04:51.103468+0000 mon.a (mon.0) 2954 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: audit 2026-03-09T16:04:51.196214+0000 mon.c (mon.2) 459 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: audit 2026-03-09T16:04:51.196214+0000 mon.c (mon.2) 459 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: audit 2026-03-09T16:04:51.196549+0000 mon.a (mon.0) 2955 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: audit 2026-03-09T16:04:51.196549+0000 mon.a (mon.0) 2955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: audit 2026-03-09T16:04:51.196850+0000 mon.c (mon.2) 460 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: audit 2026-03-09T16:04:51.196850+0000 mon.c (mon.2) 460 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: audit 2026-03-09T16:04:51.197084+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:52 vm09 bash[22983]: audit 2026-03-09T16:04:51.197084+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: cluster 2026-03-09T16:04:50.795353+0000 mgr.y (mgr.14520) 433 : cluster [DBG] pgmap v703: 324 pgs: 324 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: cluster 2026-03-09T16:04:50.795353+0000 mgr.y (mgr.14520) 433 : cluster [DBG] pgmap v703: 324 pgs: 324 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: cluster 2026-03-09T16:04:51.103468+0000 mon.a (mon.0) 2954 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: cluster 2026-03-09T16:04:51.103468+0000 mon.a (mon.0) 2954 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: audit 2026-03-09T16:04:51.196214+0000 mon.c (mon.2) 459 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: audit 2026-03-09T16:04:51.196214+0000 mon.c (mon.2) 459 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: audit 2026-03-09T16:04:51.196549+0000 mon.a (mon.0) 2955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: audit 2026-03-09T16:04:51.196549+0000 mon.a (mon.0) 2955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: audit 2026-03-09T16:04:51.196850+0000 mon.c (mon.2) 460 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: audit 2026-03-09T16:04:51.196850+0000 mon.c (mon.2) 460 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: audit 2026-03-09T16:04:51.197084+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:52 vm01 bash[28152]: audit 2026-03-09T16:04:51.197084+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: cluster 2026-03-09T16:04:50.795353+0000 mgr.y (mgr.14520) 433 : cluster [DBG] pgmap v703: 324 pgs: 324 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: cluster 2026-03-09T16:04:50.795353+0000 mgr.y (mgr.14520) 433 : cluster [DBG] pgmap v703: 324 pgs: 324 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: cluster 2026-03-09T16:04:51.103468+0000 mon.a (mon.0) 2954 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: cluster 2026-03-09T16:04:51.103468+0000 mon.a (mon.0) 2954 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: audit 2026-03-09T16:04:51.196214+0000 mon.c (mon.2) 459 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: audit 2026-03-09T16:04:51.196214+0000 mon.c (mon.2) 459 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: audit 2026-03-09T16:04:51.196549+0000 mon.a (mon.0) 2955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: audit 2026-03-09T16:04:51.196549+0000 mon.a (mon.0) 2955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: audit 2026-03-09T16:04:51.196850+0000 mon.c (mon.2) 460 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: audit 2026-03-09T16:04:51.196850+0000 mon.c (mon.2) 460 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: audit 2026-03-09T16:04:51.197084+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:52 vm01 bash[20728]: audit 2026-03-09T16:04:51.197084+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-87"}]: dispatch 2026-03-09T16:04:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:53 vm01 bash[28152]: cluster 2026-03-09T16:04:52.109782+0000 mon.a (mon.0) 2957 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T16:04:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:53 vm01 bash[28152]: cluster 2026-03-09T16:04:52.109782+0000 mon.a (mon.0) 2957 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T16:04:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:53 vm01 bash[28152]: cluster 2026-03-09T16:04:52.795648+0000 mgr.y (mgr.14520) 434 : cluster [DBG] pgmap v706: 260 pgs: 260 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:53 vm01 bash[28152]: cluster 2026-03-09T16:04:52.795648+0000 mgr.y (mgr.14520) 434 : cluster [DBG] pgmap v706: 260 pgs: 260 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:53.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:04:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:04:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:04:53.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:53 vm01 bash[20728]: cluster 2026-03-09T16:04:52.109782+0000 mon.a (mon.0) 2957 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T16:04:53.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:53 vm01 bash[20728]: cluster 2026-03-09T16:04:52.109782+0000 mon.a (mon.0) 2957 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T16:04:53.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:53 vm01 bash[20728]: cluster 2026-03-09T16:04:52.795648+0000 mgr.y (mgr.14520) 434 : cluster [DBG] pgmap v706: 260 pgs: 260 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:53.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:53 vm01 bash[20728]: cluster 2026-03-09T16:04:52.795648+0000 mgr.y (mgr.14520) 434 : cluster [DBG] pgmap v706: 260 pgs: 260 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:53.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:53 vm09 bash[22983]: cluster 2026-03-09T16:04:52.109782+0000 mon.a (mon.0) 2957 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T16:04:53.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:53 vm09 bash[22983]: cluster 2026-03-09T16:04:52.109782+0000 mon.a (mon.0) 2957 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-09T16:04:53.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:53 vm09 bash[22983]: cluster 2026-03-09T16:04:52.795648+0000 mgr.y (mgr.14520) 434 : cluster [DBG] pgmap v706: 260 pgs: 260 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:53 vm09 bash[22983]: cluster 2026-03-09T16:04:52.795648+0000 mgr.y (mgr.14520) 434 : cluster [DBG] pgmap v706: 260 pgs: 260 active+clean; 8.3 MiB data, 933 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:54 vm01 bash[28152]: cluster 2026-03-09T16:04:53.121930+0000 mon.a (mon.0) 2958 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T16:04:54.426 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:54 vm01 bash[28152]: cluster 2026-03-09T16:04:53.121930+0000 mon.a (mon.0) 2958 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:54 vm01 bash[28152]: audit 2026-03-09T16:04:53.130568+0000 mon.c (mon.2) 461 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:54 vm01 bash[28152]: audit 2026-03-09T16:04:53.130568+0000 mon.c (mon.2) 461 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:54 vm01 bash[28152]: audit 2026-03-09T16:04:53.132319+0000 mon.a (mon.0) 2959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:54 vm01 bash[28152]: audit 2026-03-09T16:04:53.132319+0000 mon.a (mon.0) 2959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:54 vm01 bash[28152]: cluster 2026-03-09T16:04:53.896755+0000 mon.a (mon.0) 2960 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:54 vm01 bash[28152]: cluster 2026-03-09T16:04:53.896755+0000 mon.a (mon.0) 2960 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:54 vm01 bash[20728]: cluster 2026-03-09T16:04:53.121930+0000 mon.a (mon.0) 2958 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:54 vm01 bash[20728]: cluster 2026-03-09T16:04:53.121930+0000 mon.a (mon.0) 2958 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:54 vm01 bash[20728]: audit 2026-03-09T16:04:53.130568+0000 mon.c (mon.2) 461 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:54 vm01 bash[20728]: audit 2026-03-09T16:04:53.130568+0000 mon.c (mon.2) 461 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:54 vm01 bash[20728]: audit 2026-03-09T16:04:53.132319+0000 mon.a (mon.0) 2959 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:54 vm01 bash[20728]: audit 2026-03-09T16:04:53.132319+0000 mon.a (mon.0) 2959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:54 vm01 bash[20728]: cluster 2026-03-09T16:04:53.896755+0000 mon.a (mon.0) 2960 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:54.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:54 vm01 bash[20728]: cluster 2026-03-09T16:04:53.896755+0000 mon.a (mon.0) 2960 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:54 vm09 bash[22983]: cluster 2026-03-09T16:04:53.121930+0000 mon.a (mon.0) 2958 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T16:04:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:54 vm09 bash[22983]: cluster 2026-03-09T16:04:53.121930+0000 mon.a (mon.0) 2958 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in 2026-03-09T16:04:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:54 vm09 bash[22983]: audit 2026-03-09T16:04:53.130568+0000 mon.c (mon.2) 461 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:54 vm09 bash[22983]: audit 2026-03-09T16:04:53.130568+0000 mon.c (mon.2) 461 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:54 vm09 bash[22983]: audit 2026-03-09T16:04:53.132319+0000 mon.a (mon.0) 2959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:54 vm09 bash[22983]: audit 2026-03-09T16:04:53.132319+0000 mon.a (mon.0) 2959 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:54 vm09 bash[22983]: cluster 2026-03-09T16:04:53.896755+0000 mon.a (mon.0) 2960 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:54 vm09 bash[22983]: cluster 2026-03-09T16:04:53.896755+0000 mon.a (mon.0) 2960 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:04:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: audit 2026-03-09T16:04:54.121519+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: audit 2026-03-09T16:04:54.121519+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: cluster 2026-03-09T16:04:54.125122+0000 mon.a (mon.0) 2962 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T16:04:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: cluster 2026-03-09T16:04:54.125122+0000 mon.a (mon.0) 2962 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T16:04:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: audit 2026-03-09T16:04:54.145854+0000 mon.c (mon.2) 462 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: audit 2026-03-09T16:04:54.145854+0000 mon.c (mon.2) 462 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: audit 2026-03-09T16:04:54.164590+0000 mon.c (mon.2) 463 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: audit 2026-03-09T16:04:54.164590+0000 mon.c (mon.2) 463 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: audit 2026-03-09T16:04:54.164832+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: audit 2026-03-09T16:04:54.164832+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: cluster 2026-03-09T16:04:54.796293+0000 mgr.y (mgr.14520) 435 : cluster [DBG] pgmap v709: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: cluster 2026-03-09T16:04:54.796293+0000 mgr.y (mgr.14520) 435 : cluster [DBG] pgmap v709: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: audit 2026-03-09T16:04:55.125527+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: audit 2026-03-09T16:04:55.125527+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: cluster 2026-03-09T16:04:55.134472+0000 mon.a (mon.0) 2965 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:55 vm01 bash[28152]: cluster 2026-03-09T16:04:55.134472+0000 mon.a (mon.0) 2965 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: audit 2026-03-09T16:04:54.121519+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: audit 2026-03-09T16:04:54.121519+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: cluster 2026-03-09T16:04:54.125122+0000 mon.a (mon.0) 2962 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: cluster 2026-03-09T16:04:54.125122+0000 mon.a (mon.0) 2962 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: audit 2026-03-09T16:04:54.145854+0000 mon.c (mon.2) 462 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: audit 2026-03-09T16:04:54.145854+0000 mon.c (mon.2) 462 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: audit 2026-03-09T16:04:54.164590+0000 mon.c (mon.2) 463 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: audit 2026-03-09T16:04:54.164590+0000 mon.c (mon.2) 463 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: audit 2026-03-09T16:04:54.164832+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: audit 2026-03-09T16:04:54.164832+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: cluster 2026-03-09T16:04:54.796293+0000 mgr.y (mgr.14520) 435 : cluster [DBG] pgmap v709: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: cluster 2026-03-09T16:04:54.796293+0000 mgr.y (mgr.14520) 435 : cluster [DBG] pgmap v709: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: audit 2026-03-09T16:04:55.125527+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: audit 2026-03-09T16:04:55.125527+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: cluster 2026-03-09T16:04:55.134472+0000 mon.a (mon.0) 2965 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T16:04:55.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:55 vm01 bash[20728]: cluster 2026-03-09T16:04:55.134472+0000 mon.a (mon.0) 2965 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: audit 2026-03-09T16:04:54.121519+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: audit 2026-03-09T16:04:54.121519+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: cluster 2026-03-09T16:04:54.125122+0000 mon.a (mon.0) 2962 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: cluster 2026-03-09T16:04:54.125122+0000 mon.a (mon.0) 2962 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: audit 2026-03-09T16:04:54.145854+0000 mon.c (mon.2) 462 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: audit 2026-03-09T16:04:54.145854+0000 mon.c (mon.2) 462 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: audit 2026-03-09T16:04:54.164590+0000 mon.c (mon.2) 463 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: audit 2026-03-09T16:04:54.164590+0000 mon.c (mon.2) 463 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: audit 2026-03-09T16:04:54.164832+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: audit 2026-03-09T16:04:54.164832+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: cluster 2026-03-09T16:04:54.796293+0000 mgr.y (mgr.14520) 435 : cluster [DBG] pgmap v709: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: cluster 2026-03-09T16:04:54.796293+0000 mgr.y (mgr.14520) 435 : cluster [DBG] pgmap v709: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: audit 2026-03-09T16:04:55.125527+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: audit 2026-03-09T16:04:55.125527+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: cluster 2026-03-09T16:04:55.134472+0000 mon.a (mon.0) 2965 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T16:04:55.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:55 vm09 bash[22983]: cluster 2026-03-09T16:04:55.134472+0000 mon.a (mon.0) 2965 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in 2026-03-09T16:04:57.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:04:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:04:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:57 vm09 bash[22983]: cluster 2026-03-09T16:04:56.183892+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T16:04:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:57 vm09 bash[22983]: cluster 2026-03-09T16:04:56.183892+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T16:04:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:57 vm09 bash[22983]: audit 2026-03-09T16:04:56.640173+0000 mgr.y (mgr.14520) 436 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:57 vm09 bash[22983]: audit 2026-03-09T16:04:56.640173+0000 mgr.y (mgr.14520) 436 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:57 vm09 bash[22983]: cluster 2026-03-09T16:04:56.796637+0000 mgr.y (mgr.14520) 437 : cluster [DBG] pgmap v712: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:57 vm09 bash[22983]: cluster 2026-03-09T16:04:56.796637+0000 mgr.y (mgr.14520) 437 : 
cluster [DBG] pgmap v712: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:57 vm01 bash[28152]: cluster 2026-03-09T16:04:56.183892+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:57 vm01 bash[28152]: cluster 2026-03-09T16:04:56.183892+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:57 vm01 bash[28152]: audit 2026-03-09T16:04:56.640173+0000 mgr.y (mgr.14520) 436 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:57 vm01 bash[28152]: audit 2026-03-09T16:04:56.640173+0000 mgr.y (mgr.14520) 436 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:57 vm01 bash[28152]: cluster 2026-03-09T16:04:56.796637+0000 mgr.y (mgr.14520) 437 : cluster [DBG] pgmap v712: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:57 vm01 bash[28152]: cluster 2026-03-09T16:04:56.796637+0000 mgr.y (mgr.14520) 437 : cluster [DBG] pgmap v712: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:57 vm01 bash[20728]: cluster 2026-03-09T16:04:56.183892+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:57 vm01 bash[20728]: cluster 2026-03-09T16:04:56.183892+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:57 vm01 bash[20728]: audit 2026-03-09T16:04:56.640173+0000 mgr.y (mgr.14520) 436 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:57 vm01 bash[20728]: audit 2026-03-09T16:04:56.640173+0000 mgr.y (mgr.14520) 436 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:57 vm01 bash[20728]: cluster 2026-03-09T16:04:56.796637+0000 mgr.y (mgr.14520) 437 : cluster [DBG] pgmap v712: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:57.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:57 vm01 bash[20728]: cluster 2026-03-09T16:04:56.796637+0000 mgr.y (mgr.14520) 437 : cluster [DBG] pgmap v712: 292 pgs: 2 creating+activating, 23 creating+peering, 267 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:04:58.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:58 vm09 bash[22983]: cluster 2026-03-09T16:04:57.197241+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in 2026-03-09T16:04:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:58 vm09 bash[22983]: cluster 2026-03-09T16:04:57.197241+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in 2026-03-09T16:04:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:58 vm09 bash[22983]: audit 2026-03-09T16:04:57.232858+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:58 vm09 bash[22983]: audit 2026-03-09T16:04:57.232858+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:58 vm09 bash[22983]: audit 2026-03-09T16:04:57.233118+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:58 vm09 bash[22983]: audit 2026-03-09T16:04:57.233118+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:58 vm09 bash[22983]: audit 2026-03-09T16:04:57.233428+0000 mon.c (mon.2) 465 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:58 vm09 bash[22983]: audit 2026-03-09T16:04:57.233428+0000 mon.c (mon.2) 465 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:58 vm09 bash[22983]: audit 2026-03-09T16:04:57.233628+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:58.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:58 vm09 bash[22983]: audit 2026-03-09T16:04:57.233628+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:58 vm01 bash[28152]: cluster 2026-03-09T16:04:57.197241+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:58 vm01 bash[28152]: cluster 2026-03-09T16:04:57.197241+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:58 vm01 bash[28152]: audit 2026-03-09T16:04:57.232858+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:58 vm01 bash[28152]: audit 2026-03-09T16:04:57.232858+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:58 vm01 bash[28152]: audit 2026-03-09T16:04:57.233118+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:58 vm01 bash[28152]: audit 2026-03-09T16:04:57.233118+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:58 vm01 bash[28152]: audit 2026-03-09T16:04:57.233428+0000 mon.c (mon.2) 465 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:58 vm01 bash[28152]: audit 2026-03-09T16:04:57.233428+0000 mon.c (mon.2) 465 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:58 vm01 bash[28152]: audit 2026-03-09T16:04:57.233628+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:58 vm01 bash[28152]: audit 2026-03-09T16:04:57.233628+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:58 vm01 bash[20728]: cluster 2026-03-09T16:04:57.197241+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:58 vm01 bash[20728]: cluster 2026-03-09T16:04:57.197241+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:58 vm01 bash[20728]: audit 2026-03-09T16:04:57.232858+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:58 vm01 bash[20728]: audit 2026-03-09T16:04:57.232858+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:58 vm01 bash[20728]: audit 2026-03-09T16:04:57.233118+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:58 vm01 bash[20728]: audit 2026-03-09T16:04:57.233118+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:04:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:58 vm01 bash[20728]: audit 2026-03-09T16:04:57.233428+0000 mon.c (mon.2) 465 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:58 vm01 bash[20728]: audit 2026-03-09T16:04:57.233428+0000 mon.c (mon.2) 465 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:58 vm01 bash[20728]: audit 2026-03-09T16:04:57.233628+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:58.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:58 vm01 bash[20728]: audit 2026-03-09T16:04:57.233628+0000 mon.a (mon.0) 2969 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-90"}]: dispatch 2026-03-09T16:04:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:59 vm09 bash[22983]: cluster 2026-03-09T16:04:58.202320+0000 mon.a (mon.0) 2970 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T16:04:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:59 vm09 bash[22983]: cluster 2026-03-09T16:04:58.202320+0000 mon.a (mon.0) 2970 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T16:04:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:59 vm09 bash[22983]: cluster 2026-03-09T16:04:58.797105+0000 mgr.y (mgr.14520) 438 : cluster [DBG] pgmap v715: 260 pgs: 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s wr, 1 op/s 2026-03-09T16:04:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:59 vm09 bash[22983]: cluster 2026-03-09T16:04:58.797105+0000 mgr.y (mgr.14520) 438 : cluster [DBG] pgmap v715: 260 pgs: 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s wr, 1 op/s 2026-03-09T16:04:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:59 vm09 bash[22983]: cluster 2026-03-09T16:04:59.209907+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T16:04:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:59 vm09 bash[22983]: cluster 2026-03-09T16:04:59.209907+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T16:04:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:59 vm09 bash[22983]: audit 2026-03-09T16:04:59.212039+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:59.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:04:59 vm09 bash[22983]: audit 2026-03-09T16:04:59.212039+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:59 vm01 bash[28152]: cluster 2026-03-09T16:04:58.202320+0000 mon.a (mon.0) 2970 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T16:04:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:59 vm01 bash[28152]: cluster 2026-03-09T16:04:58.202320+0000 mon.a (mon.0) 2970 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T16:04:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:59 vm01 bash[28152]: cluster 2026-03-09T16:04:58.797105+0000 mgr.y (mgr.14520) 438 : cluster [DBG] pgmap v715: 260 pgs: 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s wr, 1 op/s 2026-03-09T16:04:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:59 vm01 bash[28152]: cluster 2026-03-09T16:04:58.797105+0000 mgr.y (mgr.14520) 438 : cluster [DBG] pgmap v715: 260 pgs: 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s wr, 1 op/s 2026-03-09T16:04:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:59 vm01 bash[28152]: cluster 2026-03-09T16:04:59.209907+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T16:04:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:59 vm01 bash[28152]: cluster 2026-03-09T16:04:59.209907+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T16:04:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:59 vm01 bash[28152]: audit 2026-03-09T16:04:59.212039+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:59.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:04:59 vm01 bash[28152]: audit 2026-03-09T16:04:59.212039+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:59.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:59 vm01 bash[20728]: cluster 2026-03-09T16:04:58.202320+0000 mon.a (mon.0) 2970 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T16:04:59.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:59 vm01 bash[20728]: cluster 2026-03-09T16:04:58.202320+0000 mon.a (mon.0) 2970 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in 2026-03-09T16:04:59.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:59 vm01 bash[20728]: cluster 2026-03-09T16:04:58.797105+0000 mgr.y (mgr.14520) 438 : cluster [DBG] pgmap v715: 260 pgs: 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s wr, 1 op/s 2026-03-09T16:04:59.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:59 vm01 bash[20728]: cluster 2026-03-09T16:04:58.797105+0000 mgr.y (mgr.14520) 438 : cluster [DBG] pgmap v715: 260 pgs: 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s wr, 1 op/s 2026-03-09T16:04:59.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:59 vm01 bash[20728]: cluster 2026-03-09T16:04:59.209907+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T16:04:59.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:59 vm01 bash[20728]: cluster 2026-03-09T16:04:59.209907+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in 2026-03-09T16:04:59.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:59 vm01 bash[20728]: audit 2026-03-09T16:04:59.212039+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:04:59.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:04:59 vm01 bash[20728]: audit 2026-03-09T16:04:59.212039+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:00 vm09 bash[22983]: audit 2026-03-09T16:04:59.217968+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:00 vm09 bash[22983]: audit 2026-03-09T16:04:59.217968+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:00 vm09 bash[22983]: audit 2026-03-09T16:04:59.334321+0000 mon.a (mon.0) 2973 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:00 vm09 bash[22983]: audit 2026-03-09T16:04:59.334321+0000 mon.a (mon.0) 2973 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:00 vm09 bash[22983]: audit 2026-03-09T16:05:00.203116+0000 mon.a (mon.0) 2974 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:00 vm09 bash[22983]: audit 2026-03-09T16:05:00.203116+0000 mon.a (mon.0) 2974 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:00 vm09 bash[22983]: audit 2026-03-09T16:05:00.216425+0000 mon.c (mon.2) 467 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:00 vm09 bash[22983]: audit 2026-03-09T16:05:00.216425+0000 mon.c (mon.2) 467 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:00 vm09 bash[22983]: cluster 2026-03-09T16:05:00.218247+0000 mon.a (mon.0) 2975 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T16:05:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:00 vm09 bash[22983]: cluster 2026-03-09T16:05:00.218247+0000 mon.a (mon.0) 2975 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:00 vm01 bash[28152]: audit 2026-03-09T16:04:59.217968+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:00 vm01 bash[28152]: audit 2026-03-09T16:04:59.217968+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:00 vm01 bash[28152]: audit 2026-03-09T16:04:59.334321+0000 mon.a (mon.0) 2973 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:00 vm01 bash[28152]: audit 2026-03-09T16:04:59.334321+0000 mon.a (mon.0) 2973 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:00 vm01 bash[28152]: audit 2026-03-09T16:05:00.203116+0000 mon.a (mon.0) 2974 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:00 vm01 bash[28152]: audit 2026-03-09T16:05:00.203116+0000 mon.a (mon.0) 2974 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:00 vm01 bash[28152]: audit 2026-03-09T16:05:00.216425+0000 mon.c (mon.2) 467 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:00 vm01 bash[28152]: audit 2026-03-09T16:05:00.216425+0000 mon.c (mon.2) 467 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:00 vm01 bash[28152]: cluster 2026-03-09T16:05:00.218247+0000 mon.a (mon.0) 2975 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:00 vm01 bash[28152]: cluster 2026-03-09T16:05:00.218247+0000 mon.a (mon.0) 2975 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:00 vm01 bash[20728]: audit 2026-03-09T16:04:59.217968+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:00 vm01 bash[20728]: audit 2026-03-09T16:04:59.217968+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:00 vm01 bash[20728]: audit 2026-03-09T16:04:59.334321+0000 mon.a (mon.0) 2973 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:00 vm01 bash[20728]: audit 2026-03-09T16:04:59.334321+0000 mon.a (mon.0) 2973 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:00 vm01 bash[20728]: audit 2026-03-09T16:05:00.203116+0000 mon.a (mon.0) 2974 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:00 vm01 bash[20728]: audit 2026-03-09T16:05:00.203116+0000 mon.a (mon.0) 2974 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:00 vm01 bash[20728]: audit 2026-03-09T16:05:00.216425+0000 mon.c (mon.2) 467 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:00 vm01 bash[20728]: audit 2026-03-09T16:05:00.216425+0000 mon.c (mon.2) 467 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:00 vm01 bash[20728]: cluster 2026-03-09T16:05:00.218247+0000 mon.a (mon.0) 2975 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T16:05:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:00 vm01 bash[20728]: cluster 2026-03-09T16:05:00.218247+0000 mon.a (mon.0) 2975 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in 2026-03-09T16:05:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:01 vm09 bash[22983]: cluster 2026-03-09T16:05:00.797582+0000 mgr.y (mgr.14520) 439 : cluster [DBG] pgmap v718: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:05:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:01 vm09 bash[22983]: cluster 2026-03-09T16:05:00.797582+0000 mgr.y (mgr.14520) 439 : cluster [DBG] pgmap v718: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:05:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:01 vm09 bash[22983]: cluster 2026-03-09T16:05:01.210791+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T16:05:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:01 vm09 bash[22983]: cluster 2026-03-09T16:05:01.210791+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T16:05:01.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:01 vm01 bash[28152]: cluster 2026-03-09T16:05:00.797582+0000 mgr.y (mgr.14520) 439 : cluster [DBG] pgmap v718: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:05:01.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:01 vm01 bash[28152]: cluster 2026-03-09T16:05:00.797582+0000 mgr.y (mgr.14520) 439 : cluster [DBG] pgmap v718: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:05:01.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:01 vm01 bash[28152]: cluster 2026-03-09T16:05:01.210791+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T16:05:01.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:01 vm01 bash[28152]: cluster 2026-03-09T16:05:01.210791+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T16:05:01.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:01 vm01 bash[20728]: cluster 2026-03-09T16:05:00.797582+0000 mgr.y (mgr.14520) 439 : cluster [DBG] pgmap v718: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:05:01.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:01 vm01 bash[20728]: cluster 2026-03-09T16:05:00.797582+0000 mgr.y (mgr.14520) 439 : cluster [DBG] pgmap v718: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:05:01.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:01 vm01 bash[20728]: cluster 2026-03-09T16:05:01.210791+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T16:05:01.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:01 vm01 bash[20728]: cluster 
2026-03-09T16:05:01.210791+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in 2026-03-09T16:05:03.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:05:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:05:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: cluster 2026-03-09T16:05:02.213346+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: cluster 2026-03-09T16:05:02.213346+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: audit 2026-03-09T16:05:02.272068+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: audit 2026-03-09T16:05:02.272068+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: audit 2026-03-09T16:05:02.272493+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: audit 2026-03-09T16:05:02.272493+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: audit 2026-03-09T16:05:02.272890+0000 mon.c (mon.2) 469 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: audit 2026-03-09T16:05:02.272890+0000 mon.c (mon.2) 469 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: audit 2026-03-09T16:05:02.273080+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: audit 2026-03-09T16:05:02.273080+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: cluster 2026-03-09T16:05:02.797924+0000 mgr.y (mgr.14520) 440 : cluster [DBG] pgmap v721: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:03 vm09 bash[22983]: cluster 2026-03-09T16:05:02.797924+0000 mgr.y (mgr.14520) 440 : cluster [DBG] pgmap v721: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: cluster 2026-03-09T16:05:02.213346+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: cluster 2026-03-09T16:05:02.213346+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: audit 2026-03-09T16:05:02.272068+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: audit 2026-03-09T16:05:02.272068+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: audit 2026-03-09T16:05:02.272493+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: audit 2026-03-09T16:05:02.272493+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: audit 2026-03-09T16:05:02.272890+0000 mon.c (mon.2) 469 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: audit 2026-03-09T16:05:02.272890+0000 mon.c (mon.2) 469 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: audit 2026-03-09T16:05:02.273080+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: audit 2026-03-09T16:05:02.273080+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: cluster 2026-03-09T16:05:02.797924+0000 mgr.y (mgr.14520) 440 : cluster [DBG] pgmap v721: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:03 vm01 bash[28152]: cluster 2026-03-09T16:05:02.797924+0000 mgr.y (mgr.14520) 440 : cluster [DBG] pgmap v721: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: cluster 2026-03-09T16:05:02.213346+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T16:05:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: cluster 2026-03-09T16:05:02.213346+0000 mon.a (mon.0) 2977 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in 2026-03-09T16:05:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: audit 2026-03-09T16:05:02.272068+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: audit 2026-03-09T16:05:02.272068+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: audit 2026-03-09T16:05:02.272493+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: audit 2026-03-09T16:05:02.272493+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: audit 2026-03-09T16:05:02.272890+0000 mon.c (mon.2) 469 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: audit 2026-03-09T16:05:02.272890+0000 mon.c (mon.2) 469 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: audit 2026-03-09T16:05:02.273080+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: audit 2026-03-09T16:05:02.273080+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-92"}]: dispatch 2026-03-09T16:05:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: cluster 2026-03-09T16:05:02.797924+0000 mgr.y (mgr.14520) 440 : cluster [DBG] pgmap v721: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:03 vm01 bash[20728]: cluster 2026-03-09T16:05:02.797924+0000 mgr.y (mgr.14520) 440 : cluster [DBG] pgmap v721: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:04 vm09 bash[22983]: cluster 2026-03-09T16:05:03.227384+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T16:05:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:04 vm09 bash[22983]: cluster 2026-03-09T16:05:03.227384+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T16:05:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:04 vm01 bash[28152]: cluster 2026-03-09T16:05:03.227384+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T16:05:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:04 vm01 bash[28152]: cluster 2026-03-09T16:05:03.227384+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T16:05:04.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:04 vm01 bash[20728]: cluster 2026-03-09T16:05:03.227384+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T16:05:04.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:04 vm01 bash[20728]: cluster 2026-03-09T16:05:03.227384+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in 2026-03-09T16:05:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:05 vm09 bash[22983]: cluster 2026-03-09T16:05:04.233901+0000 mon.a (mon.0) 2981 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T16:05:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:05 vm09 bash[22983]: cluster 2026-03-09T16:05:04.233901+0000 mon.a (mon.0) 2981 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T16:05:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:05 vm09 bash[22983]: audit 2026-03-09T16:05:04.236909+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:05 vm09 bash[22983]: audit 2026-03-09T16:05:04.236909+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:05 vm09 bash[22983]: audit 2026-03-09T16:05:04.237191+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:05 vm09 bash[22983]: audit 2026-03-09T16:05:04.237191+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:05 vm09 bash[22983]: cluster 2026-03-09T16:05:04.798265+0000 mgr.y (mgr.14520) 441 : cluster [DBG] pgmap v724: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:05 vm09 bash[22983]: cluster 2026-03-09T16:05:04.798265+0000 mgr.y (mgr.14520) 441 : cluster [DBG] pgmap v724: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:05 vm01 bash[28152]: cluster 2026-03-09T16:05:04.233901+0000 mon.a (mon.0) 2981 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T16:05:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:05 vm01 bash[28152]: cluster 2026-03-09T16:05:04.233901+0000 mon.a (mon.0) 2981 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T16:05:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:05 vm01 bash[28152]: audit 2026-03-09T16:05:04.236909+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:05 vm01 bash[28152]: audit 2026-03-09T16:05:04.236909+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:05 vm01 bash[28152]: audit 2026-03-09T16:05:04.237191+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:05 vm01 bash[28152]: audit 2026-03-09T16:05:04.237191+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:05 vm01 bash[28152]: cluster 2026-03-09T16:05:04.798265+0000 mgr.y (mgr.14520) 441 : cluster [DBG] pgmap v724: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:05 vm01 bash[28152]: cluster 2026-03-09T16:05:04.798265+0000 mgr.y (mgr.14520) 441 : cluster [DBG] pgmap v724: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:05.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:05 vm01 bash[20728]: cluster 2026-03-09T16:05:04.233901+0000 mon.a (mon.0) 2981 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T16:05:05.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:05 vm01 bash[20728]: cluster 2026-03-09T16:05:04.233901+0000 mon.a (mon.0) 2981 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-09T16:05:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:05 vm01 bash[20728]: audit 2026-03-09T16:05:04.236909+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:05 vm01 bash[20728]: audit 2026-03-09T16:05:04.236909+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:05 vm01 bash[20728]: audit 2026-03-09T16:05:04.237191+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:05 vm01 bash[20728]: audit 2026-03-09T16:05:04.237191+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:05 vm01 bash[20728]: cluster 2026-03-09T16:05:04.798265+0000 mgr.y (mgr.14520) 441 : cluster [DBG] pgmap v724: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:05 vm01 bash[20728]: cluster 2026-03-09T16:05:04.798265+0000 mgr.y (mgr.14520) 441 : cluster [DBG] pgmap v724: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:06 vm09 bash[22983]: cluster 2026-03-09T16:05:05.231991+0000 mon.a (mon.0) 2983 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:06 vm09 bash[22983]: cluster 2026-03-09T16:05:05.231991+0000 mon.a (mon.0) 2983 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:06 vm09 bash[22983]: audit 2026-03-09T16:05:05.234515+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:06 vm09 bash[22983]: audit 2026-03-09T16:05:05.234515+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:06 vm09 bash[22983]: cluster 2026-03-09T16:05:05.238639+0000 mon.a (mon.0) 2985 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T16:05:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:06 vm09 bash[22983]: cluster 2026-03-09T16:05:05.238639+0000 mon.a (mon.0) 2985 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T16:05:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:06 vm09 bash[22983]: audit 2026-03-09T16:05:05.277274+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:06 vm09 bash[22983]: audit 2026-03-09T16:05:05.277274+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:06 vm09 bash[22983]: cluster 2026-03-09T16:05:06.241408+0000 mon.a (mon.0) 2986 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T16:05:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:06 vm09 bash[22983]: cluster 2026-03-09T16:05:06.241408+0000 mon.a (mon.0) 2986 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T16:05:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:06 vm01 bash[28152]: cluster 2026-03-09T16:05:05.231991+0000 mon.a (mon.0) 2983 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:06 vm01 bash[28152]: cluster 2026-03-09T16:05:05.231991+0000 mon.a (mon.0) 2983 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:06 vm01 bash[28152]: audit 2026-03-09T16:05:05.234515+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:06 vm01 bash[28152]: audit 2026-03-09T16:05:05.234515+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:06 vm01 bash[28152]: cluster 2026-03-09T16:05:05.238639+0000 mon.a (mon.0) 2985 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T16:05:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:06 vm01 bash[28152]: cluster 2026-03-09T16:05:05.238639+0000 mon.a (mon.0) 2985 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T16:05:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:06 vm01 bash[28152]: audit 2026-03-09T16:05:05.277274+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:06 vm01 bash[28152]: audit 2026-03-09T16:05:05.277274+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:06 vm01 bash[28152]: cluster 2026-03-09T16:05:06.241408+0000 mon.a (mon.0) 2986 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T16:05:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:06 vm01 bash[28152]: cluster 2026-03-09T16:05:06.241408+0000 mon.a (mon.0) 2986 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T16:05:06.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:06 vm01 bash[20728]: cluster 2026-03-09T16:05:05.231991+0000 mon.a (mon.0) 2983 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:06 vm01 bash[20728]: cluster 2026-03-09T16:05:05.231991+0000 mon.a (mon.0) 2983 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:06 vm01 bash[20728]: audit 2026-03-09T16:05:05.234515+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:06 vm01 bash[20728]: audit 2026-03-09T16:05:05.234515+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:06 vm01 bash[20728]: cluster 2026-03-09T16:05:05.238639+0000 mon.a (mon.0) 2985 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T16:05:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:06 vm01 bash[20728]: cluster 2026-03-09T16:05:05.238639+0000 mon.a (mon.0) 2985 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-09T16:05:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:06 vm01 bash[20728]: audit 2026-03-09T16:05:05.277274+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:06 vm01 bash[20728]: audit 2026-03-09T16:05:05.277274+0000 mon.c (mon.2) 471 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:06 vm01 bash[20728]: cluster 2026-03-09T16:05:06.241408+0000 mon.a (mon.0) 2986 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T16:05:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:06 vm01 bash[20728]: cluster 2026-03-09T16:05:06.241408+0000 mon.a (mon.0) 2986 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-09T16:05:07.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:05:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:05:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:07 vm09 bash[22983]: audit 2026-03-09T16:05:06.648906+0000 mgr.y (mgr.14520) 442 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:07 vm09 bash[22983]: audit 2026-03-09T16:05:06.648906+0000 mgr.y (mgr.14520) 442 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:07 vm09 bash[22983]: cluster 2026-03-09T16:05:06.798627+0000 mgr.y (mgr.14520) 443 : cluster [DBG] pgmap v727: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:07 vm09 bash[22983]: cluster 2026-03-09T16:05:06.798627+0000 mgr.y (mgr.14520) 443 : cluster [DBG] pgmap v727: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:07 vm09 bash[22983]: cluster 2026-03-09T16:05:07.246771+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T16:05:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:07 vm09 bash[22983]: cluster 2026-03-09T16:05:07.246771+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:07 vm01 bash[28152]: audit 2026-03-09T16:05:06.648906+0000 mgr.y (mgr.14520) 442 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:07 vm01 bash[28152]: audit 2026-03-09T16:05:06.648906+0000 mgr.y (mgr.14520) 442 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:07 vm01 bash[28152]: cluster 2026-03-09T16:05:06.798627+0000 mgr.y (mgr.14520) 443 : cluster [DBG] pgmap v727: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:07 vm01 bash[28152]: cluster 2026-03-09T16:05:06.798627+0000 mgr.y (mgr.14520) 443 : cluster [DBG] pgmap v727: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:07 vm01 
bash[28152]: cluster 2026-03-09T16:05:07.246771+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:07 vm01 bash[28152]: cluster 2026-03-09T16:05:07.246771+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:07 vm01 bash[20728]: audit 2026-03-09T16:05:06.648906+0000 mgr.y (mgr.14520) 442 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:07 vm01 bash[20728]: audit 2026-03-09T16:05:06.648906+0000 mgr.y (mgr.14520) 442 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:07 vm01 bash[20728]: cluster 2026-03-09T16:05:06.798627+0000 mgr.y (mgr.14520) 443 : cluster [DBG] pgmap v727: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:07 vm01 bash[20728]: cluster 2026-03-09T16:05:06.798627+0000 mgr.y (mgr.14520) 443 : cluster [DBG] pgmap v727: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:07 vm01 bash[20728]: cluster 2026-03-09T16:05:07.246771+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T16:05:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:07 vm01 bash[20728]: cluster 2026-03-09T16:05:07.246771+0000 mon.a (mon.0) 2987 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-09T16:05:08.132 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:05:07 vm09 bash[50619]: logger=cleanup t=2026-03-09T16:05:07.746319318Z level=info msg="Completed cleanup jobs" duration=1.448031ms 2026-03-09T16:05:08.133 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:05:07 vm09 bash[50619]: logger=plugins.update.checker t=2026-03-09T16:05:07.903308557Z level=info msg="Update check succeeded" duration=52.353274ms 2026-03-09T16:05:10.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:09 vm09 bash[22983]: cluster 2026-03-09T16:05:08.799117+0000 mgr.y (mgr.14520) 444 : cluster [DBG] pgmap v729: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 672 B/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:05:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:09 vm09 bash[22983]: cluster 2026-03-09T16:05:08.799117+0000 mgr.y (mgr.14520) 444 : cluster [DBG] pgmap v729: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 672 B/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:05:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:09 vm01 bash[28152]: cluster 2026-03-09T16:05:08.799117+0000 mgr.y (mgr.14520) 444 : cluster [DBG] pgmap v729: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 672 B/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:05:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:09 vm01 bash[28152]: cluster 2026-03-09T16:05:08.799117+0000 mgr.y (mgr.14520) 444 : cluster [DBG] pgmap v729: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB 
used, 159 GiB / 160 GiB avail; 672 B/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:05:10.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:09 vm01 bash[20728]: cluster 2026-03-09T16:05:08.799117+0000 mgr.y (mgr.14520) 444 : cluster [DBG] pgmap v729: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 672 B/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:05:10.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:09 vm01 bash[20728]: cluster 2026-03-09T16:05:08.799117+0000 mgr.y (mgr.14520) 444 : cluster [DBG] pgmap v729: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 672 B/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T16:05:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:11 vm09 bash[22983]: cluster 2026-03-09T16:05:10.799803+0000 mgr.y (mgr.14520) 445 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s 2026-03-09T16:05:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:11 vm09 bash[22983]: cluster 2026-03-09T16:05:10.799803+0000 mgr.y (mgr.14520) 445 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s 2026-03-09T16:05:12.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:11 vm01 bash[28152]: cluster 2026-03-09T16:05:10.799803+0000 mgr.y (mgr.14520) 445 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s 2026-03-09T16:05:12.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:11 vm01 bash[28152]: cluster 2026-03-09T16:05:10.799803+0000 mgr.y (mgr.14520) 445 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s 2026-03-09T16:05:12.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:11 vm01 bash[20728]: cluster 2026-03-09T16:05:10.799803+0000 mgr.y (mgr.14520) 445 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s 2026-03-09T16:05:12.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:11 vm01 bash[20728]: cluster 2026-03-09T16:05:10.799803+0000 mgr.y (mgr.14520) 445 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s 2026-03-09T16:05:13.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:05:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:05:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:05:14.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:13 vm01 bash[28152]: cluster 2026-03-09T16:05:12.800126+0000 mgr.y (mgr.14520) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:05:14.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:13 vm01 bash[28152]: cluster 2026-03-09T16:05:12.800126+0000 mgr.y (mgr.14520) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:05:14.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:13 vm01 bash[20728]: cluster 2026-03-09T16:05:12.800126+0000 mgr.y (mgr.14520) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 
GiB / 160 GiB avail; 1.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:05:14.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:13 vm01 bash[20728]: cluster 2026-03-09T16:05:12.800126+0000 mgr.y (mgr.14520) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:05:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:13 vm09 bash[22983]: cluster 2026-03-09T16:05:12.800126+0000 mgr.y (mgr.14520) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:05:14.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:13 vm09 bash[22983]: cluster 2026-03-09T16:05:12.800126+0000 mgr.y (mgr.14520) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-09T16:05:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:14 vm01 bash[28152]: cluster 2026-03-09T16:05:13.899434+0000 mon.a (mon.0) 2988 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:14 vm01 bash[28152]: cluster 2026-03-09T16:05:13.899434+0000 mon.a (mon.0) 2988 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:14 vm01 bash[28152]: audit 2026-03-09T16:05:14.340940+0000 mon.a (mon.0) 2989 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:14 vm01 bash[28152]: audit 2026-03-09T16:05:14.340940+0000 mon.a (mon.0) 2989 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:15.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:14 vm01 bash[20728]: cluster 2026-03-09T16:05:13.899434+0000 mon.a (mon.0) 2988 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:15.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:14 vm01 bash[20728]: cluster 2026-03-09T16:05:13.899434+0000 mon.a (mon.0) 2988 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:15.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:14 vm01 bash[20728]: audit 2026-03-09T16:05:14.340940+0000 mon.a (mon.0) 2989 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:15.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:14 vm01 bash[20728]: audit 2026-03-09T16:05:14.340940+0000 mon.a (mon.0) 2989 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:14 vm09 bash[22983]: cluster 2026-03-09T16:05:13.899434+0000 mon.a (mon.0) 2988 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:14 vm09 bash[22983]: cluster 
2026-03-09T16:05:13.899434+0000 mon.a (mon.0) 2988 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:14 vm09 bash[22983]: audit 2026-03-09T16:05:14.340940+0000 mon.a (mon.0) 2989 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:15.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:14 vm09 bash[22983]: audit 2026-03-09T16:05:14.340940+0000 mon.a (mon.0) 2989 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:15 vm01 bash[28152]: cluster 2026-03-09T16:05:14.800893+0000 mgr.y (mgr.14520) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 957 B/s wr, 4 op/s 2026-03-09T16:05:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:15 vm01 bash[28152]: cluster 2026-03-09T16:05:14.800893+0000 mgr.y (mgr.14520) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 957 B/s wr, 4 op/s 2026-03-09T16:05:16.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:15 vm01 bash[20728]: cluster 2026-03-09T16:05:14.800893+0000 mgr.y (mgr.14520) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 957 B/s wr, 4 op/s 2026-03-09T16:05:16.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:15 vm01 bash[20728]: cluster 2026-03-09T16:05:14.800893+0000 mgr.y (mgr.14520) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 957 B/s wr, 4 op/s 2026-03-09T16:05:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:15 vm09 bash[22983]: cluster 2026-03-09T16:05:14.800893+0000 mgr.y (mgr.14520) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 957 B/s wr, 4 op/s 2026-03-09T16:05:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:15 vm09 bash[22983]: cluster 2026-03-09T16:05:14.800893+0000 mgr.y (mgr.14520) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 957 B/s wr, 4 op/s 2026-03-09T16:05:17.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:05:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:05:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:17 vm09 bash[22983]: audit 2026-03-09T16:05:16.659615+0000 mgr.y (mgr.14520) 448 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:17 vm09 bash[22983]: audit 2026-03-09T16:05:16.659615+0000 mgr.y (mgr.14520) 448 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:17 vm09 bash[22983]: cluster 2026-03-09T16:05:16.801261+0000 mgr.y (mgr.14520) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 818 B/s 
wr, 3 op/s 2026-03-09T16:05:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:17 vm09 bash[22983]: cluster 2026-03-09T16:05:16.801261+0000 mgr.y (mgr.14520) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 818 B/s wr, 3 op/s 2026-03-09T16:05:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:17 vm01 bash[28152]: audit 2026-03-09T16:05:16.659615+0000 mgr.y (mgr.14520) 448 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:17 vm01 bash[28152]: audit 2026-03-09T16:05:16.659615+0000 mgr.y (mgr.14520) 448 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:17 vm01 bash[28152]: cluster 2026-03-09T16:05:16.801261+0000 mgr.y (mgr.14520) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 818 B/s wr, 3 op/s 2026-03-09T16:05:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:17 vm01 bash[28152]: cluster 2026-03-09T16:05:16.801261+0000 mgr.y (mgr.14520) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 818 B/s wr, 3 op/s 2026-03-09T16:05:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:17 vm01 bash[20728]: audit 2026-03-09T16:05:16.659615+0000 mgr.y (mgr.14520) 448 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:17 vm01 bash[20728]: audit 2026-03-09T16:05:16.659615+0000 mgr.y (mgr.14520) 448 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:17 vm01 bash[20728]: cluster 2026-03-09T16:05:16.801261+0000 mgr.y (mgr.14520) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 818 B/s wr, 3 op/s 2026-03-09T16:05:18.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:17 vm01 bash[20728]: cluster 2026-03-09T16:05:16.801261+0000 mgr.y (mgr.14520) 449 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 818 B/s wr, 3 op/s 2026-03-09T16:05:19.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:19 vm09 bash[22983]: cluster 2026-03-09T16:05:17.972275+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T16:05:19.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:19 vm09 bash[22983]: cluster 2026-03-09T16:05:17.972275+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T16:05:19.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:19 vm01 bash[28152]: cluster 2026-03-09T16:05:17.972275+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T16:05:19.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:19 vm01 bash[28152]: cluster 2026-03-09T16:05:17.972275+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T16:05:19.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:19 
vm01 bash[20728]: cluster 2026-03-09T16:05:17.972275+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T16:05:19.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:19 vm01 bash[20728]: cluster 2026-03-09T16:05:17.972275+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-09T16:05:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:20 vm09 bash[22983]: cluster 2026-03-09T16:05:18.801802+0000 mgr.y (mgr.14520) 450 : cluster [DBG] pgmap v735: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T16:05:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:20 vm09 bash[22983]: cluster 2026-03-09T16:05:18.801802+0000 mgr.y (mgr.14520) 450 : cluster [DBG] pgmap v735: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T16:05:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:20 vm01 bash[28152]: cluster 2026-03-09T16:05:18.801802+0000 mgr.y (mgr.14520) 450 : cluster [DBG] pgmap v735: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T16:05:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:20 vm01 bash[28152]: cluster 2026-03-09T16:05:18.801802+0000 mgr.y (mgr.14520) 450 : cluster [DBG] pgmap v735: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T16:05:20.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:20 vm01 bash[20728]: cluster 2026-03-09T16:05:18.801802+0000 mgr.y (mgr.14520) 450 : cluster [DBG] pgmap v735: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T16:05:20.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:20 vm01 bash[20728]: cluster 2026-03-09T16:05:18.801802+0000 mgr.y (mgr.14520) 450 : cluster [DBG] pgmap v735: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-09T16:05:21.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:21 vm09 bash[22983]: cluster 2026-03-09T16:05:20.802379+0000 mgr.y (mgr.14520) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:21.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:21 vm09 bash[22983]: cluster 2026-03-09T16:05:20.802379+0000 mgr.y (mgr.14520) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:21 vm01 bash[28152]: cluster 2026-03-09T16:05:20.802379+0000 mgr.y (mgr.14520) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:21 vm01 bash[28152]: cluster 2026-03-09T16:05:20.802379+0000 mgr.y (mgr.14520) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:21.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:21 vm01 bash[20728]: cluster 2026-03-09T16:05:20.802379+0000 mgr.y (mgr.14520) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 947 
MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:21.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:21 vm01 bash[20728]: cluster 2026-03-09T16:05:20.802379+0000 mgr.y (mgr.14520) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:23.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:05:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:05:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:05:24.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:23 vm09 bash[22983]: cluster 2026-03-09T16:05:22.802678+0000 mgr.y (mgr.14520) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:24.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:23 vm09 bash[22983]: cluster 2026-03-09T16:05:22.802678+0000 mgr.y (mgr.14520) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:23 vm01 bash[28152]: cluster 2026-03-09T16:05:22.802678+0000 mgr.y (mgr.14520) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:23 vm01 bash[28152]: cluster 2026-03-09T16:05:22.802678+0000 mgr.y (mgr.14520) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:23 vm01 bash[20728]: cluster 2026-03-09T16:05:22.802678+0000 mgr.y (mgr.14520) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:23 vm01 bash[20728]: cluster 2026-03-09T16:05:22.802678+0000 mgr.y (mgr.14520) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:24 vm01 bash[28152]: cluster 2026-03-09T16:05:23.908827+0000 mon.a (mon.0) 2991 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-09T16:05:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:24 vm01 bash[28152]: cluster 2026-03-09T16:05:23.908827+0000 mon.a (mon.0) 2991 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-09T16:05:25.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:24 vm01 bash[20728]: cluster 2026-03-09T16:05:23.908827+0000 mon.a (mon.0) 2991 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-09T16:05:25.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:24 vm01 bash[20728]: cluster 2026-03-09T16:05:23.908827+0000 mon.a (mon.0) 2991 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-09T16:05:25.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:24 vm09 bash[22983]: cluster 2026-03-09T16:05:23.908827+0000 mon.a (mon.0) 2991 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-09T16:05:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:24 vm09 bash[22983]: cluster 2026-03-09T16:05:23.908827+0000 mon.a (mon.0) 2991 : cluster [DBG] 
osdmap e479: 8 total, 8 up, 8 in 2026-03-09T16:05:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:25 vm01 bash[28152]: cluster 2026-03-09T16:05:24.803418+0000 mgr.y (mgr.14520) 453 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:25 vm01 bash[28152]: cluster 2026-03-09T16:05:24.803418+0000 mgr.y (mgr.14520) 453 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:25 vm01 bash[20728]: cluster 2026-03-09T16:05:24.803418+0000 mgr.y (mgr.14520) 453 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:25 vm01 bash[20728]: cluster 2026-03-09T16:05:24.803418+0000 mgr.y (mgr.14520) 453 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:25 vm09 bash[22983]: cluster 2026-03-09T16:05:24.803418+0000 mgr.y (mgr.14520) 453 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:25 vm09 bash[22983]: cluster 2026-03-09T16:05:24.803418+0000 mgr.y (mgr.14520) 453 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:27.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:05:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:05:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:27 vm01 bash[28152]: audit 2026-03-09T16:05:26.661532+0000 mgr.y (mgr.14520) 454 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:27 vm01 bash[28152]: audit 2026-03-09T16:05:26.661532+0000 mgr.y (mgr.14520) 454 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:27 vm01 bash[28152]: cluster 2026-03-09T16:05:26.803775+0000 mgr.y (mgr.14520) 455 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:27 vm01 bash[28152]: cluster 2026-03-09T16:05:26.803775+0000 mgr.y (mgr.14520) 455 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:27 vm01 bash[20728]: audit 2026-03-09T16:05:26.661532+0000 mgr.y (mgr.14520) 454 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:27 vm01 bash[20728]: audit 
2026-03-09T16:05:26.661532+0000 mgr.y (mgr.14520) 454 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:27 vm01 bash[20728]: cluster 2026-03-09T16:05:26.803775+0000 mgr.y (mgr.14520) 455 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:27 vm01 bash[20728]: cluster 2026-03-09T16:05:26.803775+0000 mgr.y (mgr.14520) 455 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:27 vm09 bash[22983]: audit 2026-03-09T16:05:26.661532+0000 mgr.y (mgr.14520) 454 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:27 vm09 bash[22983]: audit 2026-03-09T16:05:26.661532+0000 mgr.y (mgr.14520) 454 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:27 vm09 bash[22983]: cluster 2026-03-09T16:05:26.803775+0000 mgr.y (mgr.14520) 455 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:27 vm09 bash[22983]: cluster 2026-03-09T16:05:26.803775+0000 mgr.y (mgr.14520) 455 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:05:29.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:28 vm09 bash[22983]: audit 2026-03-09T16:05:27.990724+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:28 vm09 bash[22983]: audit 2026-03-09T16:05:27.990724+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:28 vm09 bash[22983]: audit 2026-03-09T16:05:27.990976+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:28 vm09 bash[22983]: audit 2026-03-09T16:05:27.990976+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:28 vm09 bash[22983]: audit 2026-03-09T16:05:27.991305+0000 mon.c (mon.2) 473 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:28 vm09 bash[22983]: audit 2026-03-09T16:05:27.991305+0000 mon.c (mon.2) 473 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:28 vm09 bash[22983]: audit 2026-03-09T16:05:27.991612+0000 mon.a (mon.0) 2993 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:29.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:28 vm09 bash[22983]: audit 2026-03-09T16:05:27.991612+0000 mon.a (mon.0) 2993 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:28 vm01 bash[28152]: audit 2026-03-09T16:05:27.990724+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:28 vm01 bash[28152]: audit 2026-03-09T16:05:27.990724+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:28 vm01 bash[28152]: audit 2026-03-09T16:05:27.990976+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:28 vm01 bash[28152]: audit 2026-03-09T16:05:27.990976+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:28 vm01 bash[28152]: audit 2026-03-09T16:05:27.991305+0000 mon.c (mon.2) 473 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:28 vm01 bash[28152]: audit 2026-03-09T16:05:27.991305+0000 mon.c (mon.2) 473 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:28 vm01 bash[28152]: audit 2026-03-09T16:05:27.991612+0000 mon.a (mon.0) 2993 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:28 vm01 bash[28152]: audit 2026-03-09T16:05:27.991612+0000 mon.a (mon.0) 2993 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:28 vm01 bash[20728]: audit 2026-03-09T16:05:27.990724+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:28 vm01 bash[20728]: audit 2026-03-09T16:05:27.990724+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:28 vm01 bash[20728]: audit 2026-03-09T16:05:27.990976+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:28 vm01 bash[20728]: audit 2026-03-09T16:05:27.990976+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:29.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:28 vm01 bash[20728]: audit 2026-03-09T16:05:27.991305+0000 mon.c (mon.2) 473 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:28 vm01 bash[20728]: audit 2026-03-09T16:05:27.991305+0000 mon.c (mon.2) 473 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:28 vm01 bash[20728]: audit 2026-03-09T16:05:27.991612+0000 mon.a (mon.0) 2993 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:29.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:28 vm01 bash[20728]: audit 2026-03-09T16:05:27.991612+0000 mon.a (mon.0) 2993 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-94"}]: dispatch 2026-03-09T16:05:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:29 vm09 bash[22983]: cluster 2026-03-09T16:05:28.804265+0000 mgr.y (mgr.14520) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 1 op/s 2026-03-09T16:05:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:29 vm09 bash[22983]: cluster 2026-03-09T16:05:28.804265+0000 mgr.y (mgr.14520) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 1 op/s 2026-03-09T16:05:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:29 vm09 bash[22983]: cluster 2026-03-09T16:05:28.949871+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T16:05:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:29 vm09 bash[22983]: cluster 2026-03-09T16:05:28.949871+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T16:05:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:29 vm09 bash[22983]: audit 2026-03-09T16:05:29.347582+0000 mon.a (mon.0) 2995 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:29 vm09 bash[22983]: audit 2026-03-09T16:05:29.347582+0000 mon.a (mon.0) 2995 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:30.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:29 vm01 bash[28152]: cluster 2026-03-09T16:05:28.804265+0000 mgr.y (mgr.14520) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 1 op/s 2026-03-09T16:05:30.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:29 vm01 bash[28152]: cluster 2026-03-09T16:05:28.804265+0000 mgr.y (mgr.14520) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 1 op/s 2026-03-09T16:05:30.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:29 vm01 bash[28152]: cluster 2026-03-09T16:05:28.949871+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T16:05:30.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:29 vm01 bash[28152]: cluster 2026-03-09T16:05:28.949871+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T16:05:30.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:29 vm01 bash[28152]: audit 2026-03-09T16:05:29.347582+0000 mon.a (mon.0) 2995 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:30.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:29 vm01 bash[28152]: audit 2026-03-09T16:05:29.347582+0000 mon.a (mon.0) 2995 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:29 vm01 bash[20728]: cluster 2026-03-09T16:05:28.804265+0000 mgr.y (mgr.14520) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 
active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 1 op/s 2026-03-09T16:05:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:29 vm01 bash[20728]: cluster 2026-03-09T16:05:28.804265+0000 mgr.y (mgr.14520) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 1 op/s 2026-03-09T16:05:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:29 vm01 bash[20728]: cluster 2026-03-09T16:05:28.949871+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T16:05:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:29 vm01 bash[20728]: cluster 2026-03-09T16:05:28.949871+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-09T16:05:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:29 vm01 bash[20728]: audit 2026-03-09T16:05:29.347582+0000 mon.a (mon.0) 2995 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:30.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:29 vm01 bash[20728]: audit 2026-03-09T16:05:29.347582+0000 mon.a (mon.0) 2995 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:31.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:30 vm09 bash[22983]: cluster 2026-03-09T16:05:29.949640+0000 mon.a (mon.0) 2996 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T16:05:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:30 vm09 bash[22983]: cluster 2026-03-09T16:05:29.949640+0000 mon.a (mon.0) 2996 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T16:05:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:30 vm09 bash[22983]: audit 2026-03-09T16:05:29.972141+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:30 vm09 bash[22983]: audit 2026-03-09T16:05:29.972141+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:30 vm09 bash[22983]: audit 2026-03-09T16:05:29.972410+0000 mon.a (mon.0) 2997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:31.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:30 vm09 bash[22983]: audit 2026-03-09T16:05:29.972410+0000 mon.a (mon.0) 2997 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:31.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:30 vm01 bash[28152]: cluster 2026-03-09T16:05:29.949640+0000 mon.a (mon.0) 2996 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T16:05:31.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:30 vm01 bash[28152]: cluster 2026-03-09T16:05:29.949640+0000 mon.a (mon.0) 2996 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T16:05:31.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:30 vm01 bash[28152]: audit 2026-03-09T16:05:29.972141+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:31.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:30 vm01 bash[28152]: audit 2026-03-09T16:05:29.972141+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:31.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:30 vm01 bash[28152]: audit 2026-03-09T16:05:29.972410+0000 mon.a (mon.0) 2997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:31.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:30 vm01 bash[28152]: audit 2026-03-09T16:05:29.972410+0000 mon.a (mon.0) 2997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:30 vm01 bash[20728]: cluster 2026-03-09T16:05:29.949640+0000 mon.a (mon.0) 2996 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T16:05:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:30 vm01 bash[20728]: cluster 2026-03-09T16:05:29.949640+0000 mon.a (mon.0) 2996 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-09T16:05:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:30 vm01 bash[20728]: audit 2026-03-09T16:05:29.972141+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:30 vm01 bash[20728]: audit 2026-03-09T16:05:29.972141+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:30 vm01 bash[20728]: audit 2026-03-09T16:05:29.972410+0000 mon.a (mon.0) 2997 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:31.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:30 vm01 bash[20728]: audit 2026-03-09T16:05:29.972410+0000 mon.a (mon.0) 2997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:31 vm09 bash[22983]: cluster 2026-03-09T16:05:30.804572+0000 mgr.y (mgr.14520) 457 : cluster [DBG] pgmap v744: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:31 vm09 bash[22983]: cluster 2026-03-09T16:05:30.804572+0000 mgr.y (mgr.14520) 457 : cluster [DBG] pgmap v744: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:31 vm09 bash[22983]: cluster 2026-03-09T16:05:30.947397+0000 mon.a (mon.0) 2998 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:31 vm09 bash[22983]: cluster 2026-03-09T16:05:30.947397+0000 mon.a (mon.0) 2998 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:31 vm09 bash[22983]: audit 2026-03-09T16:05:30.958269+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:31 vm09 bash[22983]: audit 2026-03-09T16:05:30.958269+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:31 vm09 bash[22983]: audit 2026-03-09T16:05:30.972471+0000 mon.c (mon.2) 475 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:31 vm09 bash[22983]: audit 2026-03-09T16:05:30.972471+0000 mon.c (mon.2) 475 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:31 vm09 bash[22983]: cluster 2026-03-09T16:05:30.977484+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T16:05:32.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:31 vm09 bash[22983]: cluster 2026-03-09T16:05:30.977484+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T16:05:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:31 vm01 bash[28152]: cluster 2026-03-09T16:05:30.804572+0000 mgr.y (mgr.14520) 457 : cluster [DBG] pgmap v744: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:31 vm01 bash[28152]: cluster 2026-03-09T16:05:30.804572+0000 mgr.y (mgr.14520) 457 : cluster [DBG] pgmap v744: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:31 vm01 bash[28152]: cluster 2026-03-09T16:05:30.947397+0000 mon.a (mon.0) 2998 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:31 vm01 bash[28152]: cluster 2026-03-09T16:05:30.947397+0000 mon.a (mon.0) 2998 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:31 vm01 bash[28152]: audit 2026-03-09T16:05:30.958269+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:31 vm01 bash[28152]: audit 2026-03-09T16:05:30.958269+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:31 vm01 bash[28152]: audit 2026-03-09T16:05:30.972471+0000 mon.c (mon.2) 475 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:31 vm01 bash[28152]: audit 2026-03-09T16:05:30.972471+0000 mon.c (mon.2) 475 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:31 vm01 bash[28152]: cluster 2026-03-09T16:05:30.977484+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:31 vm01 bash[28152]: cluster 2026-03-09T16:05:30.977484+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:31 vm01 bash[20728]: cluster 2026-03-09T16:05:30.804572+0000 mgr.y (mgr.14520) 457 : cluster [DBG] pgmap v744: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:31 vm01 bash[20728]: cluster 2026-03-09T16:05:30.804572+0000 mgr.y (mgr.14520) 457 : cluster [DBG] pgmap v744: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:31 vm01 bash[20728]: cluster 2026-03-09T16:05:30.947397+0000 mon.a (mon.0) 2998 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:31 vm01 bash[20728]: cluster 2026-03-09T16:05:30.947397+0000 mon.a (mon.0) 2998 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:31 vm01 bash[20728]: audit 2026-03-09T16:05:30.958269+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:31 vm01 bash[20728]: audit 2026-03-09T16:05:30.958269+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:31 vm01 bash[20728]: audit 2026-03-09T16:05:30.972471+0000 mon.c (mon.2) 475 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:31 vm01 bash[20728]: audit 2026-03-09T16:05:30.972471+0000 mon.c (mon.2) 475 : audit [DBG] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:31 vm01 bash[20728]: cluster 2026-03-09T16:05:30.977484+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T16:05:32.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:31 vm01 bash[20728]: cluster 2026-03-09T16:05:30.977484+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-09T16:05:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:32 vm01 bash[28152]: cluster 2026-03-09T16:05:31.966025+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T16:05:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:32 vm01 bash[28152]: cluster 2026-03-09T16:05:31.966025+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T16:05:33.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:05:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:05:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:05:33.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:32 vm01 bash[20728]: cluster 2026-03-09T16:05:31.966025+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T16:05:33.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:32 vm01 bash[20728]: cluster 2026-03-09T16:05:31.966025+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T16:05:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:32 vm09 bash[22983]: cluster 2026-03-09T16:05:31.966025+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T16:05:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:32 vm09 bash[22983]: cluster 2026-03-09T16:05:31.966025+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-09T16:05:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:33 vm09 bash[22983]: cluster 2026-03-09T16:05:32.804895+0000 mgr.y (mgr.14520) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:33 vm09 bash[22983]: cluster 2026-03-09T16:05:32.804895+0000 mgr.y (mgr.14520) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:34 vm01 bash[28152]: cluster 2026-03-09T16:05:32.804895+0000 mgr.y (mgr.14520) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:34 vm01 bash[28152]: cluster 2026-03-09T16:05:32.804895+0000 mgr.y (mgr.14520) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:34 vm01 bash[20728]: cluster 2026-03-09T16:05:32.804895+0000 mgr.y (mgr.14520) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:34 vm01 bash[20728]: cluster 2026-03-09T16:05:32.804895+0000 mgr.y (mgr.14520) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 
active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: cluster 2026-03-09T16:05:34.806065+0000 mgr.y (mgr.14520) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 872 B/s wr, 3 op/s 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: cluster 2026-03-09T16:05:34.806065+0000 mgr.y (mgr.14520) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 872 B/s wr, 3 op/s 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.183650+0000 mon.a (mon.0) 3002 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.183650+0000 mon.a (mon.0) 3002 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.514479+0000 mon.a (mon.0) 3003 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.514479+0000 mon.a (mon.0) 3003 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.515051+0000 mon.a (mon.0) 3004 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.515051+0000 mon.a (mon.0) 3004 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.515601+0000 mon.a (mon.0) 3005 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.515601+0000 mon.a (mon.0) 3005 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.515987+0000 mon.a (mon.0) 3006 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.515987+0000 mon.a (mon.0) 
3006 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.523106+0000 mon.a (mon.0) 3007 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:05:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:36 vm09 bash[22983]: audit 2026-03-09T16:05:35.523106+0000 mon.a (mon.0) 3007 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: cluster 2026-03-09T16:05:34.806065+0000 mgr.y (mgr.14520) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 872 B/s wr, 3 op/s 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: cluster 2026-03-09T16:05:34.806065+0000 mgr.y (mgr.14520) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 872 B/s wr, 3 op/s 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.183650+0000 mon.a (mon.0) 3002 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.183650+0000 mon.a (mon.0) 3002 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.514479+0000 mon.a (mon.0) 3003 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.514479+0000 mon.a (mon.0) 3003 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.515051+0000 mon.a (mon.0) 3004 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.515051+0000 mon.a (mon.0) 3004 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.515601+0000 mon.a (mon.0) 3005 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.515601+0000 mon.a (mon.0) 3005 : audit [DBG] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.515987+0000 mon.a (mon.0) 3006 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.515987+0000 mon.a (mon.0) 3006 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.523106+0000 mon.a (mon.0) 3007 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:36 vm01 bash[28152]: audit 2026-03-09T16:05:35.523106+0000 mon.a (mon.0) 3007 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: cluster 2026-03-09T16:05:34.806065+0000 mgr.y (mgr.14520) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 872 B/s wr, 3 op/s 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: cluster 2026-03-09T16:05:34.806065+0000 mgr.y (mgr.14520) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 872 B/s wr, 3 op/s 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.183650+0000 mon.a (mon.0) 3002 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.183650+0000 mon.a (mon.0) 3002 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.514479+0000 mon.a (mon.0) 3003 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.514479+0000 mon.a (mon.0) 3003 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.515051+0000 mon.a (mon.0) 3004 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.515051+0000 mon.a (mon.0) 3004 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", 
"who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:05:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.515601+0000 mon.a (mon.0) 3005 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:05:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.515601+0000 mon.a (mon.0) 3005 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:05:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.515987+0000 mon.a (mon.0) 3006 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:05:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.515987+0000 mon.a (mon.0) 3006 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:05:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.523106+0000 mon.a (mon.0) 3007 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:05:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:36 vm01 bash[20728]: audit 2026-03-09T16:05:35.523106+0000 mon.a (mon.0) 3007 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:05:37.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:05:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:05:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:38 vm09 bash[22983]: audit 2026-03-09T16:05:36.667994+0000 mgr.y (mgr.14520) 460 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:38 vm09 bash[22983]: audit 2026-03-09T16:05:36.667994+0000 mgr.y (mgr.14520) 460 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:38 vm09 bash[22983]: cluster 2026-03-09T16:05:36.806416+0000 mgr.y (mgr.14520) 461 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 746 B/s rd, 746 B/s wr, 2 op/s 2026-03-09T16:05:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:38 vm09 bash[22983]: cluster 2026-03-09T16:05:36.806416+0000 mgr.y (mgr.14520) 461 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 746 B/s rd, 746 B/s wr, 2 op/s 2026-03-09T16:05:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:38 vm01 bash[28152]: audit 2026-03-09T16:05:36.667994+0000 mgr.y (mgr.14520) 460 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:38 vm01 bash[28152]: audit 2026-03-09T16:05:36.667994+0000 mgr.y (mgr.14520) 460 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-09T16:05:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:38 vm01 bash[28152]: cluster 2026-03-09T16:05:36.806416+0000 mgr.y (mgr.14520) 461 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 746 B/s rd, 746 B/s wr, 2 op/s 2026-03-09T16:05:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:38 vm01 bash[28152]: cluster 2026-03-09T16:05:36.806416+0000 mgr.y (mgr.14520) 461 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 746 B/s rd, 746 B/s wr, 2 op/s 2026-03-09T16:05:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:38 vm01 bash[20728]: audit 2026-03-09T16:05:36.667994+0000 mgr.y (mgr.14520) 460 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:38 vm01 bash[20728]: audit 2026-03-09T16:05:36.667994+0000 mgr.y (mgr.14520) 460 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:38 vm01 bash[20728]: cluster 2026-03-09T16:05:36.806416+0000 mgr.y (mgr.14520) 461 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 746 B/s rd, 746 B/s wr, 2 op/s 2026-03-09T16:05:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:38 vm01 bash[20728]: cluster 2026-03-09T16:05:36.806416+0000 mgr.y (mgr.14520) 461 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 746 B/s rd, 746 B/s wr, 2 op/s 2026-03-09T16:05:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:39 vm09 bash[22983]: cluster 2026-03-09T16:05:38.903367+0000 mon.a (mon.0) 3008 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:39 vm09 bash[22983]: cluster 2026-03-09T16:05:38.903367+0000 mon.a (mon.0) 3008 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:39 vm01 bash[28152]: cluster 2026-03-09T16:05:38.903367+0000 mon.a (mon.0) 3008 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:39 vm01 bash[28152]: cluster 2026-03-09T16:05:38.903367+0000 mon.a (mon.0) 3008 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:39 vm01 bash[20728]: cluster 2026-03-09T16:05:38.903367+0000 mon.a (mon.0) 3008 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:39 vm01 bash[20728]: cluster 2026-03-09T16:05:38.903367+0000 mon.a (mon.0) 3008 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:05:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:40 vm09 bash[22983]: cluster 2026-03-09T16:05:38.807043+0000 mgr.y (mgr.14520) 462 : cluster [DBG] pgmap v750: 292 pgs: 
292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 639 B/s wr, 1 op/s 2026-03-09T16:05:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:40 vm09 bash[22983]: cluster 2026-03-09T16:05:38.807043+0000 mgr.y (mgr.14520) 462 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 639 B/s wr, 1 op/s 2026-03-09T16:05:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:40 vm01 bash[28152]: cluster 2026-03-09T16:05:38.807043+0000 mgr.y (mgr.14520) 462 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 639 B/s wr, 1 op/s 2026-03-09T16:05:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:40 vm01 bash[28152]: cluster 2026-03-09T16:05:38.807043+0000 mgr.y (mgr.14520) 462 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 639 B/s wr, 1 op/s 2026-03-09T16:05:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:40 vm01 bash[20728]: cluster 2026-03-09T16:05:38.807043+0000 mgr.y (mgr.14520) 462 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 639 B/s wr, 1 op/s 2026-03-09T16:05:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:40 vm01 bash[20728]: cluster 2026-03-09T16:05:38.807043+0000 mgr.y (mgr.14520) 462 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 639 B/s wr, 1 op/s 2026-03-09T16:05:42.382 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:05:42 vm09 bash[37995]: debug 2026-03-09T16:05:42.000+0000 7f58ec758640 -1 snap_mapper.add_oid found existing snaps mapped on 104:24715e3f:test-rados-api-vm01-59821-97::foo:21, removing 2026-03-09T16:05:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:42 vm09 bash[22983]: cluster 2026-03-09T16:05:40.807625+0000 mgr.y (mgr.14520) 463 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 519 B/s wr, 2 op/s 2026-03-09T16:05:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:42 vm09 bash[22983]: cluster 2026-03-09T16:05:40.807625+0000 mgr.y (mgr.14520) 463 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 519 B/s wr, 2 op/s 2026-03-09T16:05:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:42 vm09 bash[22983]: audit 2026-03-09T16:05:42.017873+0000 mon.c (mon.2) 476 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:42 vm09 bash[22983]: audit 2026-03-09T16:05:42.017873+0000 mon.c (mon.2) 476 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:42 vm09 bash[22983]: audit 2026-03-09T16:05:42.018283+0000 mon.a (mon.0) 3009 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:42 vm09 bash[22983]: audit 2026-03-09T16:05:42.018283+0000 mon.a (mon.0) 3009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:42 vm09 bash[22983]: audit 2026-03-09T16:05:42.018871+0000 mon.c (mon.2) 477 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:42 vm09 bash[22983]: audit 2026-03-09T16:05:42.018871+0000 mon.c (mon.2) 477 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:42 vm09 bash[22983]: audit 2026-03-09T16:05:42.019090+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:42 vm09 bash[22983]: audit 2026-03-09T16:05:42.019090+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 16:05:42 vm01 bash[36842]: debug 2026-03-09T16:05:42.002+0000 7f43c5c14640 -1 snap_mapper.add_oid found existing snaps mapped on 104:24715e3f:test-rados-api-vm01-59821-97::foo:21, removing 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:42 vm01 bash[28152]: cluster 2026-03-09T16:05:40.807625+0000 mgr.y (mgr.14520) 463 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 519 B/s wr, 2 op/s 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:42 vm01 bash[28152]: cluster 2026-03-09T16:05:40.807625+0000 mgr.y (mgr.14520) 463 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 519 B/s wr, 2 op/s 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:42 vm01 bash[28152]: audit 2026-03-09T16:05:42.017873+0000 mon.c (mon.2) 476 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:42 vm01 bash[28152]: audit 2026-03-09T16:05:42.017873+0000 mon.c (mon.2) 476 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:42 vm01 bash[28152]: audit 2026-03-09T16:05:42.018283+0000 mon.a (mon.0) 3009 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:42 vm01 bash[28152]: audit 2026-03-09T16:05:42.018283+0000 mon.a (mon.0) 3009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:42 vm01 bash[28152]: audit 2026-03-09T16:05:42.018871+0000 mon.c (mon.2) 477 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:42 vm01 bash[28152]: audit 2026-03-09T16:05:42.018871+0000 mon.c (mon.2) 477 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:42 vm01 bash[28152]: audit 2026-03-09T16:05:42.019090+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:42 vm01 bash[28152]: audit 2026-03-09T16:05:42.019090+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:42 vm01 bash[20728]: cluster 2026-03-09T16:05:40.807625+0000 mgr.y (mgr.14520) 463 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 519 B/s wr, 2 op/s 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:42 vm01 bash[20728]: cluster 2026-03-09T16:05:40.807625+0000 mgr.y (mgr.14520) 463 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 519 B/s wr, 2 op/s 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:42 vm01 bash[20728]: audit 2026-03-09T16:05:42.017873+0000 mon.c (mon.2) 476 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:42 vm01 bash[20728]: audit 2026-03-09T16:05:42.017873+0000 mon.c (mon.2) 476 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:42 vm01 bash[20728]: audit 2026-03-09T16:05:42.018283+0000 mon.a (mon.0) 3009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:42 vm01 bash[20728]: audit 2026-03-09T16:05:42.018283+0000 mon.a (mon.0) 3009 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:42 vm01 bash[20728]: audit 2026-03-09T16:05:42.018871+0000 mon.c (mon.2) 477 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:42 vm01 bash[20728]: audit 2026-03-09T16:05:42.018871+0000 mon.c (mon.2) 477 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:42 vm01 bash[20728]: audit 2026-03-09T16:05:42.019090+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:42 vm01 bash[20728]: audit 2026-03-09T16:05:42.019090+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-96"}]: dispatch 2026-03-09T16:05:42.427 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 16:05:42 vm01 bash[31061]: debug 2026-03-09T16:05:42.002+0000 7f9fa5638640 -1 snap_mapper.add_oid found existing snaps mapped on 104:24715e3f:test-rados-api-vm01-59821-97::foo:21, removing 2026-03-09T16:05:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:43 vm01 bash[28152]: cluster 2026-03-09T16:05:42.091499+0000 mon.a (mon.0) 3011 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T16:05:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:43 vm01 bash[28152]: cluster 2026-03-09T16:05:42.091499+0000 mon.a (mon.0) 3011 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T16:05:43.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:05:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:05:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:05:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:43 vm01 bash[20728]: cluster 2026-03-09T16:05:42.091499+0000 mon.a (mon.0) 3011 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T16:05:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:43 vm01 bash[20728]: cluster 2026-03-09T16:05:42.091499+0000 mon.a (mon.0) 3011 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T16:05:43.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:43 vm09 bash[22983]: cluster 2026-03-09T16:05:42.091499+0000 mon.a (mon.0) 3011 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T16:05:43.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:43 vm09 bash[22983]: cluster 2026-03-09T16:05:42.091499+0000 mon.a (mon.0) 3011 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-09T16:05:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:44 vm01 bash[28152]: cluster 2026-03-09T16:05:42.807965+0000 mgr.y (mgr.14520) 464 : cluster [DBG] pgmap v753: 260 pgs: 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T16:05:44.426 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:44 vm01 bash[28152]: cluster 2026-03-09T16:05:42.807965+0000 mgr.y (mgr.14520) 464 : cluster [DBG] pgmap v753: 260 pgs: 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T16:05:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:44 vm01 bash[28152]: cluster 2026-03-09T16:05:43.118876+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T16:05:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:44 vm01 bash[28152]: cluster 2026-03-09T16:05:43.118876+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:44 vm01 bash[28152]: audit 2026-03-09T16:05:43.120867+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:44 vm01 bash[28152]: audit 2026-03-09T16:05:43.120867+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:44 vm01 bash[28152]: audit 2026-03-09T16:05:43.124103+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:44 vm01 bash[28152]: audit 2026-03-09T16:05:43.124103+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:44 vm01 bash[20728]: cluster 2026-03-09T16:05:42.807965+0000 mgr.y (mgr.14520) 464 : cluster [DBG] pgmap v753: 260 pgs: 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:44 vm01 bash[20728]: cluster 2026-03-09T16:05:42.807965+0000 mgr.y (mgr.14520) 464 : cluster [DBG] pgmap v753: 260 pgs: 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:44 vm01 bash[20728]: cluster 2026-03-09T16:05:43.118876+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:44 vm01 bash[20728]: cluster 2026-03-09T16:05:43.118876+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:44 vm01 bash[20728]: audit 2026-03-09T16:05:43.120867+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:44 vm01 bash[20728]: audit 2026-03-09T16:05:43.120867+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:44 vm01 bash[20728]: audit 2026-03-09T16:05:43.124103+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:44.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:44 vm01 bash[20728]: audit 2026-03-09T16:05:43.124103+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:44 vm09 bash[22983]: cluster 2026-03-09T16:05:42.807965+0000 mgr.y (mgr.14520) 464 : cluster [DBG] pgmap v753: 260 pgs: 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T16:05:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:44 vm09 bash[22983]: cluster 2026-03-09T16:05:42.807965+0000 mgr.y (mgr.14520) 464 : cluster [DBG] pgmap v753: 260 pgs: 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T16:05:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:44 vm09 bash[22983]: cluster 2026-03-09T16:05:43.118876+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T16:05:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:44 vm09 bash[22983]: cluster 2026-03-09T16:05:43.118876+0000 mon.a (mon.0) 3012 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-09T16:05:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:44 vm09 bash[22983]: audit 2026-03-09T16:05:43.120867+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:44 vm09 bash[22983]: audit 2026-03-09T16:05:43.120867+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:44 vm09 bash[22983]: audit 2026-03-09T16:05:43.124103+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:44 vm09 bash[22983]: audit 2026-03-09T16:05:43.124103+0000 mon.a (mon.0) 3013 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: audit 2026-03-09T16:05:44.118549+0000 mon.a (mon.0) 3014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: audit 2026-03-09T16:05:44.118549+0000 mon.a (mon.0) 3014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: cluster 2026-03-09T16:05:44.135802+0000 mon.a (mon.0) 3015 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T16:05:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: cluster 2026-03-09T16:05:44.135802+0000 mon.a (mon.0) 3015 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T16:05:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: audit 2026-03-09T16:05:44.193833+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: audit 2026-03-09T16:05:44.193833+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: audit 2026-03-09T16:05:44.194114+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: audit 2026-03-09T16:05:44.194114+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: audit 2026-03-09T16:05:44.353783+0000 mon.a (mon.0) 3017 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: audit 2026-03-09T16:05:44.353783+0000 mon.a (mon.0) 3017 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: cluster 2026-03-09T16:05:44.808560+0000 mgr.y (mgr.14520) 465 : cluster [DBG] pgmap v756: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:45 vm01 bash[28152]: cluster 2026-03-09T16:05:44.808560+0000 mgr.y (mgr.14520) 465 : cluster [DBG] pgmap v756: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: audit 2026-03-09T16:05:44.118549+0000 mon.a (mon.0) 3014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: audit 2026-03-09T16:05:44.118549+0000 mon.a (mon.0) 3014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: cluster 2026-03-09T16:05:44.135802+0000 mon.a (mon.0) 3015 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: cluster 2026-03-09T16:05:44.135802+0000 mon.a (mon.0) 3015 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: audit 2026-03-09T16:05:44.193833+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: audit 2026-03-09T16:05:44.193833+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: audit 2026-03-09T16:05:44.194114+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: audit 2026-03-09T16:05:44.194114+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: audit 2026-03-09T16:05:44.353783+0000 mon.a (mon.0) 3017 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: audit 2026-03-09T16:05:44.353783+0000 mon.a (mon.0) 3017 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: cluster 2026-03-09T16:05:44.808560+0000 mgr.y (mgr.14520) 465 : cluster [DBG] pgmap v756: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-09T16:05:45.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:45 vm01 bash[20728]: cluster 2026-03-09T16:05:44.808560+0000 mgr.y (mgr.14520) 465 : cluster [DBG] pgmap v756: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-09T16:05:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: audit 2026-03-09T16:05:44.118549+0000 mon.a (mon.0) 3014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: audit 2026-03-09T16:05:44.118549+0000 mon.a (mon.0) 3014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: cluster 2026-03-09T16:05:44.135802+0000 mon.a (mon.0) 3015 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T16:05:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: cluster 2026-03-09T16:05:44.135802+0000 mon.a (mon.0) 3015 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-09T16:05:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: audit 2026-03-09T16:05:44.193833+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: audit 2026-03-09T16:05:44.193833+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: audit 2026-03-09T16:05:44.194114+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: audit 2026-03-09T16:05:44.194114+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: audit 2026-03-09T16:05:44.353783+0000 mon.a (mon.0) 3017 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: audit 2026-03-09T16:05:44.353783+0000 mon.a (mon.0) 3017 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: cluster 2026-03-09T16:05:44.808560+0000 mgr.y (mgr.14520) 465 : cluster [DBG] pgmap v756: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-09T16:05:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:45 vm09 bash[22983]: cluster 2026-03-09T16:05:44.808560+0000 mgr.y (mgr.14520) 465 : cluster [DBG] pgmap v756: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-09T16:05:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:46 vm09 bash[22983]: audit 2026-03-09T16:05:45.161042+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:46 vm09 bash[22983]: audit 2026-03-09T16:05:45.161042+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:46 vm09 bash[22983]: audit 2026-03-09T16:05:45.174177+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:46 vm09 bash[22983]: audit 2026-03-09T16:05:45.174177+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:46 vm09 bash[22983]: cluster 2026-03-09T16:05:45.176635+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T16:05:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:46 vm09 bash[22983]: cluster 2026-03-09T16:05:45.176635+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T16:05:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:46 vm09 bash[22983]: audit 2026-03-09T16:05:45.177306+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:46 vm09 bash[22983]: audit 2026-03-09T16:05:45.177306+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:46 vm01 bash[28152]: audit 2026-03-09T16:05:45.161042+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:46 vm01 bash[28152]: audit 2026-03-09T16:05:45.161042+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:46 vm01 bash[28152]: audit 2026-03-09T16:05:45.174177+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:46 vm01 bash[28152]: audit 2026-03-09T16:05:45.174177+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:46 vm01 bash[28152]: cluster 2026-03-09T16:05:45.176635+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T16:05:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:46 vm01 bash[28152]: cluster 2026-03-09T16:05:45.176635+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T16:05:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:46 vm01 bash[28152]: audit 2026-03-09T16:05:45.177306+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:46 vm01 bash[28152]: audit 2026-03-09T16:05:45.177306+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:46 vm01 bash[20728]: audit 2026-03-09T16:05:45.161042+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:46 vm01 bash[20728]: audit 2026-03-09T16:05:45.161042+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:46 vm01 bash[20728]: audit 2026-03-09T16:05:45.174177+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:46 vm01 bash[20728]: audit 2026-03-09T16:05:45.174177+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:46 vm01 bash[20728]: cluster 2026-03-09T16:05:45.176635+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T16:05:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:46 vm01 bash[20728]: cluster 2026-03-09T16:05:45.176635+0000 mon.a (mon.0) 3019 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-09T16:05:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:46 vm01 bash[20728]: audit 2026-03-09T16:05:45.177306+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:46 vm01 bash[20728]: audit 2026-03-09T16:05:45.177306+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]: dispatch 2026-03-09T16:05:47.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:05:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: cluster 2026-03-09T16:05:46.161159+0000 mon.a (mon.0) 3021 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: cluster 2026-03-09T16:05:46.161159+0000 mon.a (mon.0) 3021 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:46.165427+0000 mon.a (mon.0) 3022 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]': finished 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:46.165427+0000 mon.a (mon.0) 3022 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]': finished 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:46.169525+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:46.169525+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: cluster 2026-03-09T16:05:46.182031+0000 mon.a (mon.0) 3023 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: cluster 2026-03-09T16:05:46.182031+0000 mon.a (mon.0) 3023 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:46.188307+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:46.188307+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:46.678779+0000 mgr.y (mgr.14520) 466 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:46.678779+0000 mgr.y (mgr.14520) 466 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: cluster 2026-03-09T16:05:46.808963+0000 mgr.y (mgr.14520) 467 : cluster [DBG] pgmap v759: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: cluster 2026-03-09T16:05:46.808963+0000 mgr.y (mgr.14520) 467 : cluster [DBG] pgmap v759: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:47.169824+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:47.169824+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:47.177848+0000 mon.c (mon.2) 482 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:47.177848+0000 mon.c (mon.2) 482 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: cluster 2026-03-09T16:05:47.184862+0000 mon.a (mon.0) 3026 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: cluster 2026-03-09T16:05:47.184862+0000 mon.a (mon.0) 3026 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:47.185952+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:47 vm09 bash[22983]: audit 2026-03-09T16:05:47.185952+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: cluster 2026-03-09T16:05:46.161159+0000 mon.a (mon.0) 3021 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:05:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: cluster 2026-03-09T16:05:46.161159+0000 mon.a (mon.0) 3021 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:05:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:46.165427+0000 mon.a (mon.0) 3022 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]': finished 2026-03-09T16:05:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:46.165427+0000 mon.a (mon.0) 3022 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]': finished 2026-03-09T16:05:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:46.169525+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:46.169525+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: cluster 2026-03-09T16:05:46.182031+0000 mon.a (mon.0) 3023 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: cluster 2026-03-09T16:05:46.182031+0000 mon.a (mon.0) 3023 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:46.188307+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:46.188307+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:46.678779+0000 mgr.y (mgr.14520) 466 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:46.678779+0000 mgr.y (mgr.14520) 466 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: cluster 2026-03-09T16:05:46.808963+0000 mgr.y (mgr.14520) 467 : cluster [DBG] pgmap v759: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: cluster 2026-03-09T16:05:46.808963+0000 mgr.y (mgr.14520) 467 : cluster [DBG] pgmap v759: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:47.169824+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:47.169824+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:47.177848+0000 mon.c (mon.2) 482 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:47.177848+0000 mon.c (mon.2) 482 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: cluster 2026-03-09T16:05:47.184862+0000 mon.a (mon.0) 3026 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: cluster 2026-03-09T16:05:47.184862+0000 mon.a (mon.0) 3026 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:47.185952+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:47 vm01 bash[28152]: audit 2026-03-09T16:05:47.185952+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: cluster 2026-03-09T16:05:46.161159+0000 mon.a (mon.0) 3021 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: cluster 2026-03-09T16:05:46.161159+0000 mon.a (mon.0) 3021 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:46.165427+0000 mon.a (mon.0) 3022 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]': finished 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:46.165427+0000 mon.a (mon.0) 3022 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-98", "mode": "writeback"}]': finished 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:46.169525+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:46.169525+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: cluster 2026-03-09T16:05:46.182031+0000 mon.a (mon.0) 3023 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: cluster 2026-03-09T16:05:46.182031+0000 mon.a (mon.0) 3023 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:46.188307+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:46.188307+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:46.678779+0000 mgr.y (mgr.14520) 466 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:46.678779+0000 mgr.y (mgr.14520) 466 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: cluster 2026-03-09T16:05:46.808963+0000 mgr.y (mgr.14520) 467 : cluster [DBG] pgmap v759: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: cluster 2026-03-09T16:05:46.808963+0000 mgr.y (mgr.14520) 467 : cluster [DBG] pgmap v759: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:47.169824+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:47.169824+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:47.177848+0000 mon.c (mon.2) 482 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:47.177848+0000 mon.c (mon.2) 482 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: cluster 2026-03-09T16:05:47.184862+0000 mon.a (mon.0) 3026 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: cluster 2026-03-09T16:05:47.184862+0000 mon.a (mon.0) 3026 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:47.185952+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:47 vm01 bash[20728]: audit 2026-03-09T16:05:47.185952+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:05:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:48 vm09 bash[22983]: cluster 2026-03-09T16:05:48.170091+0000 mon.a (mon.0) 3028 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:05:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:48 vm09 bash[22983]: cluster 2026-03-09T16:05:48.170091+0000 mon.a (mon.0) 3028 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:05:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:48 vm09 bash[22983]: audit 2026-03-09T16:05:48.264725+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:05:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:48 vm09 bash[22983]: audit 2026-03-09T16:05:48.264725+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:05:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:48 vm09 bash[22983]: audit 2026-03-09T16:05:48.273756+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:48 vm09 bash[22983]: audit 2026-03-09T16:05:48.273756+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:48 vm09 bash[22983]: cluster 2026-03-09T16:05:48.276616+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T16:05:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:48 vm09 bash[22983]: cluster 2026-03-09T16:05:48.276616+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T16:05:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:48 vm09 bash[22983]: audit 2026-03-09T16:05:48.280384+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:48 vm09 bash[22983]: audit 2026-03-09T16:05:48.280384+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:48 vm01 bash[28152]: cluster 2026-03-09T16:05:48.170091+0000 mon.a (mon.0) 3028 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:05:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:48 vm01 bash[28152]: cluster 2026-03-09T16:05:48.170091+0000 mon.a (mon.0) 3028 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:05:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:48 vm01 bash[28152]: audit 2026-03-09T16:05:48.264725+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:05:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:48 vm01 bash[28152]: audit 2026-03-09T16:05:48.264725+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:05:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:48 vm01 bash[28152]: audit 2026-03-09T16:05:48.273756+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:48 vm01 bash[28152]: audit 2026-03-09T16:05:48.273756+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:48 vm01 bash[28152]: cluster 2026-03-09T16:05:48.276616+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T16:05:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:48 vm01 bash[28152]: cluster 2026-03-09T16:05:48.276616+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T16:05:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:48 vm01 bash[28152]: audit 2026-03-09T16:05:48.280384+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:48 vm01 bash[28152]: audit 2026-03-09T16:05:48.280384+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:48 vm01 bash[20728]: cluster 2026-03-09T16:05:48.170091+0000 mon.a (mon.0) 3028 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:05:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:48 vm01 bash[20728]: cluster 2026-03-09T16:05:48.170091+0000 mon.a (mon.0) 3028 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:05:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:48 vm01 bash[20728]: audit 2026-03-09T16:05:48.264725+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:05:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:48 vm01 bash[20728]: audit 2026-03-09T16:05:48.264725+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:05:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:48 vm01 bash[20728]: audit 2026-03-09T16:05:48.273756+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:48 vm01 bash[20728]: audit 2026-03-09T16:05:48.273756+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:48 vm01 bash[20728]: cluster 2026-03-09T16:05:48.276616+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T16:05:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:48 vm01 bash[20728]: cluster 2026-03-09T16:05:48.276616+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-09T16:05:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:48 vm01 bash[20728]: audit 2026-03-09T16:05:48.280384+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:48 vm01 bash[20728]: audit 2026-03-09T16:05:48.280384+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T16:05:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:49 vm09 bash[22983]: cluster 2026-03-09T16:05:48.809547+0000 mgr.y (mgr.14520) 468 : cluster [DBG] pgmap v762: 292 pgs: 9 creating+activating, 283 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:05:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:49 vm09 bash[22983]: cluster 2026-03-09T16:05:48.809547+0000 mgr.y (mgr.14520) 468 : cluster [DBG] pgmap v762: 292 pgs: 9 creating+activating, 283 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:05:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:49 vm09 bash[22983]: audit 2026-03-09T16:05:49.268930+0000 mon.a (mon.0) 3032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T16:05:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:49 vm09 bash[22983]: audit 2026-03-09T16:05:49.268930+0000 mon.a (mon.0) 3032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T16:05:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:49 vm09 bash[22983]: audit 2026-03-09T16:05:49.280835+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:49 vm09 bash[22983]: audit 2026-03-09T16:05:49.280835+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:49 vm09 bash[22983]: cluster 2026-03-09T16:05:49.281621+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T16:05:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:49 vm09 bash[22983]: cluster 2026-03-09T16:05:49.281621+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T16:05:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:49 vm09 bash[22983]: audit 2026-03-09T16:05:49.282371+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:50.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:49 vm09 bash[22983]: audit 2026-03-09T16:05:49.282371+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:49 vm01 bash[28152]: cluster 2026-03-09T16:05:48.809547+0000 mgr.y (mgr.14520) 468 : cluster [DBG] pgmap v762: 292 pgs: 9 creating+activating, 283 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:05:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:49 vm01 bash[28152]: cluster 2026-03-09T16:05:48.809547+0000 mgr.y (mgr.14520) 468 : cluster [DBG] pgmap v762: 292 pgs: 9 creating+activating, 283 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:05:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:49 vm01 bash[28152]: audit 2026-03-09T16:05:49.268930+0000 mon.a (mon.0) 3032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T16:05:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:49 vm01 bash[28152]: audit 2026-03-09T16:05:49.268930+0000 mon.a (mon.0) 3032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T16:05:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:49 vm01 bash[28152]: audit 2026-03-09T16:05:49.280835+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:49 vm01 bash[28152]: audit 2026-03-09T16:05:49.280835+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:49 vm01 bash[28152]: cluster 2026-03-09T16:05:49.281621+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:49 vm01 bash[28152]: cluster 2026-03-09T16:05:49.281621+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:49 vm01 bash[28152]: audit 2026-03-09T16:05:49.282371+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:49 vm01 bash[28152]: audit 2026-03-09T16:05:49.282371+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:49 vm01 bash[20728]: cluster 2026-03-09T16:05:48.809547+0000 mgr.y (mgr.14520) 468 : cluster [DBG] pgmap v762: 292 pgs: 9 creating+activating, 283 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:49 vm01 bash[20728]: cluster 2026-03-09T16:05:48.809547+0000 mgr.y (mgr.14520) 468 : cluster [DBG] pgmap v762: 292 pgs: 9 creating+activating, 283 active+clean; 8.3 MiB data, 966 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:49 vm01 bash[20728]: audit 2026-03-09T16:05:49.268930+0000 mon.a (mon.0) 3032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:49 vm01 bash[20728]: audit 2026-03-09T16:05:49.268930+0000 mon.a (mon.0) 3032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:49 vm01 bash[20728]: audit 2026-03-09T16:05:49.280835+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:49 vm01 bash[20728]: audit 2026-03-09T16:05:49.280835+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:49 vm01 bash[20728]: cluster 2026-03-09T16:05:49.281621+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:49 vm01 bash[20728]: cluster 2026-03-09T16:05:49.281621+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:49 vm01 bash[20728]: audit 2026-03-09T16:05:49.282371+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:50.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:49 vm01 bash[20728]: audit 2026-03-09T16:05:49.282371+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:05:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:51 vm09 bash[22983]: audit 2026-03-09T16:05:50.272085+0000 mon.a (mon.0) 3035 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:05:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:51 vm09 bash[22983]: audit 2026-03-09T16:05:50.272085+0000 mon.a (mon.0) 3035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:05:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:51 vm09 bash[22983]: cluster 2026-03-09T16:05:50.275429+0000 mon.a (mon.0) 3036 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T16:05:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:51 vm09 bash[22983]: cluster 2026-03-09T16:05:50.275429+0000 mon.a (mon.0) 3036 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T16:05:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:51 vm09 bash[22983]: audit 2026-03-09T16:05:50.276295+0000 mon.c (mon.2) 485 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:51 vm09 bash[22983]: audit 2026-03-09T16:05:50.276295+0000 mon.c (mon.2) 485 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:51 vm09 bash[22983]: audit 2026-03-09T16:05:50.276636+0000 mon.a (mon.0) 3037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:51 vm09 bash[22983]: audit 2026-03-09T16:05:50.276636+0000 mon.a (mon.0) 3037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:51 vm09 bash[22983]: cluster 2026-03-09T16:05:50.809905+0000 mgr.y (mgr.14520) 469 : cluster [DBG] pgmap v765: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:51 vm09 bash[22983]: cluster 2026-03-09T16:05:50.809905+0000 mgr.y (mgr.14520) 469 : cluster [DBG] pgmap v765: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:51 vm01 bash[28152]: audit 2026-03-09T16:05:50.272085+0000 mon.a (mon.0) 3035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:05:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:51 vm01 bash[28152]: audit 2026-03-09T16:05:50.272085+0000 mon.a (mon.0) 3035 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:05:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:51 vm01 bash[28152]: cluster 2026-03-09T16:05:50.275429+0000 mon.a (mon.0) 3036 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T16:05:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:51 vm01 bash[28152]: cluster 2026-03-09T16:05:50.275429+0000 mon.a (mon.0) 3036 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T16:05:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:51 vm01 bash[28152]: audit 2026-03-09T16:05:50.276295+0000 mon.c (mon.2) 485 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:51 vm01 bash[28152]: audit 2026-03-09T16:05:50.276295+0000 mon.c (mon.2) 485 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:51 vm01 bash[28152]: audit 2026-03-09T16:05:50.276636+0000 mon.a (mon.0) 3037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:51 vm01 bash[28152]: audit 2026-03-09T16:05:50.276636+0000 mon.a (mon.0) 3037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:51 vm01 bash[28152]: cluster 2026-03-09T16:05:50.809905+0000 mgr.y (mgr.14520) 469 : cluster [DBG] pgmap v765: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:51 vm01 bash[28152]: cluster 2026-03-09T16:05:50.809905+0000 mgr.y (mgr.14520) 469 : cluster [DBG] pgmap v765: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:51 vm01 bash[20728]: audit 2026-03-09T16:05:50.272085+0000 mon.a (mon.0) 3035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:05:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:51 vm01 bash[20728]: audit 2026-03-09T16:05:50.272085+0000 mon.a (mon.0) 3035 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:05:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:51 vm01 bash[20728]: cluster 2026-03-09T16:05:50.275429+0000 mon.a (mon.0) 3036 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T16:05:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:51 vm01 bash[20728]: cluster 2026-03-09T16:05:50.275429+0000 mon.a (mon.0) 3036 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-09T16:05:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:51 vm01 bash[20728]: audit 2026-03-09T16:05:50.276295+0000 mon.c (mon.2) 485 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:51 vm01 bash[20728]: audit 2026-03-09T16:05:50.276295+0000 mon.c (mon.2) 485 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:51 vm01 bash[20728]: audit 2026-03-09T16:05:50.276636+0000 mon.a (mon.0) 3037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:51 vm01 bash[20728]: audit 2026-03-09T16:05:50.276636+0000 mon.a (mon.0) 3037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T16:05:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:51 vm01 bash[20728]: cluster 2026-03-09T16:05:50.809905+0000 mgr.y (mgr.14520) 469 : cluster [DBG] pgmap v765: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:51 vm01 bash[20728]: cluster 2026-03-09T16:05:50.809905+0000 mgr.y (mgr.14520) 469 : cluster [DBG] pgmap v765: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:05:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:52 vm09 bash[22983]: audit 2026-03-09T16:05:51.276103+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T16:05:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:52 vm09 bash[22983]: audit 2026-03-09T16:05:51.276103+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T16:05:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:52 vm09 bash[22983]: cluster 2026-03-09T16:05:51.279524+0000 mon.a (mon.0) 3039 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T16:05:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:52 vm09 bash[22983]: cluster 2026-03-09T16:05:51.279524+0000 mon.a (mon.0) 3039 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T16:05:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:52 vm09 bash[22983]: audit 2026-03-09T16:05:51.334312+0000 mon.c (mon.2) 486 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:52 vm09 bash[22983]: audit 2026-03-09T16:05:51.334312+0000 mon.c (mon.2) 486 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:52 vm09 bash[22983]: audit 2026-03-09T16:05:51.334620+0000 mon.a (mon.0) 3040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:52.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:52 vm09 bash[22983]: audit 2026-03-09T16:05:51.334620+0000 mon.a (mon.0) 3040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:52 vm01 bash[28152]: audit 2026-03-09T16:05:51.276103+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T16:05:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:52 vm01 bash[28152]: audit 2026-03-09T16:05:51.276103+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T16:05:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:52 vm01 bash[28152]: cluster 2026-03-09T16:05:51.279524+0000 mon.a (mon.0) 3039 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T16:05:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:52 vm01 bash[28152]: cluster 2026-03-09T16:05:51.279524+0000 mon.a (mon.0) 3039 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T16:05:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:52 vm01 bash[28152]: audit 2026-03-09T16:05:51.334312+0000 mon.c (mon.2) 486 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:52 vm01 bash[28152]: audit 2026-03-09T16:05:51.334312+0000 mon.c (mon.2) 486 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:52 vm01 bash[28152]: audit 2026-03-09T16:05:51.334620+0000 mon.a (mon.0) 3040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:52 vm01 bash[28152]: audit 2026-03-09T16:05:51.334620+0000 mon.a (mon.0) 3040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:52 vm01 bash[20728]: audit 2026-03-09T16:05:51.276103+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T16:05:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:52 vm01 bash[20728]: audit 2026-03-09T16:05:51.276103+0000 mon.a (mon.0) 3038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T16:05:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:52 vm01 bash[20728]: cluster 2026-03-09T16:05:51.279524+0000 mon.a (mon.0) 3039 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T16:05:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:52 vm01 bash[20728]: cluster 2026-03-09T16:05:51.279524+0000 mon.a (mon.0) 3039 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-09T16:05:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:52 vm01 bash[20728]: audit 2026-03-09T16:05:51.334312+0000 mon.c (mon.2) 486 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:52 vm01 bash[20728]: audit 2026-03-09T16:05:51.334312+0000 mon.c (mon.2) 486 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:52 vm01 bash[20728]: audit 2026-03-09T16:05:51.334620+0000 mon.a (mon.0) 3040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:52 vm01 bash[20728]: audit 2026-03-09T16:05:51.334620+0000 mon.a (mon.0) 3040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:05:53.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:05:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:05:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:05:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:53 vm09 bash[22983]: audit 2026-03-09T16:05:52.302899+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:05:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:53 vm09 bash[22983]: audit 2026-03-09T16:05:52.302899+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:05:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:53 vm09 bash[22983]: cluster 2026-03-09T16:05:52.308022+0000 mon.a (mon.0) 3042 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T16:05:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:53 vm09 bash[22983]: cluster 2026-03-09T16:05:52.308022+0000 mon.a (mon.0) 3042 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T16:05:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:53 vm09 bash[22983]: audit 2026-03-09T16:05:52.311883+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:53 vm09 bash[22983]: audit 2026-03-09T16:05:52.311883+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:53 vm09 bash[22983]: audit 2026-03-09T16:05:52.314014+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:53 vm09 bash[22983]: audit 2026-03-09T16:05:52.314014+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:53 vm09 bash[22983]: cluster 2026-03-09T16:05:52.810218+0000 mgr.y (mgr.14520) 470 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:53.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:53 vm09 bash[22983]: cluster 2026-03-09T16:05:52.810218+0000 mgr.y (mgr.14520) 470 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:53 vm01 bash[28152]: audit 2026-03-09T16:05:52.302899+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:53 vm01 bash[28152]: audit 2026-03-09T16:05:52.302899+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:53 vm01 bash[28152]: cluster 2026-03-09T16:05:52.308022+0000 mon.a (mon.0) 3042 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:53 vm01 bash[28152]: cluster 2026-03-09T16:05:52.308022+0000 mon.a (mon.0) 3042 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:53 vm01 bash[28152]: audit 2026-03-09T16:05:52.311883+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:53 vm01 bash[28152]: audit 2026-03-09T16:05:52.311883+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:53 vm01 bash[28152]: audit 2026-03-09T16:05:52.314014+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:53 vm01 bash[28152]: audit 2026-03-09T16:05:52.314014+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:53 vm01 bash[28152]: cluster 2026-03-09T16:05:52.810218+0000 mgr.y (mgr.14520) 470 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:53 vm01 bash[28152]: cluster 2026-03-09T16:05:52.810218+0000 mgr.y (mgr.14520) 470 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:53 vm01 bash[20728]: audit 2026-03-09T16:05:52.302899+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:53 vm01 bash[20728]: audit 2026-03-09T16:05:52.302899+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:53 vm01 bash[20728]: cluster 2026-03-09T16:05:52.308022+0000 mon.a (mon.0) 3042 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:53 vm01 bash[20728]: cluster 2026-03-09T16:05:52.308022+0000 mon.a (mon.0) 3042 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:53 vm01 bash[20728]: audit 2026-03-09T16:05:52.311883+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:53 vm01 bash[20728]: audit 2026-03-09T16:05:52.311883+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:53 vm01 bash[20728]: audit 2026-03-09T16:05:52.314014+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:53 vm01 bash[20728]: audit 2026-03-09T16:05:52.314014+0000 mon.a (mon.0) 3043 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]: dispatch 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:53 vm01 bash[20728]: cluster 2026-03-09T16:05:52.810218+0000 mgr.y (mgr.14520) 470 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:53 vm01 bash[20728]: cluster 2026-03-09T16:05:52.810218+0000 mgr.y (mgr.14520) 470 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 967 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:54 vm09 bash[22983]: audit 2026-03-09T16:05:53.337539+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:54 vm09 bash[22983]: audit 2026-03-09T16:05:53.337539+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:54 vm09 bash[22983]: cluster 2026-03-09T16:05:53.342471+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T16:05:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:54 vm09 bash[22983]: cluster 2026-03-09T16:05:53.342471+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T16:05:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:54 vm01 bash[28152]: audit 2026-03-09T16:05:53.337539+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:54 vm01 bash[28152]: audit 2026-03-09T16:05:53.337539+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:54 vm01 bash[28152]: cluster 2026-03-09T16:05:53.342471+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T16:05:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:54 vm01 bash[28152]: cluster 2026-03-09T16:05:53.342471+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T16:05:54.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:54 vm01 bash[20728]: audit 2026-03-09T16:05:53.337539+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:54.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:54 vm01 bash[20728]: audit 2026-03-09T16:05:53.337539+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-98"}]': finished 2026-03-09T16:05:54.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:54 vm01 bash[20728]: cluster 2026-03-09T16:05:53.342471+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T16:05:54.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:54 vm01 bash[20728]: cluster 2026-03-09T16:05:53.342471+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-09T16:05:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:55 vm01 bash[28152]: cluster 2026-03-09T16:05:54.375677+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T16:05:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:55 vm01 bash[28152]: cluster 2026-03-09T16:05:54.375677+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T16:05:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:55 vm01 bash[28152]: cluster 2026-03-09T16:05:54.810575+0000 mgr.y (mgr.14520) 471 : cluster [DBG] pgmap v771: 260 pgs: 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:55.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:55 vm01 bash[28152]: cluster 2026-03-09T16:05:54.810575+0000 mgr.y (mgr.14520) 471 : cluster [DBG] pgmap v771: 260 pgs: 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:55 vm01 bash[20728]: cluster 2026-03-09T16:05:54.375677+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T16:05:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:55 vm01 bash[20728]: cluster 2026-03-09T16:05:54.375677+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T16:05:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:55 vm01 bash[20728]: cluster 2026-03-09T16:05:54.810575+0000 mgr.y (mgr.14520) 471 : cluster [DBG] pgmap v771: 260 pgs: 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:55 vm01 bash[20728]: cluster 2026-03-09T16:05:54.810575+0000 mgr.y (mgr.14520) 471 : cluster [DBG] pgmap v771: 260 pgs: 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:55 vm09 bash[22983]: cluster 2026-03-09T16:05:54.375677+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T16:05:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:55 vm09 bash[22983]: cluster 2026-03-09T16:05:54.375677+0000 mon.a (mon.0) 3046 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-09T16:05:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:55 vm09 bash[22983]: cluster 2026-03-09T16:05:54.810575+0000 mgr.y (mgr.14520) 471 : cluster [DBG] pgmap v771: 260 pgs: 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:55 vm09 bash[22983]: cluster 2026-03-09T16:05:54.810575+0000 mgr.y (mgr.14520) 471 : cluster [DBG] pgmap v771: 260 pgs: 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:56.676 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:56 vm01 bash[28152]: cluster 2026-03-09T16:05:55.411074+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-09T16:05:56.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:56 vm01 bash[28152]: cluster 2026-03-09T16:05:55.411074+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-09T16:05:56.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:56 vm01 bash[28152]: audit 2026-03-09T16:05:55.412040+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:56.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:56 vm01 bash[28152]: audit 2026-03-09T16:05:55.412040+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:56.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:56 vm01 bash[28152]: audit 2026-03-09T16:05:55.419306+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:56.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:56 vm01 bash[28152]: audit 2026-03-09T16:05:55.419306+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:56 vm01 bash[20728]: cluster 2026-03-09T16:05:55.411074+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-09T16:05:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:56 vm01 bash[20728]: cluster 2026-03-09T16:05:55.411074+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-09T16:05:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:56 vm01 bash[20728]: audit 2026-03-09T16:05:55.412040+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:56 vm01 bash[20728]: audit 2026-03-09T16:05:55.412040+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:56 vm01 bash[20728]: audit 2026-03-09T16:05:55.419306+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:56.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:56 vm01 bash[20728]: audit 2026-03-09T16:05:55.419306+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:56.679 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:56 vm09 bash[22983]: cluster 2026-03-09T16:05:55.411074+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-09T16:05:56.679 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:56 vm09 bash[22983]: cluster 2026-03-09T16:05:55.411074+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-09T16:05:56.679 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:56 vm09 bash[22983]: audit 2026-03-09T16:05:55.412040+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:56.679 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:56 vm09 bash[22983]: audit 2026-03-09T16:05:55.412040+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:56.680 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:56 vm09 bash[22983]: audit 2026-03-09T16:05:55.419306+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:56.680 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:56 vm09 bash[22983]: audit 2026-03-09T16:05:55.419306+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:05:57.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:05:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: audit 2026-03-09T16:05:56.421622+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: audit 2026-03-09T16:05:56.421622+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: cluster 2026-03-09T16:05:56.438299+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: cluster 2026-03-09T16:05:56.438299+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: audit 2026-03-09T16:05:56.439468+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: audit 2026-03-09T16:05:56.439468+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: audit 2026-03-09T16:05:56.454927+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: audit 2026-03-09T16:05:56.454927+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: audit 2026-03-09T16:05:56.683137+0000 mgr.y (mgr.14520) 472 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: audit 2026-03-09T16:05:56.683137+0000 mgr.y (mgr.14520) 472 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: cluster 2026-03-09T16:05:56.810987+0000 mgr.y (mgr.14520) 473 : cluster [DBG] pgmap v774: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:57 vm09 bash[22983]: cluster 2026-03-09T16:05:56.810987+0000 mgr.y (mgr.14520) 473 : cluster [DBG] pgmap v774: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: audit 2026-03-09T16:05:56.421622+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: audit 2026-03-09T16:05:56.421622+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: cluster 2026-03-09T16:05:56.438299+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: cluster 2026-03-09T16:05:56.438299+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: audit 2026-03-09T16:05:56.439468+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: audit 2026-03-09T16:05:56.439468+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: audit 2026-03-09T16:05:56.454927+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: audit 2026-03-09T16:05:56.454927+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: audit 2026-03-09T16:05:56.683137+0000 mgr.y (mgr.14520) 472 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: audit 2026-03-09T16:05:56.683137+0000 mgr.y (mgr.14520) 472 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: cluster 2026-03-09T16:05:56.810987+0000 mgr.y (mgr.14520) 473 : cluster [DBG] pgmap v774: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:57 vm01 bash[28152]: cluster 2026-03-09T16:05:56.810987+0000 mgr.y (mgr.14520) 473 : cluster [DBG] pgmap v774: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: audit 2026-03-09T16:05:56.421622+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: audit 2026-03-09T16:05:56.421622+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: cluster 2026-03-09T16:05:56.438299+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: cluster 2026-03-09T16:05:56.438299+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: audit 2026-03-09T16:05:56.439468+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: audit 2026-03-09T16:05:56.439468+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: audit 2026-03-09T16:05:56.454927+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: audit 2026-03-09T16:05:56.454927+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: audit 2026-03-09T16:05:56.683137+0000 mgr.y (mgr.14520) 472 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: audit 2026-03-09T16:05:56.683137+0000 mgr.y (mgr.14520) 472 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: cluster 2026-03-09T16:05:56.810987+0000 mgr.y (mgr.14520) 473 : cluster [DBG] pgmap v774: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:57 vm01 bash[20728]: cluster 2026-03-09T16:05:56.810987+0000 mgr.y (mgr.14520) 473 : cluster [DBG] pgmap v774: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:05:58.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:58 vm09 bash[22983]: audit 2026-03-09T16:05:57.424615+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:58 vm09 bash[22983]: audit 2026-03-09T16:05:57.424615+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:58 vm09 bash[22983]: cluster 2026-03-09T16:05:57.430887+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T16:05:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:58 vm09 bash[22983]: cluster 2026-03-09T16:05:57.430887+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T16:05:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:58 vm09 bash[22983]: audit 2026-03-09T16:05:57.438254+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:58 vm09 bash[22983]: audit 2026-03-09T16:05:57.438254+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:58 vm09 bash[22983]: audit 2026-03-09T16:05:57.451414+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:58.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:58 vm09 bash[22983]: audit 2026-03-09T16:05:57.451414+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:58 vm01 bash[28152]: audit 2026-03-09T16:05:57.424615+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:58 vm01 bash[28152]: audit 2026-03-09T16:05:57.424615+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:58 vm01 bash[28152]: cluster 2026-03-09T16:05:57.430887+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:58 vm01 bash[28152]: cluster 2026-03-09T16:05:57.430887+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:58 vm01 bash[28152]: audit 2026-03-09T16:05:57.438254+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:58 vm01 bash[28152]: audit 2026-03-09T16:05:57.438254+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:58 vm01 bash[28152]: audit 2026-03-09T16:05:57.451414+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:58 vm01 bash[28152]: audit 2026-03-09T16:05:57.451414+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:58 vm01 bash[20728]: audit 2026-03-09T16:05:57.424615+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:58 vm01 bash[20728]: audit 2026-03-09T16:05:57.424615+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:58 vm01 bash[20728]: cluster 2026-03-09T16:05:57.430887+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:58 vm01 bash[20728]: cluster 2026-03-09T16:05:57.430887+0000 mon.a (mon.0) 3053 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:58 vm01 bash[20728]: audit 2026-03-09T16:05:57.438254+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:58 vm01 bash[20728]: audit 2026-03-09T16:05:57.438254+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:58 vm01 bash[20728]: audit 2026-03-09T16:05:57.451414+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:58 vm01 bash[20728]: audit 2026-03-09T16:05:57.451414+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:05:59.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: audit 2026-03-09T16:05:58.442024+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:05:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: audit 2026-03-09T16:05:58.442024+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:05:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: audit 2026-03-09T16:05:58.445680+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: audit 2026-03-09T16:05:58.445680+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: cluster 2026-03-09T16:05:58.451149+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T16:05:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: cluster 2026-03-09T16:05:58.451149+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T16:05:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: audit 2026-03-09T16:05:58.452547+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: audit 2026-03-09T16:05:58.452547+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: cluster 2026-03-09T16:05:58.811597+0000 mgr.y (mgr.14520) 474 : cluster [DBG] pgmap v777: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: cluster 2026-03-09T16:05:58.811597+0000 mgr.y (mgr.14520) 474 : cluster [DBG] pgmap v777: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: audit 2026-03-09T16:05:59.360943+0000 mon.a (mon.0) 3058 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:05:59 vm09 bash[22983]: audit 2026-03-09T16:05:59.360943+0000 mon.a (mon.0) 3058 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:59.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: audit 2026-03-09T16:05:58.442024+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: audit 2026-03-09T16:05:58.442024+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: audit 2026-03-09T16:05:58.445680+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: audit 2026-03-09T16:05:58.445680+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: cluster 2026-03-09T16:05:58.451149+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: cluster 2026-03-09T16:05:58.451149+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: audit 2026-03-09T16:05:58.452547+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: audit 2026-03-09T16:05:58.452547+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: cluster 2026-03-09T16:05:58.811597+0000 mgr.y (mgr.14520) 474 : cluster [DBG] pgmap v777: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: cluster 2026-03-09T16:05:58.811597+0000 mgr.y (mgr.14520) 474 : cluster [DBG] pgmap v777: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: audit 2026-03-09T16:05:59.360943+0000 mon.a (mon.0) 3058 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:05:59 vm01 bash[28152]: audit 2026-03-09T16:05:59.360943+0000 mon.a (mon.0) 3058 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: audit 2026-03-09T16:05:58.442024+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: audit 2026-03-09T16:05:58.442024+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-6", "overlaypool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: audit 2026-03-09T16:05:58.445680+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: audit 2026-03-09T16:05:58.445680+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: cluster 2026-03-09T16:05:58.451149+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: cluster 2026-03-09T16:05:58.451149+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: audit 2026-03-09T16:05:58.452547+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: audit 2026-03-09T16:05:58.452547+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]: dispatch 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: cluster 2026-03-09T16:05:58.811597+0000 mgr.y (mgr.14520) 474 : cluster [DBG] pgmap v777: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: cluster 2026-03-09T16:05:58.811597+0000 mgr.y (mgr.14520) 474 : cluster [DBG] pgmap v777: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: audit 2026-03-09T16:05:59.360943+0000 mon.a (mon.0) 3058 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:05:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:05:59 vm01 bash[20728]: audit 2026-03-09T16:05:59.360943+0000 mon.a (mon.0) 3058 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: cluster 2026-03-09T16:05:59.442047+0000 mon.a (mon.0) 3059 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: cluster 2026-03-09T16:05:59.442047+0000 mon.a (mon.0) 3059 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:05:59.536307+0000 mon.a (mon.0) 3060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]': finished 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:05:59.536307+0000 mon.a (mon.0) 3060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]': finished 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: cluster 2026-03-09T16:05:59.541506+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: cluster 2026-03-09T16:05:59.541506+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:05:59.543731+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:05:59.543731+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:05:59.569544+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:05:59.569544+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:06:00.553489+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:06:00.553489+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: cluster 2026-03-09T16:06:00.567154+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: cluster 2026-03-09T16:06:00.567154+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:06:00.568959+0000 mon.c (mon.2) 493 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:06:00.568959+0000 mon.c (mon.2) 493 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:06:00.569528+0000 mon.a (mon.0) 3065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:00 vm09 bash[22983]: audit 2026-03-09T16:06:00.569528+0000 mon.a (mon.0) 3065 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: cluster 2026-03-09T16:05:59.442047+0000 mon.a (mon.0) 3059 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: cluster 2026-03-09T16:05:59.442047+0000 mon.a (mon.0) 3059 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:05:59.536307+0000 mon.a (mon.0) 3060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]': finished 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:05:59.536307+0000 mon.a (mon.0) 3060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]': finished 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: cluster 2026-03-09T16:05:59.541506+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: cluster 2026-03-09T16:05:59.541506+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:05:59.543731+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:05:59.543731+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:05:59.569544+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:05:59.569544+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:06:00.553489+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:06:00.553489+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: cluster 2026-03-09T16:06:00.567154+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: cluster 2026-03-09T16:06:00.567154+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:06:00.568959+0000 mon.c (mon.2) 493 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:06:00.568959+0000 mon.c (mon.2) 493 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:06:00.569528+0000 mon.a (mon.0) 3065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:00 vm01 bash[28152]: audit 2026-03-09T16:06:00.569528+0000 mon.a (mon.0) 3065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: cluster 2026-03-09T16:05:59.442047+0000 mon.a (mon.0) 3059 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: cluster 2026-03-09T16:05:59.442047+0000 mon.a (mon.0) 3059 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:05:59.536307+0000 mon.a (mon.0) 3060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]': finished 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:05:59.536307+0000 mon.a (mon.0) 3060 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-100", "mode": "writeback"}]': finished 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: cluster 2026-03-09T16:05:59.541506+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: cluster 2026-03-09T16:05:59.541506+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:05:59.543731+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:05:59.543731+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:05:59.569544+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:05:59.569544+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:06:00.553489+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:06:00.553489+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: cluster 2026-03-09T16:06:00.567154+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: cluster 2026-03-09T16:06:00.567154+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:06:00.568959+0000 mon.c (mon.2) 493 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:06:00.568959+0000 mon.c (mon.2) 493 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:06:00.569528+0000 mon.a (mon.0) 3065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:00 vm01 bash[20728]: audit 2026-03-09T16:06:00.569528+0000 mon.a (mon.0) 3065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:01.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:01 vm09 bash[22983]: cluster 2026-03-09T16:06:00.811915+0000 mgr.y (mgr.14520) 475 : cluster [DBG] pgmap v780: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:01 vm09 bash[22983]: cluster 2026-03-09T16:06:00.811915+0000 mgr.y (mgr.14520) 475 : cluster [DBG] pgmap v780: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:01 vm09 bash[22983]: audit 2026-03-09T16:06:01.576015+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:01 vm09 bash[22983]: audit 2026-03-09T16:06:01.576015+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:01 vm09 bash[22983]: audit 2026-03-09T16:06:01.587256+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:01 vm09 bash[22983]: audit 2026-03-09T16:06:01.587256+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:01 vm09 bash[22983]: cluster 2026-03-09T16:06:01.587640+0000 mon.a (mon.0) 3067 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T16:06:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:01 vm09 bash[22983]: cluster 2026-03-09T16:06:01.587640+0000 mon.a (mon.0) 3067 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T16:06:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:01 vm09 bash[22983]: audit 2026-03-09T16:06:01.588378+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:01.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:01 vm09 bash[22983]: audit 2026-03-09T16:06:01.588378+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:01 vm01 bash[28152]: cluster 2026-03-09T16:06:00.811915+0000 mgr.y (mgr.14520) 475 : cluster [DBG] pgmap v780: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:01 vm01 bash[28152]: cluster 2026-03-09T16:06:00.811915+0000 mgr.y (mgr.14520) 475 : cluster [DBG] pgmap v780: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:01 vm01 bash[28152]: audit 2026-03-09T16:06:01.576015+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:01 vm01 bash[28152]: audit 2026-03-09T16:06:01.576015+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:01 vm01 bash[28152]: audit 2026-03-09T16:06:01.587256+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:01 vm01 bash[28152]: audit 2026-03-09T16:06:01.587256+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:01 vm01 bash[28152]: cluster 2026-03-09T16:06:01.587640+0000 mon.a (mon.0) 3067 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:01 vm01 bash[28152]: cluster 2026-03-09T16:06:01.587640+0000 mon.a (mon.0) 3067 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:01 vm01 bash[28152]: audit 2026-03-09T16:06:01.588378+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:01 vm01 bash[28152]: audit 2026-03-09T16:06:01.588378+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:01 vm01 bash[20728]: cluster 2026-03-09T16:06:00.811915+0000 mgr.y (mgr.14520) 475 : cluster [DBG] pgmap v780: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:01 vm01 bash[20728]: cluster 2026-03-09T16:06:00.811915+0000 mgr.y (mgr.14520) 475 : cluster [DBG] pgmap v780: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:01 vm01 bash[20728]: audit 2026-03-09T16:06:01.576015+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:01 vm01 bash[20728]: audit 2026-03-09T16:06:01.576015+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:01 vm01 bash[20728]: audit 2026-03-09T16:06:01.587256+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:01 vm01 bash[20728]: audit 2026-03-09T16:06:01.587256+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:01 vm01 bash[20728]: cluster 2026-03-09T16:06:01.587640+0000 mon.a (mon.0) 3067 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:01 vm01 bash[20728]: cluster 2026-03-09T16:06:01.587640+0000 mon.a (mon.0) 3067 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-09T16:06:01.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:01 vm01 bash[20728]: audit 2026-03-09T16:06:01.588378+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:01.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:01 vm01 bash[20728]: audit 2026-03-09T16:06:01.588378+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:02.877 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:02 vm01 bash[28152]: cluster 2026-03-09T16:06:02.576068+0000 mon.a (mon.0) 3069 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:02 vm01 bash[28152]: cluster 2026-03-09T16:06:02.576068+0000 mon.a (mon.0) 3069 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:02 vm01 bash[28152]: audit 2026-03-09T16:06:02.578948+0000 mon.a (mon.0) 3070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:02 vm01 bash[28152]: audit 2026-03-09T16:06:02.578948+0000 mon.a (mon.0) 3070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:02 vm01 bash[28152]: audit 2026-03-09T16:06:02.586003+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:02 vm01 bash[28152]: audit 2026-03-09T16:06:02.586003+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:02 vm01 bash[28152]: cluster 2026-03-09T16:06:02.586111+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:02 vm01 bash[28152]: cluster 2026-03-09T16:06:02.586111+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:02 vm01 bash[28152]: audit 2026-03-09T16:06:02.586976+0000 mon.a (mon.0) 3072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:02 vm01 bash[28152]: audit 2026-03-09T16:06:02.586976+0000 mon.a (mon.0) 3072 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:02 vm01 bash[20728]: cluster 2026-03-09T16:06:02.576068+0000 mon.a (mon.0) 3069 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:02 vm01 bash[20728]: cluster 2026-03-09T16:06:02.576068+0000 mon.a (mon.0) 3069 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:02 vm01 bash[20728]: audit 2026-03-09T16:06:02.578948+0000 mon.a (mon.0) 3070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:02 vm01 bash[20728]: audit 2026-03-09T16:06:02.578948+0000 mon.a (mon.0) 3070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:02 vm01 bash[20728]: audit 2026-03-09T16:06:02.586003+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:02 vm01 bash[20728]: audit 2026-03-09T16:06:02.586003+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:02 vm01 bash[20728]: cluster 2026-03-09T16:06:02.586111+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:02 vm01 bash[20728]: cluster 2026-03-09T16:06:02.586111+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:02 vm01 bash[20728]: audit 2026-03-09T16:06:02.586976+0000 mon.a (mon.0) 3072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:02.878 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:02 vm01 bash[20728]: audit 2026-03-09T16:06:02.586976+0000 mon.a (mon.0) 3072 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:02.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:02 vm09 bash[22983]: cluster 2026-03-09T16:06:02.576068+0000 mon.a (mon.0) 3069 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:02 vm09 bash[22983]: cluster 2026-03-09T16:06:02.576068+0000 mon.a (mon.0) 3069 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:02 vm09 bash[22983]: audit 2026-03-09T16:06:02.578948+0000 mon.a (mon.0) 3070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:02 vm09 bash[22983]: audit 2026-03-09T16:06:02.578948+0000 mon.a (mon.0) 3070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:02 vm09 bash[22983]: audit 2026-03-09T16:06:02.586003+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:02 vm09 bash[22983]: audit 2026-03-09T16:06:02.586003+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:02 vm09 bash[22983]: cluster 2026-03-09T16:06:02.586111+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T16:06:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:02 vm09 bash[22983]: cluster 2026-03-09T16:06:02.586111+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-09T16:06:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:02 vm09 bash[22983]: audit 2026-03-09T16:06:02.586976+0000 mon.a (mon.0) 3072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:02.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:02 vm09 bash[22983]: audit 2026-03-09T16:06:02.586976+0000 mon.a (mon.0) 3072 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T16:06:03.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:06:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:06:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:06:03.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:03 vm09 bash[22983]: cluster 2026-03-09T16:06:02.812177+0000 mgr.y (mgr.14520) 476 : cluster [DBG] pgmap v783: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:03 vm09 bash[22983]: cluster 2026-03-09T16:06:02.812177+0000 mgr.y (mgr.14520) 476 : cluster [DBG] pgmap v783: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:03 vm09 bash[22983]: audit 2026-03-09T16:06:03.583659+0000 mon.a (mon.0) 3073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T16:06:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:03 vm09 bash[22983]: audit 2026-03-09T16:06:03.583659+0000 mon.a (mon.0) 3073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T16:06:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:03 vm09 bash[22983]: cluster 2026-03-09T16:06:03.586209+0000 mon.a (mon.0) 3074 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T16:06:03.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:03 vm09 bash[22983]: cluster 2026-03-09T16:06:03.586209+0000 mon.a (mon.0) 3074 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T16:06:03.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:03 vm01 bash[28152]: cluster 2026-03-09T16:06:02.812177+0000 mgr.y (mgr.14520) 476 : cluster [DBG] pgmap v783: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:03.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:03 vm01 bash[28152]: cluster 2026-03-09T16:06:02.812177+0000 mgr.y (mgr.14520) 476 : cluster [DBG] pgmap v783: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:03.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:03 vm01 bash[28152]: audit 2026-03-09T16:06:03.583659+0000 mon.a (mon.0) 3073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T16:06:03.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:03 vm01 bash[28152]: audit 2026-03-09T16:06:03.583659+0000 mon.a (mon.0) 3073 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T16:06:03.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:03 vm01 bash[28152]: cluster 2026-03-09T16:06:03.586209+0000 mon.a (mon.0) 3074 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T16:06:03.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:03 vm01 bash[28152]: cluster 2026-03-09T16:06:03.586209+0000 mon.a (mon.0) 3074 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T16:06:03.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:03 vm01 bash[20728]: cluster 2026-03-09T16:06:02.812177+0000 mgr.y (mgr.14520) 476 : cluster [DBG] pgmap v783: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:03.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:03 vm01 bash[20728]: cluster 2026-03-09T16:06:02.812177+0000 mgr.y (mgr.14520) 476 : cluster [DBG] pgmap v783: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:03.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:03 vm01 bash[20728]: audit 2026-03-09T16:06:03.583659+0000 mon.a (mon.0) 3073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T16:06:03.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:03 vm01 bash[20728]: audit 2026-03-09T16:06:03.583659+0000 mon.a (mon.0) 3073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T16:06:03.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:03 vm01 bash[20728]: cluster 2026-03-09T16:06:03.586209+0000 mon.a (mon.0) 3074 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T16:06:03.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:03 vm01 bash[20728]: cluster 2026-03-09T16:06:03.586209+0000 mon.a (mon.0) 3074 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-09T16:06:04.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:04 vm09 bash[22983]: audit 2026-03-09T16:06:03.622695+0000 mon.c (mon.2) 496 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:04 vm09 bash[22983]: audit 2026-03-09T16:06:03.622695+0000 mon.c (mon.2) 496 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:04 vm09 bash[22983]: audit 2026-03-09T16:06:03.622954+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:04.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:04 vm09 bash[22983]: audit 2026-03-09T16:06:03.622954+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:04.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:04 vm01 bash[28152]: audit 2026-03-09T16:06:03.622695+0000 mon.c (mon.2) 496 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:04.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:04 vm01 bash[28152]: audit 2026-03-09T16:06:03.622695+0000 mon.c (mon.2) 496 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:04.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:04 vm01 bash[28152]: audit 2026-03-09T16:06:03.622954+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:04.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:04 vm01 bash[28152]: audit 2026-03-09T16:06:03.622954+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:04.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:04 vm01 bash[20728]: audit 2026-03-09T16:06:03.622695+0000 mon.c (mon.2) 496 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:04.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:04 vm01 bash[20728]: audit 2026-03-09T16:06:03.622695+0000 mon.c (mon.2) 496 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:04.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:04 vm01 bash[20728]: audit 2026-03-09T16:06:03.622954+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:04.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:04 vm01 bash[20728]: audit 2026-03-09T16:06:03.622954+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:05.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:05 vm09 bash[22983]: audit 2026-03-09T16:06:04.616885+0000 mon.a (mon.0) 3076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:06:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:05 vm09 bash[22983]: audit 2026-03-09T16:06:04.616885+0000 mon.a (mon.0) 3076 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:06:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:05 vm09 bash[22983]: cluster 2026-03-09T16:06:04.625509+0000 mon.a (mon.0) 3077 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T16:06:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:05 vm09 bash[22983]: cluster 2026-03-09T16:06:04.625509+0000 mon.a (mon.0) 3077 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T16:06:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:05 vm09 bash[22983]: audit 2026-03-09T16:06:04.625698+0000 mon.c (mon.2) 497 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:05 vm09 bash[22983]: audit 2026-03-09T16:06:04.625698+0000 mon.c (mon.2) 497 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:05 vm09 bash[22983]: audit 2026-03-09T16:06:04.627128+0000 mon.a (mon.0) 3078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:05 vm09 bash[22983]: audit 2026-03-09T16:06:04.627128+0000 mon.a (mon.0) 3078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:05 vm09 bash[22983]: cluster 2026-03-09T16:06:04.812529+0000 mgr.y (mgr.14520) 477 : cluster [DBG] pgmap v786: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:06:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:05 vm09 bash[22983]: cluster 2026-03-09T16:06:04.812529+0000 mgr.y (mgr.14520) 477 : cluster [DBG] pgmap v786: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:05 vm01 bash[28152]: audit 2026-03-09T16:06:04.616885+0000 mon.a (mon.0) 3076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:05 vm01 bash[28152]: audit 2026-03-09T16:06:04.616885+0000 mon.a (mon.0) 3076 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:05 vm01 bash[28152]: cluster 2026-03-09T16:06:04.625509+0000 mon.a (mon.0) 3077 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:05 vm01 bash[28152]: cluster 2026-03-09T16:06:04.625509+0000 mon.a (mon.0) 3077 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:05 vm01 bash[28152]: audit 2026-03-09T16:06:04.625698+0000 mon.c (mon.2) 497 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:05 vm01 bash[28152]: audit 2026-03-09T16:06:04.625698+0000 mon.c (mon.2) 497 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:05 vm01 bash[28152]: audit 2026-03-09T16:06:04.627128+0000 mon.a (mon.0) 3078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:05 vm01 bash[28152]: audit 2026-03-09T16:06:04.627128+0000 mon.a (mon.0) 3078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:05 vm01 bash[28152]: cluster 2026-03-09T16:06:04.812529+0000 mgr.y (mgr.14520) 477 : cluster [DBG] pgmap v786: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:05 vm01 bash[28152]: cluster 2026-03-09T16:06:04.812529+0000 mgr.y (mgr.14520) 477 : cluster [DBG] pgmap v786: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:05 vm01 bash[20728]: audit 2026-03-09T16:06:04.616885+0000 mon.a (mon.0) 3076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:05 vm01 bash[20728]: audit 2026-03-09T16:06:04.616885+0000 mon.a (mon.0) 3076 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]': finished 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:05 vm01 bash[20728]: cluster 2026-03-09T16:06:04.625509+0000 mon.a (mon.0) 3077 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T16:06:05.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:05 vm01 bash[20728]: cluster 2026-03-09T16:06:04.625509+0000 mon.a (mon.0) 3077 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-09T16:06:05.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:05 vm01 bash[20728]: audit 2026-03-09T16:06:04.625698+0000 mon.c (mon.2) 497 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:05 vm01 bash[20728]: audit 2026-03-09T16:06:04.625698+0000 mon.c (mon.2) 497 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:05 vm01 bash[20728]: audit 2026-03-09T16:06:04.627128+0000 mon.a (mon.0) 3078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:05 vm01 bash[20728]: audit 2026-03-09T16:06:04.627128+0000 mon.a (mon.0) 3078 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]: dispatch 2026-03-09T16:06:05.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:05 vm01 bash[20728]: cluster 2026-03-09T16:06:04.812529+0000 mgr.y (mgr.14520) 477 : cluster [DBG] pgmap v786: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:06:05.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:05 vm01 bash[20728]: cluster 2026-03-09T16:06:04.812529+0000 mgr.y (mgr.14520) 477 : cluster [DBG] pgmap v786: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:06:06.883 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:06:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:06:06.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:06 vm09 bash[22983]: audit 2026-03-09T16:06:05.625841+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:06:06.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:06 vm09 bash[22983]: audit 2026-03-09T16:06:05.625841+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:06:06.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:06 vm09 bash[22983]: cluster 2026-03-09T16:06:05.638468+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T16:06:06.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:06 vm09 bash[22983]: cluster 2026-03-09T16:06:05.638468+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T16:06:06.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:06 vm01 bash[28152]: audit 2026-03-09T16:06:05.625841+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:06:06.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:06 vm01 bash[28152]: audit 2026-03-09T16:06:05.625841+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:06:06.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:06 vm01 bash[28152]: cluster 2026-03-09T16:06:05.638468+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T16:06:06.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:06 vm01 bash[28152]: cluster 2026-03-09T16:06:05.638468+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T16:06:06.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:06 vm01 bash[20728]: audit 2026-03-09T16:06:05.625841+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:06:06.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:06 vm01 bash[20728]: audit 2026-03-09T16:06:05.625841+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-100"}]': finished 2026-03-09T16:06:06.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:06 vm01 bash[20728]: cluster 2026-03-09T16:06:05.638468+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T16:06:06.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:06 vm01 bash[20728]: cluster 2026-03-09T16:06:05.638468+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:07 vm01 bash[28152]: cluster 2026-03-09T16:06:06.652645+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:07 vm01 bash[28152]: cluster 2026-03-09T16:06:06.652645+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:07 vm01 bash[28152]: audit 2026-03-09T16:06:06.687862+0000 mgr.y (mgr.14520) 478 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:07 vm01 bash[28152]: audit 2026-03-09T16:06:06.687862+0000 mgr.y (mgr.14520) 478 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:07 vm01 bash[28152]: cluster 2026-03-09T16:06:06.812822+0000 mgr.y (mgr.14520) 479 : cluster [DBG] pgmap v789: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:07 vm01 bash[28152]: cluster 2026-03-09T16:06:06.812822+0000 mgr.y (mgr.14520) 479 : cluster [DBG] pgmap v789: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:07 vm01 bash[20728]: cluster 2026-03-09T16:06:06.652645+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:07 vm01 bash[20728]: cluster 2026-03-09T16:06:06.652645+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:07 vm01 bash[20728]: audit 2026-03-09T16:06:06.687862+0000 mgr.y (mgr.14520) 478 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:07 vm01 bash[20728]: audit 2026-03-09T16:06:06.687862+0000 mgr.y (mgr.14520) 478 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:07 vm01 bash[20728]: cluster 2026-03-09T16:06:06.812822+0000 mgr.y (mgr.14520) 479 : cluster [DBG] pgmap v789: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:07.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:07 vm01 bash[20728]: cluster 
2026-03-09T16:06:06.812822+0000 mgr.y (mgr.14520) 479 : cluster [DBG] pgmap v789: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:07 vm09 bash[22983]: cluster 2026-03-09T16:06:06.652645+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T16:06:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:07 vm09 bash[22983]: cluster 2026-03-09T16:06:06.652645+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-09T16:06:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:07 vm09 bash[22983]: audit 2026-03-09T16:06:06.687862+0000 mgr.y (mgr.14520) 478 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:07 vm09 bash[22983]: audit 2026-03-09T16:06:06.687862+0000 mgr.y (mgr.14520) 478 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:07 vm09 bash[22983]: cluster 2026-03-09T16:06:06.812822+0000 mgr.y (mgr.14520) 479 : cluster [DBG] pgmap v789: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:07 vm09 bash[22983]: cluster 2026-03-09T16:06:06.812822+0000 mgr.y (mgr.14520) 479 : cluster [DBG] pgmap v789: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:09.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:08 vm09 bash[22983]: cluster 2026-03-09T16:06:07.666457+0000 mon.a (mon.0) 3082 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T16:06:09.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:08 vm09 bash[22983]: cluster 2026-03-09T16:06:07.666457+0000 mon.a (mon.0) 3082 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T16:06:09.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:08 vm09 bash[22983]: audit 2026-03-09T16:06:07.667580+0000 mon.c (mon.2) 498 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:09.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:08 vm09 bash[22983]: audit 2026-03-09T16:06:07.667580+0000 mon.c (mon.2) 498 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:09.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:08 vm09 bash[22983]: audit 2026-03-09T16:06:07.668827+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:09.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:08 vm09 bash[22983]: audit 2026-03-09T16:06:07.668827+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:08 vm01 bash[28152]: cluster 2026-03-09T16:06:07.666457+0000 mon.a (mon.0) 3082 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:08 vm01 bash[28152]: cluster 2026-03-09T16:06:07.666457+0000 mon.a (mon.0) 3082 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:08 vm01 bash[28152]: audit 2026-03-09T16:06:07.667580+0000 mon.c (mon.2) 498 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:08 vm01 bash[28152]: audit 2026-03-09T16:06:07.667580+0000 mon.c (mon.2) 498 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:08 vm01 bash[28152]: audit 2026-03-09T16:06:07.668827+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:08 vm01 bash[28152]: audit 2026-03-09T16:06:07.668827+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:08 vm01 bash[20728]: cluster 2026-03-09T16:06:07.666457+0000 mon.a (mon.0) 3082 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:08 vm01 bash[20728]: cluster 2026-03-09T16:06:07.666457+0000 mon.a (mon.0) 3082 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:08 vm01 bash[20728]: audit 2026-03-09T16:06:07.667580+0000 mon.c (mon.2) 498 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:08 vm01 bash[20728]: audit 2026-03-09T16:06:07.667580+0000 mon.c (mon.2) 498 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:08 vm01 bash[20728]: audit 2026-03-09T16:06:07.668827+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:09.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:08 vm01 bash[20728]: audit 2026-03-09T16:06:07.668827+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:08.650062+0000 mon.a (mon.0) 3084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:08.650062+0000 mon.a (mon.0) 3084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: cluster 2026-03-09T16:06:08.662838+0000 mon.a (mon.0) 3085 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: cluster 2026-03-09T16:06:08.662838+0000 mon.a (mon.0) 3085 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:08.680363+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:08.680363+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:08.706210+0000 mon.c (mon.2) 500 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:08.706210+0000 mon.c (mon.2) 500 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:08.706922+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:08.706922+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: cluster 2026-03-09T16:06:08.813411+0000 mgr.y (mgr.14520) 480 : cluster [DBG] pgmap v792: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: cluster 2026-03-09T16:06:08.813411+0000 mgr.y (mgr.14520) 480 : cluster [DBG] pgmap v792: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:09.652874+0000 mon.a (mon.0) 3087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:09.652874+0000 mon.a (mon.0) 3087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: cluster 2026-03-09T16:06:09.655993+0000 mon.a (mon.0) 3088 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: cluster 2026-03-09T16:06:09.655993+0000 mon.a (mon.0) 3088 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:09.664710+0000 mon.c (mon.2) 501 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:09.664710+0000 mon.c (mon.2) 501 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:09.665673+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:09 vm09 bash[22983]: audit 2026-03-09T16:06:09.665673+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:08.650062+0000 mon.a (mon.0) 3084 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:08.650062+0000 mon.a (mon.0) 3084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: cluster 2026-03-09T16:06:08.662838+0000 mon.a (mon.0) 3085 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: cluster 2026-03-09T16:06:08.662838+0000 mon.a (mon.0) 3085 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:08.680363+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:08.680363+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:08.706210+0000 mon.c (mon.2) 500 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:08.706210+0000 mon.c (mon.2) 500 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:08.706922+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:08.706922+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: cluster 2026-03-09T16:06:08.813411+0000 mgr.y (mgr.14520) 480 : cluster [DBG] pgmap v792: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: cluster 2026-03-09T16:06:08.813411+0000 mgr.y (mgr.14520) 480 : cluster [DBG] pgmap v792: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:09.652874+0000 mon.a (mon.0) 3087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:09.652874+0000 mon.a (mon.0) 3087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: cluster 2026-03-09T16:06:09.655993+0000 mon.a (mon.0) 3088 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: cluster 2026-03-09T16:06:09.655993+0000 mon.a (mon.0) 3088 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:09.664710+0000 mon.c (mon.2) 501 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:09.664710+0000 mon.c (mon.2) 501 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:09.665673+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:09 vm01 bash[28152]: audit 2026-03-09T16:06:09.665673+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:08.650062+0000 mon.a (mon.0) 3084 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:08.650062+0000 mon.a (mon.0) 3084 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: cluster 2026-03-09T16:06:08.662838+0000 mon.a (mon.0) 3085 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: cluster 2026-03-09T16:06:08.662838+0000 mon.a (mon.0) 3085 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:08.680363+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:08.680363+0000 mon.c (mon.2) 499 : audit [DBG] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:08.706210+0000 mon.c (mon.2) 500 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:08.706210+0000 mon.c (mon.2) 500 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:08.706922+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:08.706922+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: cluster 2026-03-09T16:06:08.813411+0000 mgr.y (mgr.14520) 480 : cluster [DBG] pgmap v792: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: cluster 2026-03-09T16:06:08.813411+0000 mgr.y (mgr.14520) 480 : cluster [DBG] pgmap v792: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:09.652874+0000 mon.a (mon.0) 3087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:09.652874+0000 mon.a (mon.0) 3087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: cluster 2026-03-09T16:06:09.655993+0000 mon.a (mon.0) 3088 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: cluster 2026-03-09T16:06:09.655993+0000 mon.a (mon.0) 3088 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:09.664710+0000 mon.c (mon.2) 501 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:09.664710+0000 mon.c (mon.2) 501 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:09.665673+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:10.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:09 vm01 bash[20728]: audit 2026-03-09T16:06:09.665673+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T16:06:11.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:11 vm01 bash[28152]: audit 2026-03-09T16:06:10.655740+0000 mon.a (mon.0) 3090 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:06:11.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:11 vm01 bash[28152]: audit 2026-03-09T16:06:10.655740+0000 mon.a (mon.0) 3090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:06:11.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:11 vm01 bash[28152]: cluster 2026-03-09T16:06:10.659600+0000 mon.a (mon.0) 3091 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T16:06:11.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:11 vm01 bash[28152]: cluster 2026-03-09T16:06:10.659600+0000 mon.a (mon.0) 3091 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T16:06:11.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:11 vm01 bash[28152]: audit 2026-03-09T16:06:10.662790+0000 mon.c (mon.2) 502 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:11.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:11 vm01 bash[28152]: audit 2026-03-09T16:06:10.662790+0000 mon.c (mon.2) 502 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:11.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:11 vm01 bash[28152]: audit 2026-03-09T16:06:10.663210+0000 mon.a (mon.0) 3092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:11 vm01 bash[28152]: audit 2026-03-09T16:06:10.663210+0000 mon.a (mon.0) 3092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:11 vm01 bash[28152]: cluster 2026-03-09T16:06:10.813985+0000 mgr.y (mgr.14520) 481 : cluster [DBG] pgmap v795: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:11 vm01 bash[28152]: cluster 2026-03-09T16:06:10.813985+0000 mgr.y (mgr.14520) 481 : cluster [DBG] pgmap v795: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:11 vm01 bash[20728]: audit 2026-03-09T16:06:10.655740+0000 mon.a (mon.0) 3090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:11 vm01 bash[20728]: audit 2026-03-09T16:06:10.655740+0000 mon.a (mon.0) 3090 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:11 vm01 bash[20728]: cluster 2026-03-09T16:06:10.659600+0000 mon.a (mon.0) 3091 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:11 vm01 bash[20728]: cluster 2026-03-09T16:06:10.659600+0000 mon.a (mon.0) 3091 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:11 vm01 bash[20728]: audit 2026-03-09T16:06:10.662790+0000 mon.c (mon.2) 502 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:11 vm01 bash[20728]: audit 2026-03-09T16:06:10.662790+0000 mon.c (mon.2) 502 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:11 vm01 bash[20728]: audit 2026-03-09T16:06:10.663210+0000 mon.a (mon.0) 3092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:11 vm01 bash[20728]: audit 2026-03-09T16:06:10.663210+0000 mon.a (mon.0) 3092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:11 vm01 bash[20728]: cluster 2026-03-09T16:06:10.813985+0000 mgr.y (mgr.14520) 481 : cluster [DBG] pgmap v795: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:11.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:11 vm01 bash[20728]: cluster 2026-03-09T16:06:10.813985+0000 mgr.y (mgr.14520) 481 : cluster [DBG] pgmap v795: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:11 vm09 bash[22983]: audit 2026-03-09T16:06:10.655740+0000 mon.a (mon.0) 3090 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:06:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:11 vm09 bash[22983]: audit 2026-03-09T16:06:10.655740+0000 mon.a (mon.0) 3090 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T16:06:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:11 vm09 bash[22983]: cluster 2026-03-09T16:06:10.659600+0000 mon.a (mon.0) 3091 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T16:06:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:11 vm09 bash[22983]: cluster 2026-03-09T16:06:10.659600+0000 mon.a (mon.0) 3091 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in 2026-03-09T16:06:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:11 vm09 bash[22983]: audit 2026-03-09T16:06:10.662790+0000 mon.c (mon.2) 502 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:11 vm09 bash[22983]: audit 2026-03-09T16:06:10.662790+0000 mon.c (mon.2) 502 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:11 vm09 bash[22983]: audit 2026-03-09T16:06:10.663210+0000 mon.a (mon.0) 3092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:11 vm09 bash[22983]: audit 2026-03-09T16:06:10.663210+0000 mon.a (mon.0) 3092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T16:06:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:11 vm09 bash[22983]: cluster 2026-03-09T16:06:10.813985+0000 mgr.y (mgr.14520) 481 : cluster [DBG] pgmap v795: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:11 vm09 bash[22983]: cluster 2026-03-09T16:06:10.813985+0000 mgr.y (mgr.14520) 481 : cluster [DBG] pgmap v795: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: audit 2026-03-09T16:06:11.659037+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:06:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: audit 2026-03-09T16:06:11.659037+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:06:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: cluster 2026-03-09T16:06:11.669023+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T16:06:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: cluster 2026-03-09T16:06:11.669023+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T16:06:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: audit 2026-03-09T16:06:11.716144+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: audit 2026-03-09T16:06:11.716144+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: audit 2026-03-09T16:06:11.716502+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: audit 2026-03-09T16:06:11.716502+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: audit 2026-03-09T16:06:11.717237+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: audit 2026-03-09T16:06:11.717237+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: audit 2026-03-09T16:06:11.717623+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:13.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:12 vm09 bash[22983]: audit 2026-03-09T16:06:11.717623+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: audit 2026-03-09T16:06:11.659037+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: audit 2026-03-09T16:06:11.659037+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: cluster 2026-03-09T16:06:11.669023+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: cluster 2026-03-09T16:06:11.669023+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: audit 2026-03-09T16:06:11.716144+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: audit 2026-03-09T16:06:11.716144+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: audit 2026-03-09T16:06:11.716502+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: audit 2026-03-09T16:06:11.716502+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: audit 2026-03-09T16:06:11.717237+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: audit 2026-03-09T16:06:11.717237+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: audit 2026-03-09T16:06:11.717623+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:12 vm01 bash[28152]: audit 2026-03-09T16:06:11.717623+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:06:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:06:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: audit 2026-03-09T16:06:11.659037+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: audit 2026-03-09T16:06:11.659037+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: cluster 2026-03-09T16:06:11.669023+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T16:06:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: cluster 2026-03-09T16:06:11.669023+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in 2026-03-09T16:06:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: audit 2026-03-09T16:06:11.716144+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: audit 2026-03-09T16:06:11.716144+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: audit 2026-03-09T16:06:11.716502+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: audit 2026-03-09T16:06:11.716502+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-6"}]: dispatch 2026-03-09T16:06:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: audit 2026-03-09T16:06:11.717237+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: audit 2026-03-09T16:06:11.717237+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 
192.168.123.101:0/3982969029' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: audit 2026-03-09T16:06:11.717623+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:13.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:12 vm01 bash[20728]: audit 2026-03-09T16:06:11.717623+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-6", "tierpool": "test-rados-api-vm01-59821-102"}]: dispatch 2026-03-09T16:06:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:13 vm09 bash[22983]: cluster 2026-03-09T16:06:12.689231+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T16:06:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:13 vm09 bash[22983]: cluster 2026-03-09T16:06:12.689231+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T16:06:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:13 vm09 bash[22983]: cluster 2026-03-09T16:06:12.814295+0000 mgr.y (mgr.14520) 482 : cluster [DBG] pgmap v798: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:13 vm09 bash[22983]: cluster 2026-03-09T16:06:12.814295+0000 mgr.y (mgr.14520) 482 : cluster [DBG] pgmap v798: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:13 vm09 bash[22983]: cluster 2026-03-09T16:06:13.695154+0000 mon.a (mon.0) 3098 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T16:06:14.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:13 vm09 bash[22983]: cluster 2026-03-09T16:06:13.695154+0000 mon.a (mon.0) 3098 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T16:06:14.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:13 vm01 bash[28152]: cluster 2026-03-09T16:06:12.689231+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T16:06:14.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:13 vm01 bash[28152]: cluster 2026-03-09T16:06:12.689231+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T16:06:14.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:13 vm01 bash[28152]: cluster 2026-03-09T16:06:12.814295+0000 mgr.y (mgr.14520) 482 : cluster [DBG] pgmap v798: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:14.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:13 vm01 bash[28152]: cluster 2026-03-09T16:06:12.814295+0000 mgr.y (mgr.14520) 482 : cluster [DBG] pgmap v798: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:14.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:13 vm01 bash[28152]: cluster 2026-03-09T16:06:13.695154+0000 mon.a (mon.0) 3098 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T16:06:14.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:13 vm01 bash[28152]: cluster 
2026-03-09T16:06:13.695154+0000 mon.a (mon.0) 3098 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T16:06:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:13 vm01 bash[20728]: cluster 2026-03-09T16:06:12.689231+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T16:06:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:13 vm01 bash[20728]: cluster 2026-03-09T16:06:12.689231+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in 2026-03-09T16:06:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:13 vm01 bash[20728]: cluster 2026-03-09T16:06:12.814295+0000 mgr.y (mgr.14520) 482 : cluster [DBG] pgmap v798: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:13 vm01 bash[20728]: cluster 2026-03-09T16:06:12.814295+0000 mgr.y (mgr.14520) 482 : cluster [DBG] pgmap v798: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:13 vm01 bash[20728]: cluster 2026-03-09T16:06:13.695154+0000 mon.a (mon.0) 3098 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T16:06:14.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:13 vm01 bash[20728]: cluster 2026-03-09T16:06:13.695154+0000 mon.a (mon.0) 3098 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.777603+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.777603+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.787158+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.787158+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.787648+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.787648+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 
192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.790402+0000 mon.a (mon.0) 3099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.790402+0000 mon.a (mon.0) 3099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.790952+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.790952+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.791431+0000 mon.a (mon.0) 3101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:13.791431+0000 mon.a (mon.0) 3101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:14.368233+0000 mon.a (mon.0) 3102 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:14 vm09 bash[22983]: audit 2026-03-09T16:06:14.368233+0000 mon.a (mon.0) 3102 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.777603+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.777603+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 
192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.787158+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.787158+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.787648+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.787648+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.790402+0000 mon.a (mon.0) 3099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.790402+0000 mon.a (mon.0) 3099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.790952+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.790952+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.791431+0000 mon.a (mon.0) 3101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:13.791431+0000 mon.a (mon.0) 3101 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:14.368233+0000 mon.a (mon.0) 3102 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:15.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:14 vm01 bash[28152]: audit 2026-03-09T16:06:14.368233+0000 mon.a (mon.0) 3102 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.777603+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.777603+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.787158+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.787158+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.787648+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.787648+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.790402+0000 mon.a (mon.0) 3099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.790402+0000 mon.a (mon.0) 3099 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.790952+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.790952+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.791431+0000 mon.a (mon.0) 3101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:13.791431+0000 mon.a (mon.0) 3101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:14.368233+0000 mon.a (mon.0) 3102 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:15.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:14 vm01 bash[20728]: audit 2026-03-09T16:06:14.368233+0000 mon.a (mon.0) 3102 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:15 vm09 bash[22983]: audit 2026-03-09T16:06:14.784240+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:15 vm09 bash[22983]: audit 2026-03-09T16:06:14.784240+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:15 vm09 bash[22983]: audit 2026-03-09T16:06:14.784417+0000 mon.a (mon.0) 3103 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:15 vm09 bash[22983]: audit 2026-03-09T16:06:14.784417+0000 mon.a (mon.0) 3103 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:15 vm09 bash[22983]: cluster 2026-03-09T16:06:14.788990+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T16:06:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:15 vm09 bash[22983]: cluster 2026-03-09T16:06:14.788990+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T16:06:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:15 vm09 bash[22983]: audit 2026-03-09T16:06:14.799180+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:15 vm09 bash[22983]: audit 2026-03-09T16:06:14.799180+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:15 vm09 bash[22983]: cluster 2026-03-09T16:06:14.814620+0000 mgr.y (mgr.14520) 483 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:16.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:15 vm09 bash[22983]: cluster 2026-03-09T16:06:14.814620+0000 mgr.y (mgr.14520) 483 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:15 vm01 bash[28152]: audit 2026-03-09T16:06:14.784240+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:15 vm01 bash[28152]: audit 2026-03-09T16:06:14.784240+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:15 vm01 bash[28152]: audit 2026-03-09T16:06:14.784417+0000 mon.a (mon.0) 3103 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:15 vm01 bash[28152]: audit 2026-03-09T16:06:14.784417+0000 mon.a (mon.0) 3103 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:15 vm01 bash[28152]: cluster 2026-03-09T16:06:14.788990+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:15 vm01 bash[28152]: cluster 2026-03-09T16:06:14.788990+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:15 vm01 bash[28152]: audit 2026-03-09T16:06:14.799180+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:15 vm01 bash[28152]: audit 2026-03-09T16:06:14.799180+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:15 vm01 bash[28152]: cluster 2026-03-09T16:06:14.814620+0000 mgr.y (mgr.14520) 483 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:15 vm01 bash[28152]: cluster 2026-03-09T16:06:14.814620+0000 mgr.y (mgr.14520) 483 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:15 vm01 bash[20728]: audit 2026-03-09T16:06:14.784240+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:15 vm01 bash[20728]: audit 2026-03-09T16:06:14.784240+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:15 vm01 bash[20728]: audit 2026-03-09T16:06:14.784417+0000 mon.a (mon.0) 3103 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:15 vm01 bash[20728]: audit 2026-03-09T16:06:14.784417+0000 mon.a (mon.0) 3103 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm01-59821-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:15 vm01 bash[20728]: cluster 2026-03-09T16:06:14.788990+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T16:06:16.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:15 vm01 bash[20728]: cluster 2026-03-09T16:06:14.788990+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-09T16:06:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:15 vm01 bash[20728]: audit 2026-03-09T16:06:14.799180+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:15 vm01 bash[20728]: audit 2026-03-09T16:06:14.799180+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:15 vm01 bash[20728]: cluster 2026-03-09T16:06:14.814620+0000 mgr.y (mgr.14520) 483 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:16.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:15 vm01 bash[20728]: cluster 2026-03-09T16:06:14.814620+0000 mgr.y (mgr.14520) 483 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:17.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:06:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:06:17.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:16 vm09 bash[22983]: cluster 2026-03-09T16:06:15.829097+0000 mon.a (mon.0) 3106 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T16:06:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:16 vm09 bash[22983]: cluster 2026-03-09T16:06:15.829097+0000 mon.a (mon.0) 3106 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T16:06:17.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:16 vm01 bash[28152]: cluster 2026-03-09T16:06:15.829097+0000 mon.a (mon.0) 3106 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T16:06:17.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:16 vm01 bash[28152]: cluster 2026-03-09T16:06:15.829097+0000 mon.a (mon.0) 3106 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T16:06:17.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:16 vm01 bash[20728]: cluster 2026-03-09T16:06:15.829097+0000 mon.a (mon.0) 3106 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T16:06:17.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:16 vm01 bash[20728]: cluster 2026-03-09T16:06:15.829097+0000 mon.a (mon.0) 3106 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-09T16:06:18.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:17 vm09 bash[22983]: audit 2026-03-09T16:06:16.691225+0000 mgr.y (mgr.14520) 484 : audit [DBG] from='client.14496 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:18.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:17 vm09 bash[22983]: audit 2026-03-09T16:06:16.691225+0000 mgr.y (mgr.14520) 484 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:18.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:17 vm09 bash[22983]: audit 2026-03-09T16:06:16.815011+0000 mon.a (mon.0) 3107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:18.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:17 vm09 bash[22983]: audit 2026-03-09T16:06:16.815011+0000 mon.a (mon.0) 3107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:18.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:17 vm09 bash[22983]: cluster 2026-03-09T16:06:16.815067+0000 mgr.y (mgr.14520) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:17 vm09 bash[22983]: cluster 2026-03-09T16:06:16.815067+0000 mgr.y (mgr.14520) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:17 vm09 bash[22983]: cluster 2026-03-09T16:06:16.830045+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T16:06:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:17 vm09 bash[22983]: cluster 2026-03-09T16:06:16.830045+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T16:06:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:17 vm09 bash[22983]: cluster 2026-03-09T16:06:17.832111+0000 mon.a (mon.0) 3109 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T16:06:18.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:17 vm09 bash[22983]: cluster 2026-03-09T16:06:17.832111+0000 mon.a (mon.0) 3109 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T16:06:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:17 vm01 bash[28152]: audit 2026-03-09T16:06:16.691225+0000 mgr.y (mgr.14520) 484 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:17 vm01 bash[28152]: audit 2026-03-09T16:06:16.691225+0000 mgr.y (mgr.14520) 484 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:17 vm01 bash[28152]: audit 2026-03-09T16:06:16.815011+0000 mon.a (mon.0) 3107 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:17 vm01 bash[28152]: audit 2026-03-09T16:06:16.815011+0000 mon.a (mon.0) 3107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:17 vm01 bash[28152]: cluster 2026-03-09T16:06:16.815067+0000 mgr.y (mgr.14520) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:17 vm01 bash[28152]: cluster 2026-03-09T16:06:16.815067+0000 mgr.y (mgr.14520) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:17 vm01 bash[28152]: cluster 2026-03-09T16:06:16.830045+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T16:06:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:17 vm01 bash[28152]: cluster 2026-03-09T16:06:16.830045+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T16:06:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:17 vm01 bash[28152]: cluster 2026-03-09T16:06:17.832111+0000 mon.a (mon.0) 3109 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T16:06:18.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:17 vm01 bash[28152]: cluster 2026-03-09T16:06:17.832111+0000 mon.a (mon.0) 3109 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T16:06:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:17 vm01 bash[20728]: audit 2026-03-09T16:06:16.691225+0000 mgr.y (mgr.14520) 484 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:17 vm01 bash[20728]: audit 2026-03-09T16:06:16.691225+0000 mgr.y (mgr.14520) 484 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:17 vm01 bash[20728]: audit 2026-03-09T16:06:16.815011+0000 mon.a (mon.0) 3107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:17 vm01 bash[20728]: audit 2026-03-09T16:06:16.815011+0000 mon.a (mon.0) 3107 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm01-59821-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:17 vm01 bash[20728]: cluster 2026-03-09T16:06:16.815067+0000 mgr.y (mgr.14520) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:17 vm01 bash[20728]: cluster 2026-03-09T16:06:16.815067+0000 mgr.y (mgr.14520) 485 : cluster [DBG] pgmap v803: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:17 vm01 bash[20728]: cluster 2026-03-09T16:06:16.830045+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T16:06:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:17 vm01 bash[20728]: cluster 2026-03-09T16:06:16.830045+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-09T16:06:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:17 vm01 bash[20728]: cluster 2026-03-09T16:06:17.832111+0000 mon.a (mon.0) 3109 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T16:06:18.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:17 vm01 bash[20728]: cluster 2026-03-09T16:06:17.832111+0000 mon.a (mon.0) 3109 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:19 vm01 bash[28152]: cluster 2026-03-09T16:06:18.815765+0000 mgr.y (mgr.14520) 486 : cluster [DBG] pgmap v806: 236 pgs: 4 unknown, 232 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:19 vm01 bash[28152]: cluster 2026-03-09T16:06:18.815765+0000 mgr.y (mgr.14520) 486 : cluster [DBG] pgmap v806: 236 pgs: 4 unknown, 232 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:19 vm01 bash[28152]: cluster 2026-03-09T16:06:18.865898+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:19 vm01 bash[28152]: cluster 2026-03-09T16:06:18.865898+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:19 vm01 bash[28152]: audit 2026-03-09T16:06:18.867136+0000 mon.c (mon.2) 505 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:19 vm01 bash[28152]: audit 2026-03-09T16:06:18.867136+0000 mon.c (mon.2) 505 : audit [INF] from='client.? 
192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:19 vm01 bash[28152]: cluster 2026-03-09T16:06:18.872226+0000 mon.a (mon.0) 3111 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:19 vm01 bash[28152]: cluster 2026-03-09T16:06:18.872226+0000 mon.a (mon.0) 3111 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:19 vm01 bash[28152]: audit 2026-03-09T16:06:18.874601+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:19 vm01 bash[28152]: audit 2026-03-09T16:06:18.874601+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:19 vm01 bash[20728]: cluster 2026-03-09T16:06:18.815765+0000 mgr.y (mgr.14520) 486 : cluster [DBG] pgmap v806: 236 pgs: 4 unknown, 232 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:19 vm01 bash[20728]: cluster 2026-03-09T16:06:18.815765+0000 mgr.y (mgr.14520) 486 : cluster [DBG] pgmap v806: 236 pgs: 4 unknown, 232 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:19 vm01 bash[20728]: cluster 2026-03-09T16:06:18.865898+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:19 vm01 bash[20728]: cluster 2026-03-09T16:06:18.865898+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:19 vm01 bash[20728]: audit 2026-03-09T16:06:18.867136+0000 mon.c (mon.2) 505 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:19 vm01 bash[20728]: audit 2026-03-09T16:06:18.867136+0000 mon.c (mon.2) 505 : audit [INF] from='client.? 
192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:19 vm01 bash[20728]: cluster 2026-03-09T16:06:18.872226+0000 mon.a (mon.0) 3111 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:19 vm01 bash[20728]: cluster 2026-03-09T16:06:18.872226+0000 mon.a (mon.0) 3111 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:19 vm01 bash[20728]: audit 2026-03-09T16:06:18.874601+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:20.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:19 vm01 bash[20728]: audit 2026-03-09T16:06:18.874601+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:19 vm09 bash[22983]: cluster 2026-03-09T16:06:18.815765+0000 mgr.y (mgr.14520) 486 : cluster [DBG] pgmap v806: 236 pgs: 4 unknown, 232 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:19 vm09 bash[22983]: cluster 2026-03-09T16:06:18.815765+0000 mgr.y (mgr.14520) 486 : cluster [DBG] pgmap v806: 236 pgs: 4 unknown, 232 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:19 vm09 bash[22983]: cluster 2026-03-09T16:06:18.865898+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T16:06:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:19 vm09 bash[22983]: cluster 2026-03-09T16:06:18.865898+0000 mon.a (mon.0) 3110 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-09T16:06:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:19 vm09 bash[22983]: audit 2026-03-09T16:06:18.867136+0000 mon.c (mon.2) 505 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:19 vm09 bash[22983]: audit 2026-03-09T16:06:18.867136+0000 mon.c (mon.2) 505 : audit [INF] from='client.? 
192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:19 vm09 bash[22983]: cluster 2026-03-09T16:06:18.872226+0000 mon.a (mon.0) 3111 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:19 vm09 bash[22983]: cluster 2026-03-09T16:06:18.872226+0000 mon.a (mon.0) 3111 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:19 vm09 bash[22983]: audit 2026-03-09T16:06:18.874601+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:19 vm09 bash[22983]: audit 2026-03-09T16:06:18.874601+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:20 vm01 bash[28152]: audit 2026-03-09T16:06:19.845033+0000 mon.a (mon.0) 3113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:20 vm01 bash[28152]: audit 2026-03-09T16:06:19.845033+0000 mon.a (mon.0) 3113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:20 vm01 bash[28152]: cluster 2026-03-09T16:06:19.851664+0000 mon.a (mon.0) 3114 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:20 vm01 bash[28152]: cluster 2026-03-09T16:06:19.851664+0000 mon.a (mon.0) 3114 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:20 vm01 bash[28152]: cluster 2026-03-09T16:06:20.893767+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:20 vm01 bash[28152]: cluster 2026-03-09T16:06:20.893767+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:20 vm01 bash[20728]: audit 2026-03-09T16:06:19.845033+0000 mon.a (mon.0) 3113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:20 vm01 bash[20728]: audit 2026-03-09T16:06:19.845033+0000 mon.a (mon.0) 3113 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:20 vm01 bash[20728]: cluster 2026-03-09T16:06:19.851664+0000 mon.a (mon.0) 3114 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:20 vm01 bash[20728]: cluster 2026-03-09T16:06:19.851664+0000 mon.a (mon.0) 3114 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:20 vm01 bash[20728]: cluster 2026-03-09T16:06:20.893767+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T16:06:21.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:20 vm01 bash[20728]: cluster 2026-03-09T16:06:20.893767+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T16:06:21.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:20 vm09 bash[22983]: audit 2026-03-09T16:06:19.845033+0000 mon.a (mon.0) 3113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:20 vm09 bash[22983]: audit 2026-03-09T16:06:19.845033+0000 mon.a (mon.0) 3113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:20 vm09 bash[22983]: cluster 2026-03-09T16:06:19.851664+0000 mon.a (mon.0) 3114 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T16:06:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:20 vm09 bash[22983]: cluster 2026-03-09T16:06:19.851664+0000 mon.a (mon.0) 3114 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-09T16:06:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:20 vm09 bash[22983]: cluster 2026-03-09T16:06:20.893767+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T16:06:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:20 vm09 bash[22983]: cluster 2026-03-09T16:06:20.893767+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: cluster 2026-03-09T16:06:20.816077+0000 mgr.y (mgr.14520) 487 : cluster [DBG] pgmap v809: 268 pgs: 22 creating+peering, 10 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: cluster 2026-03-09T16:06:20.816077+0000 mgr.y (mgr.14520) 487 : cluster [DBG] pgmap v809: 268 pgs: 22 creating+peering, 10 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: audit 2026-03-09T16:06:20.894458+0000 mon.c (mon.2) 506 : audit [INF] from='client.? 
192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: audit 2026-03-09T16:06:20.894458+0000 mon.c (mon.2) 506 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: audit 2026-03-09T16:06:20.897916+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: audit 2026-03-09T16:06:20.897916+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: audit 2026-03-09T16:06:21.876321+0000 mon.a (mon.0) 3117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: audit 2026-03-09T16:06:21.876321+0000 mon.a (mon.0) 3117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: cluster 2026-03-09T16:06:21.882577+0000 mon.a (mon.0) 3118 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: cluster 2026-03-09T16:06:21.882577+0000 mon.a (mon.0) 3118 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: audit 2026-03-09T16:06:21.897433+0000 mon.c (mon.2) 507 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: audit 2026-03-09T16:06:21.897433+0000 mon.c (mon.2) 507 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: audit 2026-03-09T16:06:21.902406+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:21 vm01 bash[28152]: audit 2026-03-09T16:06:21.902406+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: cluster 2026-03-09T16:06:20.816077+0000 mgr.y (mgr.14520) 487 : cluster [DBG] pgmap v809: 268 pgs: 22 creating+peering, 10 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: cluster 2026-03-09T16:06:20.816077+0000 mgr.y (mgr.14520) 487 : cluster [DBG] pgmap v809: 268 pgs: 22 creating+peering, 10 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:22.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: audit 2026-03-09T16:06:20.894458+0000 mon.c (mon.2) 506 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: audit 2026-03-09T16:06:20.894458+0000 mon.c (mon.2) 506 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: audit 2026-03-09T16:06:20.897916+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: audit 2026-03-09T16:06:20.897916+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: audit 2026-03-09T16:06:21.876321+0000 mon.a (mon.0) 3117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: audit 2026-03-09T16:06:21.876321+0000 mon.a (mon.0) 3117 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: cluster 2026-03-09T16:06:21.882577+0000 mon.a (mon.0) 3118 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T16:06:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: cluster 2026-03-09T16:06:21.882577+0000 mon.a (mon.0) 3118 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T16:06:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: audit 2026-03-09T16:06:21.897433+0000 mon.c (mon.2) 507 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: audit 2026-03-09T16:06:21.897433+0000 mon.c (mon.2) 507 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: audit 2026-03-09T16:06:21.902406+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:22.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:21 vm01 bash[20728]: audit 2026-03-09T16:06:21.902406+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: cluster 2026-03-09T16:06:20.816077+0000 mgr.y (mgr.14520) 487 : cluster [DBG] pgmap v809: 268 pgs: 22 creating+peering, 10 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: cluster 2026-03-09T16:06:20.816077+0000 mgr.y (mgr.14520) 487 : cluster [DBG] pgmap v809: 268 pgs: 22 creating+peering, 10 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: audit 2026-03-09T16:06:20.894458+0000 mon.c (mon.2) 506 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: audit 2026-03-09T16:06:20.894458+0000 mon.c (mon.2) 506 : audit [INF] from='client.? 
192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: audit 2026-03-09T16:06:20.897916+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: audit 2026-03-09T16:06:20.897916+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: audit 2026-03-09T16:06:21.876321+0000 mon.a (mon.0) 3117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: audit 2026-03-09T16:06:21.876321+0000 mon.a (mon.0) 3117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: cluster 2026-03-09T16:06:21.882577+0000 mon.a (mon.0) 3118 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: cluster 2026-03-09T16:06:21.882577+0000 mon.a (mon.0) 3118 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: audit 2026-03-09T16:06:21.897433+0000 mon.c (mon.2) 507 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: audit 2026-03-09T16:06:21.897433+0000 mon.c (mon.2) 507 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: audit 2026-03-09T16:06:21.902406+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:21 vm09 bash[22983]: audit 2026-03-09T16:06:21.902406+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:23.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:06:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:06:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:06:24.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:23 vm09 bash[22983]: cluster 2026-03-09T16:06:22.816556+0000 mgr.y (mgr.14520) 488 : cluster [DBG] pgmap v812: 300 pgs: 22 creating+peering, 42 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:24.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:23 vm09 bash[22983]: cluster 2026-03-09T16:06:22.816556+0000 mgr.y (mgr.14520) 488 : cluster [DBG] pgmap v812: 300 pgs: 22 creating+peering, 42 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:24.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:23 vm09 bash[22983]: audit 2026-03-09T16:06:22.879737+0000 mon.a (mon.0) 3120 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:23 vm09 bash[22983]: audit 2026-03-09T16:06:22.879737+0000 mon.a (mon.0) 3120 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:23 vm09 bash[22983]: audit 2026-03-09T16:06:22.890006+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:23 vm09 bash[22983]: audit 2026-03-09T16:06:22.890006+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:23 vm09 bash[22983]: cluster 2026-03-09T16:06:22.894133+0000 mon.a (mon.0) 3121 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T16:06:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:23 vm09 bash[22983]: cluster 2026-03-09T16:06:22.894133+0000 mon.a (mon.0) 3121 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T16:06:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:23 vm09 bash[22983]: audit 2026-03-09T16:06:22.895098+0000 mon.a (mon.0) 3122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:23 vm09 bash[22983]: audit 2026-03-09T16:06:22.895098+0000 mon.a (mon.0) 3122 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:23 vm01 bash[28152]: cluster 2026-03-09T16:06:22.816556+0000 mgr.y (mgr.14520) 488 : cluster [DBG] pgmap v812: 300 pgs: 22 creating+peering, 42 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:23 vm01 bash[28152]: cluster 2026-03-09T16:06:22.816556+0000 mgr.y (mgr.14520) 488 : cluster [DBG] pgmap v812: 300 pgs: 22 creating+peering, 42 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:23 vm01 bash[28152]: audit 2026-03-09T16:06:22.879737+0000 mon.a (mon.0) 3120 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:23 vm01 bash[28152]: audit 2026-03-09T16:06:22.879737+0000 mon.a (mon.0) 3120 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:23 vm01 bash[28152]: audit 2026-03-09T16:06:22.890006+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:23 vm01 bash[28152]: audit 2026-03-09T16:06:22.890006+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:23 vm01 bash[28152]: cluster 2026-03-09T16:06:22.894133+0000 mon.a (mon.0) 3121 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:23 vm01 bash[28152]: cluster 2026-03-09T16:06:22.894133+0000 mon.a (mon.0) 3121 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:23 vm01 bash[28152]: audit 2026-03-09T16:06:22.895098+0000 mon.a (mon.0) 3122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:23 vm01 bash[28152]: audit 2026-03-09T16:06:22.895098+0000 mon.a (mon.0) 3122 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:23 vm01 bash[20728]: cluster 2026-03-09T16:06:22.816556+0000 mgr.y (mgr.14520) 488 : cluster [DBG] pgmap v812: 300 pgs: 22 creating+peering, 42 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:23 vm01 bash[20728]: cluster 2026-03-09T16:06:22.816556+0000 mgr.y (mgr.14520) 488 : cluster [DBG] pgmap v812: 300 pgs: 22 creating+peering, 42 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:06:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:23 vm01 bash[20728]: audit 2026-03-09T16:06:22.879737+0000 mon.a (mon.0) 3120 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:23 vm01 bash[20728]: audit 2026-03-09T16:06:22.879737+0000 mon.a (mon.0) 3120 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:23 vm01 bash[20728]: audit 2026-03-09T16:06:22.890006+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:23 vm01 bash[20728]: audit 2026-03-09T16:06:22.890006+0000 mon.c (mon.2) 508 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:23 vm01 bash[20728]: cluster 2026-03-09T16:06:22.894133+0000 mon.a (mon.0) 3121 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T16:06:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:23 vm01 bash[20728]: cluster 2026-03-09T16:06:22.894133+0000 mon.a (mon.0) 3121 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-09T16:06:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:23 vm01 bash[20728]: audit 2026-03-09T16:06:22.895098+0000 mon.a (mon.0) 3122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:23 vm01 bash[20728]: audit 2026-03-09T16:06:22.895098+0000 mon.a (mon.0) 3122 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:25.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:24 vm09 bash[22983]: audit 2026-03-09T16:06:23.891284+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:25.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:24 vm09 bash[22983]: audit 2026-03-09T16:06:23.891284+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:25.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:24 vm09 bash[22983]: cluster 2026-03-09T16:06:23.896042+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T16:06:25.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:24 vm09 bash[22983]: cluster 2026-03-09T16:06:23.896042+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T16:06:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:24 vm09 bash[22983]: audit 2026-03-09T16:06:23.899481+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:24 vm09 bash[22983]: audit 2026-03-09T16:06:23.899481+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:24 vm09 bash[22983]: audit 2026-03-09T16:06:23.899994+0000 mon.a (mon.0) 3125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:25.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:24 vm09 bash[22983]: audit 2026-03-09T16:06:23.899994+0000 mon.a (mon.0) 3125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:24 vm01 bash[28152]: audit 2026-03-09T16:06:23.891284+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:24 vm01 bash[28152]: audit 2026-03-09T16:06:23.891284+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:24 vm01 bash[28152]: cluster 2026-03-09T16:06:23.896042+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T16:06:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:24 vm01 bash[28152]: cluster 2026-03-09T16:06:23.896042+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T16:06:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:24 vm01 bash[28152]: audit 2026-03-09T16:06:23.899481+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:24 vm01 bash[28152]: audit 2026-03-09T16:06:23.899481+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:24 vm01 bash[28152]: audit 2026-03-09T16:06:23.899994+0000 mon.a (mon.0) 3125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:24 vm01 bash[28152]: audit 2026-03-09T16:06:23.899994+0000 mon.a (mon.0) 3125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:25.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:24 vm01 bash[20728]: audit 2026-03-09T16:06:23.891284+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:25.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:24 vm01 bash[20728]: audit 2026-03-09T16:06:23.891284+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-107", "overlaypool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:25.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:24 vm01 bash[20728]: cluster 2026-03-09T16:06:23.896042+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T16:06:25.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:24 vm01 bash[20728]: cluster 2026-03-09T16:06:23.896042+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-09T16:06:25.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:24 vm01 bash[20728]: audit 2026-03-09T16:06:23.899481+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 
192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:25.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:24 vm01 bash[20728]: audit 2026-03-09T16:06:23.899481+0000 mon.c (mon.2) 509 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:25.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:24 vm01 bash[20728]: audit 2026-03-09T16:06:23.899994+0000 mon.a (mon.0) 3125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:25.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:24 vm01 bash[20728]: audit 2026-03-09T16:06:23.899994+0000 mon.a (mon.0) 3125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: cluster 2026-03-09T16:06:24.817164+0000 mgr.y (mgr.14520) 489 : cluster [DBG] pgmap v815: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: cluster 2026-03-09T16:06:24.817164+0000 mgr.y (mgr.14520) 489 : cluster [DBG] pgmap v815: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: cluster 2026-03-09T16:06:24.891295+0000 mon.a (mon.0) 3126 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: cluster 2026-03-09T16:06:24.891295+0000 mon.a (mon.0) 3126 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: audit 2026-03-09T16:06:24.920035+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: audit 2026-03-09T16:06:24.920035+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: cluster 2026-03-09T16:06:24.925594+0000 mon.a (mon.0) 3128 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T16:06:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: cluster 2026-03-09T16:06:24.925594+0000 mon.a (mon.0) 3128 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T16:06:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: audit 2026-03-09T16:06:24.962687+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 
192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: audit 2026-03-09T16:06:24.962687+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: audit 2026-03-09T16:06:24.963191+0000 mon.a (mon.0) 3129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:25 vm09 bash[22983]: audit 2026-03-09T16:06:24.963191+0000 mon.a (mon.0) 3129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: cluster 2026-03-09T16:06:24.817164+0000 mgr.y (mgr.14520) 489 : cluster [DBG] pgmap v815: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: cluster 2026-03-09T16:06:24.817164+0000 mgr.y (mgr.14520) 489 : cluster [DBG] pgmap v815: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: cluster 2026-03-09T16:06:24.891295+0000 mon.a (mon.0) 3126 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: cluster 2026-03-09T16:06:24.891295+0000 mon.a (mon.0) 3126 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: audit 2026-03-09T16:06:24.920035+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: audit 2026-03-09T16:06:24.920035+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: cluster 2026-03-09T16:06:24.925594+0000 mon.a (mon.0) 3128 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: cluster 2026-03-09T16:06:24.925594+0000 mon.a (mon.0) 3128 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: audit 2026-03-09T16:06:24.962687+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 
192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: audit 2026-03-09T16:06:24.962687+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: audit 2026-03-09T16:06:24.963191+0000 mon.a (mon.0) 3129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:26 vm01 bash[28152]: audit 2026-03-09T16:06:24.963191+0000 mon.a (mon.0) 3129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: cluster 2026-03-09T16:06:24.817164+0000 mgr.y (mgr.14520) 489 : cluster [DBG] pgmap v815: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: cluster 2026-03-09T16:06:24.817164+0000 mgr.y (mgr.14520) 489 : cluster [DBG] pgmap v815: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: cluster 2026-03-09T16:06:24.891295+0000 mon.a (mon.0) 3126 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: cluster 2026-03-09T16:06:24.891295+0000 mon.a (mon.0) 3126 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: audit 2026-03-09T16:06:24.920035+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: audit 2026-03-09T16:06:24.920035+0000 mon.a (mon.0) 3127 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-107-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: cluster 2026-03-09T16:06:24.925594+0000 mon.a (mon.0) 3128 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: cluster 2026-03-09T16:06:24.925594+0000 mon.a (mon.0) 3128 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: audit 2026-03-09T16:06:24.962687+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 
192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: audit 2026-03-09T16:06:24.962687+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: audit 2026-03-09T16:06:24.963191+0000 mon.a (mon.0) 3129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:26 vm01 bash[20728]: audit 2026-03-09T16:06:24.963191+0000 mon.a (mon.0) 3129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]: dispatch 2026-03-09T16:06:27.011 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:06:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:06:27.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: audit 2026-03-09T16:06:25.991001+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]': finished 2026-03-09T16:06:27.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: audit 2026-03-09T16:06:25.991001+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]': finished 2026-03-09T16:06:27.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: audit 2026-03-09T16:06:25.996480+0000 mon.c (mon.2) 511 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: audit 2026-03-09T16:06:25.996480+0000 mon.c (mon.2) 511 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: cluster 2026-03-09T16:06:25.997998+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T16:06:27.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: cluster 2026-03-09T16:06:25.997998+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T16:06:27.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: audit 2026-03-09T16:06:25.999491+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: audit 2026-03-09T16:06:25.999491+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: cluster 2026-03-09T16:06:26.991048+0000 mon.a (mon.0) 3133 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:27.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: cluster 2026-03-09T16:06:26.991048+0000 mon.a (mon.0) 3133 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: audit 2026-03-09T16:06:26.995358+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:27.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:27 vm09 bash[22983]: audit 2026-03-09T16:06:26.995358+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: audit 2026-03-09T16:06:25.991001+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]': finished 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: audit 2026-03-09T16:06:25.991001+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]': finished 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: audit 2026-03-09T16:06:25.996480+0000 mon.c (mon.2) 511 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: audit 2026-03-09T16:06:25.996480+0000 mon.c (mon.2) 511 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: cluster 2026-03-09T16:06:25.997998+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: cluster 2026-03-09T16:06:25.997998+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: audit 2026-03-09T16:06:25.999491+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: audit 2026-03-09T16:06:25.999491+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: cluster 2026-03-09T16:06:26.991048+0000 mon.a (mon.0) 3133 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: cluster 2026-03-09T16:06:26.991048+0000 mon.a (mon.0) 3133 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: audit 2026-03-09T16:06:26.995358+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:27 vm01 bash[28152]: audit 2026-03-09T16:06:26.995358+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: audit 2026-03-09T16:06:25.991001+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]': finished 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: audit 2026-03-09T16:06:25.991001+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-107"}]': finished 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: audit 2026-03-09T16:06:25.996480+0000 mon.c (mon.2) 511 : audit [INF] from='client.? 192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: audit 2026-03-09T16:06:25.996480+0000 mon.c (mon.2) 511 : audit [INF] from='client.? 
192.168.123.101:0/2103378927' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: cluster 2026-03-09T16:06:25.997998+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: cluster 2026-03-09T16:06:25.997998+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: audit 2026-03-09T16:06:25.999491+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: audit 2026-03-09T16:06:25.999491+0000 mon.a (mon.0) 3132 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]: dispatch 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: cluster 2026-03-09T16:06:26.991048+0000 mon.a (mon.0) 3133 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: cluster 2026-03-09T16:06:26.991048+0000 mon.a (mon.0) 3133 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: audit 2026-03-09T16:06:26.995358+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:27.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:27 vm01 bash[20728]: audit 2026-03-09T16:06:26.995358+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-107", "tierpool": "test-rados-api-vm01-59821-107-cache"}]': finished 2026-03-09T16:06:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:28 vm09 bash[22983]: audit 2026-03-09T16:06:26.692734+0000 mgr.y (mgr.14520) 490 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:28 vm09 bash[22983]: audit 2026-03-09T16:06:26.692734+0000 mgr.y (mgr.14520) 490 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:28 vm09 bash[22983]: cluster 2026-03-09T16:06:26.817437+0000 mgr.y (mgr.14520) 491 : cluster [DBG] pgmap v818: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:28 vm09 bash[22983]: cluster 2026-03-09T16:06:26.817437+0000 mgr.y (mgr.14520) 491 : cluster [DBG] pgmap v818: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:28 vm09 bash[22983]: cluster 2026-03-09T16:06:27.004695+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T16:06:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:28 vm09 bash[22983]: cluster 2026-03-09T16:06:27.004695+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T16:06:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:28 vm09 bash[22983]: cluster 2026-03-09T16:06:28.001498+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T16:06:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:28 vm09 bash[22983]: cluster 2026-03-09T16:06:28.001498+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:28 vm01 bash[28152]: audit 2026-03-09T16:06:26.692734+0000 mgr.y (mgr.14520) 490 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:28 vm01 bash[28152]: audit 2026-03-09T16:06:26.692734+0000 mgr.y (mgr.14520) 490 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:28 vm01 bash[28152]: cluster 2026-03-09T16:06:26.817437+0000 mgr.y (mgr.14520) 491 : cluster [DBG] pgmap v818: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:28 vm01 bash[28152]: cluster 2026-03-09T16:06:26.817437+0000 mgr.y (mgr.14520) 491 : cluster [DBG] pgmap v818: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:28 vm01 bash[28152]: cluster 2026-03-09T16:06:27.004695+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
16:06:28 vm01 bash[28152]: cluster 2026-03-09T16:06:27.004695+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:28 vm01 bash[28152]: cluster 2026-03-09T16:06:28.001498+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:28 vm01 bash[28152]: cluster 2026-03-09T16:06:28.001498+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:28 vm01 bash[20728]: audit 2026-03-09T16:06:26.692734+0000 mgr.y (mgr.14520) 490 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:28 vm01 bash[20728]: audit 2026-03-09T16:06:26.692734+0000 mgr.y (mgr.14520) 490 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:28 vm01 bash[20728]: cluster 2026-03-09T16:06:26.817437+0000 mgr.y (mgr.14520) 491 : cluster [DBG] pgmap v818: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:28 vm01 bash[20728]: cluster 2026-03-09T16:06:26.817437+0000 mgr.y (mgr.14520) 491 : cluster [DBG] pgmap v818: 300 pgs: 300 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:28 vm01 bash[20728]: cluster 2026-03-09T16:06:27.004695+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:28 vm01 bash[20728]: cluster 2026-03-09T16:06:27.004695+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:28 vm01 bash[20728]: cluster 2026-03-09T16:06:28.001498+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T16:06:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:28 vm01 bash[20728]: cluster 2026-03-09T16:06:28.001498+0000 mon.a (mon.0) 3136 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-09T16:06:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:29 vm01 bash[28152]: cluster 2026-03-09T16:06:28.818031+0000 mgr.y (mgr.14520) 492 : cluster [DBG] pgmap v821: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:29 vm01 bash[28152]: cluster 2026-03-09T16:06:28.818031+0000 mgr.y (mgr.14520) 492 : cluster [DBG] pgmap v821: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:29.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:29 vm01 bash[20728]: cluster 2026-03-09T16:06:28.818031+0000 mgr.y (mgr.14520) 492 : cluster [DBG] pgmap v821: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:29.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:29 vm01 bash[20728]: cluster 2026-03-09T16:06:28.818031+0000 mgr.y (mgr.14520) 492 : cluster [DBG] pgmap v821: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 
159 GiB / 160 GiB avail 2026-03-09T16:06:29.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:29 vm09 bash[22983]: cluster 2026-03-09T16:06:28.818031+0000 mgr.y (mgr.14520) 492 : cluster [DBG] pgmap v821: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:29.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:29 vm09 bash[22983]: cluster 2026-03-09T16:06:28.818031+0000 mgr.y (mgr.14520) 492 : cluster [DBG] pgmap v821: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:30.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: cluster 2026-03-09T16:06:29.245904+0000 mon.a (mon.0) 3137 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: cluster 2026-03-09T16:06:29.245904+0000 mon.a (mon.0) 3137 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.351836+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.351836+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.375310+0000 mon.a (mon.0) 3138 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.375310+0000 mon.a (mon.0) 3138 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.609718+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.609718+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.610984+0000 mon.c (mon.2) 513 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.610984+0000 mon.c (mon.2) 513 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.611208+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.611208+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.612107+0000 mon.c (mon.2) 514 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.612107+0000 mon.c (mon.2) 514 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.612345+0000 mon.a (mon.0) 3141 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:30.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:30 vm01 bash[28152]: audit 2026-03-09T16:06:29.612345+0000 mon.a (mon.0) 3141 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: cluster 2026-03-09T16:06:29.245904+0000 mon.a (mon.0) 3137 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: cluster 2026-03-09T16:06:29.245904+0000 mon.a (mon.0) 3137 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.351836+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.351836+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.375310+0000 mon.a (mon.0) 3138 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.375310+0000 mon.a (mon.0) 3138 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.609718+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.609718+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.610984+0000 mon.c (mon.2) 513 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.610984+0000 mon.c (mon.2) 513 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.611208+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.611208+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.612107+0000 mon.c (mon.2) 514 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.612107+0000 mon.c (mon.2) 514 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.612345+0000 mon.a (mon.0) 3141 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:30.678 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:30 vm01 bash[20728]: audit 2026-03-09T16:06:29.612345+0000 mon.a (mon.0) 3141 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: cluster 2026-03-09T16:06:29.245904+0000 mon.a (mon.0) 3137 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: cluster 2026-03-09T16:06:29.245904+0000 mon.a (mon.0) 3137 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.351836+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.351836+0000 mon.c (mon.2) 512 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.375310+0000 mon.a (mon.0) 3138 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.375310+0000 mon.a (mon.0) 3138 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.609718+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.609718+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.610984+0000 mon.c (mon.2) 513 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.610984+0000 mon.c (mon.2) 513 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.611208+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.611208+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.612107+0000 mon.c (mon.2) 514 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.612107+0000 mon.c (mon.2) 514 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.612345+0000 mon.a (mon.0) 3141 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:30.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:30 vm09 bash[22983]: audit 2026-03-09T16:06:29.612345+0000 mon.a (mon.0) 3141 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: audit 2026-03-09T16:06:30.395832+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: audit 2026-03-09T16:06:30.395832+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: audit 2026-03-09T16:06:30.402634+0000 mon.c (mon.2) 515 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: audit 2026-03-09T16:06:30.402634+0000 mon.c (mon.2) 515 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: cluster 2026-03-09T16:06:30.406317+0000 mon.a (mon.0) 3143 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: cluster 2026-03-09T16:06:30.406317+0000 mon.a (mon.0) 3143 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: audit 2026-03-09T16:06:30.408029+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: audit 2026-03-09T16:06:30.408029+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: cluster 2026-03-09T16:06:30.818416+0000 mgr.y (mgr.14520) 493 : cluster [DBG] pgmap v824: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: cluster 2026-03-09T16:06:30.818416+0000 mgr.y (mgr.14520) 493 : cluster [DBG] pgmap v824: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: cluster 2026-03-09T16:06:31.418272+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T16:06:31.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:31 vm09 bash[22983]: cluster 2026-03-09T16:06:31.418272+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T16:06:31.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: audit 2026-03-09T16:06:30.395832+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:31.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: audit 2026-03-09T16:06:30.395832+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:31.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: audit 2026-03-09T16:06:30.402634+0000 mon.c (mon.2) 515 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: audit 2026-03-09T16:06:30.402634+0000 mon.c (mon.2) 515 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: cluster 2026-03-09T16:06:30.406317+0000 mon.a (mon.0) 3143 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: cluster 2026-03-09T16:06:30.406317+0000 mon.a (mon.0) 3143 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: audit 2026-03-09T16:06:30.408029+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: audit 2026-03-09T16:06:30.408029+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: cluster 2026-03-09T16:06:30.818416+0000 mgr.y (mgr.14520) 493 : cluster [DBG] pgmap v824: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: cluster 2026-03-09T16:06:30.818416+0000 mgr.y (mgr.14520) 493 : cluster [DBG] pgmap v824: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: cluster 2026-03-09T16:06:31.418272+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:31 vm01 bash[28152]: cluster 2026-03-09T16:06:31.418272+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: audit 2026-03-09T16:06:30.395832+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: audit 2026-03-09T16:06:30.395832+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: audit 2026-03-09T16:06:30.402634+0000 mon.c (mon.2) 515 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: audit 2026-03-09T16:06:30.402634+0000 mon.c (mon.2) 515 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: cluster 2026-03-09T16:06:30.406317+0000 mon.a (mon.0) 3143 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: cluster 2026-03-09T16:06:30.406317+0000 mon.a (mon.0) 3143 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: audit 2026-03-09T16:06:30.408029+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: audit 2026-03-09T16:06:30.408029+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: cluster 2026-03-09T16:06:30.818416+0000 mgr.y (mgr.14520) 493 : cluster [DBG] pgmap v824: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: cluster 2026-03-09T16:06:30.818416+0000 mgr.y (mgr.14520) 493 : cluster [DBG] pgmap v824: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: cluster 2026-03-09T16:06:31.418272+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T16:06:31.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:31 vm01 bash[20728]: cluster 2026-03-09T16:06:31.418272+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in 2026-03-09T16:06:33.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:06:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:06:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:06:33.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:33 vm01 bash[28152]: audit 2026-03-09T16:06:32.602315+0000 mon.a (mon.0) 3146 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:33.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:33 vm01 bash[28152]: audit 2026-03-09T16:06:32.602315+0000 mon.a (mon.0) 3146 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:33.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:33 vm01 bash[28152]: cluster 2026-03-09T16:06:32.608975+0000 mon.a (mon.0) 3147 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T16:06:33.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:33 vm01 bash[28152]: cluster 2026-03-09T16:06:32.608975+0000 mon.a (mon.0) 3147 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T16:06:33.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:33 vm01 bash[28152]: cluster 2026-03-09T16:06:32.818900+0000 mgr.y (mgr.14520) 494 : cluster [DBG] pgmap v827: 244 pgs: 8 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:33.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:33 vm01 bash[28152]: cluster 2026-03-09T16:06:32.818900+0000 mgr.y (mgr.14520) 494 : cluster [DBG] pgmap v827: 244 pgs: 8 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:33.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:33 vm01 bash[20728]: audit 2026-03-09T16:06:32.602315+0000 mon.a (mon.0) 3146 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:33.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:33 vm01 bash[20728]: audit 2026-03-09T16:06:32.602315+0000 mon.a (mon.0) 3146 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:33.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:33 vm01 bash[20728]: cluster 2026-03-09T16:06:32.608975+0000 mon.a (mon.0) 3147 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T16:06:33.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:33 vm01 bash[20728]: cluster 2026-03-09T16:06:32.608975+0000 mon.a (mon.0) 3147 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T16:06:33.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:33 vm01 bash[20728]: cluster 2026-03-09T16:06:32.818900+0000 mgr.y (mgr.14520) 494 : cluster [DBG] pgmap v827: 244 pgs: 8 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:33.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:33 vm01 bash[20728]: cluster 2026-03-09T16:06:32.818900+0000 mgr.y (mgr.14520) 494 : cluster [DBG] pgmap v827: 244 pgs: 8 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:34.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:33 vm09 bash[22983]: audit 2026-03-09T16:06:32.602315+0000 mon.a (mon.0) 3146 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:34.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:33 vm09 bash[22983]: audit 2026-03-09T16:06:32.602315+0000 mon.a (mon.0) 3146 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:34.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:33 vm09 bash[22983]: cluster 2026-03-09T16:06:32.608975+0000 mon.a (mon.0) 3147 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T16:06:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:33 vm09 bash[22983]: cluster 2026-03-09T16:06:32.608975+0000 mon.a (mon.0) 3147 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in 2026-03-09T16:06:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:33 vm09 bash[22983]: cluster 2026-03-09T16:06:32.818900+0000 mgr.y (mgr.14520) 494 : cluster [DBG] pgmap v827: 244 pgs: 8 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:34.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:33 vm09 bash[22983]: cluster 2026-03-09T16:06:32.818900+0000 mgr.y (mgr.14520) 494 : cluster [DBG] pgmap v827: 244 pgs: 8 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:34.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: cluster 2026-03-09T16:06:33.603296+0000 mon.a (mon.0) 3148 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:34.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: cluster 2026-03-09T16:06:33.603296+0000 mon.a (mon.0) 3148 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:34.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: cluster 2026-03-09T16:06:33.609453+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T16:06:34.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: cluster 2026-03-09T16:06:33.609453+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T16:06:34.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: audit 2026-03-09T16:06:33.622541+0000 mon.c (mon.2) 516 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:34.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: audit 2026-03-09T16:06:33.622541+0000 mon.c (mon.2) 516 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:34.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: audit 2026-03-09T16:06:33.629633+0000 mon.a (mon.0) 3150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:34.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: audit 2026-03-09T16:06:33.629633+0000 mon.a (mon.0) 3150 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:34.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: audit 2026-03-09T16:06:34.608555+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:34.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: audit 2026-03-09T16:06:34.608555+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: cluster 2026-03-09T16:06:34.612483+0000 mon.a (mon.0) 3152 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: cluster 2026-03-09T16:06:34.612483+0000 mon.a (mon.0) 3152 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: audit 2026-03-09T16:06:34.616331+0000 mon.c (mon.2) 517 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: audit 2026-03-09T16:06:34.616331+0000 mon.c (mon.2) 517 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: audit 2026-03-09T16:06:34.630744+0000 mon.a (mon.0) 3153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:34 vm01 bash[28152]: audit 2026-03-09T16:06:34.630744+0000 mon.a (mon.0) 3153 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: cluster 2026-03-09T16:06:33.603296+0000 mon.a (mon.0) 3148 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: cluster 2026-03-09T16:06:33.603296+0000 mon.a (mon.0) 3148 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: cluster 2026-03-09T16:06:33.609453+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: cluster 2026-03-09T16:06:33.609453+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: audit 2026-03-09T16:06:33.622541+0000 mon.c (mon.2) 516 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: audit 2026-03-09T16:06:33.622541+0000 mon.c (mon.2) 516 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: audit 2026-03-09T16:06:33.629633+0000 mon.a (mon.0) 3150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: audit 2026-03-09T16:06:33.629633+0000 mon.a (mon.0) 3150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: audit 2026-03-09T16:06:34.608555+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: audit 2026-03-09T16:06:34.608555+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: cluster 2026-03-09T16:06:34.612483+0000 mon.a (mon.0) 3152 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: cluster 2026-03-09T16:06:34.612483+0000 mon.a (mon.0) 3152 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: audit 2026-03-09T16:06:34.616331+0000 mon.c (mon.2) 517 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: audit 2026-03-09T16:06:34.616331+0000 mon.c (mon.2) 517 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: audit 2026-03-09T16:06:34.630744+0000 mon.a (mon.0) 3153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:34.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:34 vm01 bash[20728]: audit 2026-03-09T16:06:34.630744+0000 mon.a (mon.0) 3153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: cluster 2026-03-09T16:06:33.603296+0000 mon.a (mon.0) 3148 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: cluster 2026-03-09T16:06:33.603296+0000 mon.a (mon.0) 3148 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: cluster 2026-03-09T16:06:33.609453+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: cluster 2026-03-09T16:06:33.609453+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: audit 2026-03-09T16:06:33.622541+0000 mon.c (mon.2) 516 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: audit 2026-03-09T16:06:33.622541+0000 mon.c (mon.2) 516 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: audit 2026-03-09T16:06:33.629633+0000 mon.a (mon.0) 3150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: audit 2026-03-09T16:06:33.629633+0000 mon.a (mon.0) 3150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: audit 2026-03-09T16:06:34.608555+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: audit 2026-03-09T16:06:34.608555+0000 mon.a (mon.0) 3151 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: cluster 2026-03-09T16:06:34.612483+0000 mon.a (mon.0) 3152 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: cluster 2026-03-09T16:06:34.612483+0000 mon.a (mon.0) 3152 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: audit 2026-03-09T16:06:34.616331+0000 mon.c (mon.2) 517 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: audit 2026-03-09T16:06:34.616331+0000 mon.c (mon.2) 517 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: audit 2026-03-09T16:06:34.630744+0000 mon.a (mon.0) 3153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:34 vm09 bash[22983]: audit 2026-03-09T16:06:34.630744+0000 mon.a (mon.0) 3153 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: cluster 2026-03-09T16:06:34.819387+0000 mgr.y (mgr.14520) 495 : cluster [DBG] pgmap v830: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:35.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: cluster 2026-03-09T16:06:34.819387+0000 mgr.y (mgr.14520) 495 : cluster [DBG] pgmap v830: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:35.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: audit 2026-03-09T16:06:35.565792+0000 mon.a (mon.0) 3154 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:06:35.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: audit 2026-03-09T16:06:35.565792+0000 mon.a (mon.0) 3154 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:06:35.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: audit 2026-03-09T16:06:35.612697+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:35.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: audit 2026-03-09T16:06:35.612697+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:35.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: audit 2026-03-09T16:06:35.625192+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: audit 2026-03-09T16:06:35.625192+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: cluster 2026-03-09T16:06:35.627510+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T16:06:35.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: cluster 2026-03-09T16:06:35.627510+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: audit 2026-03-09T16:06:35.632487+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:35 vm01 bash[20728]: audit 2026-03-09T16:06:35.632487+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: cluster 2026-03-09T16:06:34.819387+0000 mgr.y (mgr.14520) 495 : cluster [DBG] pgmap v830: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: cluster 2026-03-09T16:06:34.819387+0000 mgr.y (mgr.14520) 495 : cluster [DBG] pgmap v830: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: audit 2026-03-09T16:06:35.565792+0000 mon.a (mon.0) 3154 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: audit 2026-03-09T16:06:35.565792+0000 mon.a (mon.0) 3154 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: audit 2026-03-09T16:06:35.612697+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: audit 2026-03-09T16:06:35.612697+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: audit 2026-03-09T16:06:35.625192+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: audit 2026-03-09T16:06:35.625192+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: cluster 2026-03-09T16:06:35.627510+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: cluster 2026-03-09T16:06:35.627510+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: audit 2026-03-09T16:06:35.632487+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:35.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:35 vm01 bash[28152]: audit 2026-03-09T16:06:35.632487+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: cluster 2026-03-09T16:06:34.819387+0000 mgr.y (mgr.14520) 495 : cluster [DBG] pgmap v830: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: cluster 2026-03-09T16:06:34.819387+0000 mgr.y (mgr.14520) 495 : cluster [DBG] pgmap v830: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: audit 2026-03-09T16:06:35.565792+0000 mon.a (mon.0) 3154 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: audit 2026-03-09T16:06:35.565792+0000 mon.a (mon.0) 3154 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: audit 2026-03-09T16:06:35.612697+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: audit 2026-03-09T16:06:35.612697+0000 mon.a (mon.0) 3155 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: audit 2026-03-09T16:06:35.625192+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: audit 2026-03-09T16:06:35.625192+0000 mon.c (mon.2) 518 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: cluster 2026-03-09T16:06:35.627510+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: cluster 2026-03-09T16:06:35.627510+0000 mon.a (mon.0) 3156 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: audit 2026-03-09T16:06:35.632487+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:36.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:35 vm09 bash[22983]: audit 2026-03-09T16:06:35.632487+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:37.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:06:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: audit 2026-03-09T16:06:36.616436+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: audit 2026-03-09T16:06:36.616436+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: cluster 2026-03-09T16:06:36.620005+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: cluster 2026-03-09T16:06:36.620005+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: audit 2026-03-09T16:06:36.623818+0000 mon.c (mon.2) 519 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: audit 2026-03-09T16:06:36.623818+0000 mon.c (mon.2) 519 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: audit 2026-03-09T16:06:36.624408+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: audit 2026-03-09T16:06:36.624408+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: audit 2026-03-09T16:06:36.703126+0000 mgr.y (mgr.14520) 496 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: audit 2026-03-09T16:06:36.703126+0000 mgr.y (mgr.14520) 496 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: cluster 2026-03-09T16:06:36.819792+0000 mgr.y (mgr.14520) 497 : cluster [DBG] pgmap v833: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:37.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:37 vm09 bash[22983]: cluster 2026-03-09T16:06:36.819792+0000 mgr.y (mgr.14520) 497 : cluster [DBG] pgmap v833: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:37.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: audit 2026-03-09T16:06:36.616436+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: audit 2026-03-09T16:06:36.616436+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: cluster 2026-03-09T16:06:36.620005+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: cluster 2026-03-09T16:06:36.620005+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: audit 2026-03-09T16:06:36.623818+0000 mon.c (mon.2) 519 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: audit 2026-03-09T16:06:36.623818+0000 mon.c (mon.2) 519 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: audit 2026-03-09T16:06:36.624408+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: audit 2026-03-09T16:06:36.624408+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: audit 2026-03-09T16:06:36.703126+0000 mgr.y (mgr.14520) 496 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: audit 2026-03-09T16:06:36.703126+0000 mgr.y (mgr.14520) 496 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: cluster 2026-03-09T16:06:36.819792+0000 mgr.y (mgr.14520) 497 : cluster [DBG] pgmap v833: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:37 vm01 bash[28152]: cluster 2026-03-09T16:06:36.819792+0000 mgr.y (mgr.14520) 497 : cluster [DBG] pgmap v833: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: audit 2026-03-09T16:06:36.616436+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: audit 2026-03-09T16:06:36.616436+0000 mon.a (mon.0) 3158 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-109", "overlaypool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: cluster 2026-03-09T16:06:36.620005+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: cluster 2026-03-09T16:06:36.620005+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: audit 2026-03-09T16:06:36.623818+0000 mon.c (mon.2) 519 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: audit 2026-03-09T16:06:36.623818+0000 mon.c (mon.2) 519 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: audit 2026-03-09T16:06:36.624408+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: audit 2026-03-09T16:06:36.624408+0000 mon.a (mon.0) 3160 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: audit 2026-03-09T16:06:36.703126+0000 mgr.y (mgr.14520) 496 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: audit 2026-03-09T16:06:36.703126+0000 mgr.y (mgr.14520) 496 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: cluster 2026-03-09T16:06:36.819792+0000 mgr.y (mgr.14520) 497 : cluster [DBG] pgmap v833: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:37.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:37 vm01 bash[20728]: cluster 2026-03-09T16:06:36.819792+0000 mgr.y (mgr.14520) 497 : cluster [DBG] pgmap v833: 276 pgs: 32 creating+peering, 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:38 vm09 bash[22983]: cluster 2026-03-09T16:06:37.616469+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:38 vm09 bash[22983]: cluster 2026-03-09T16:06:37.616469+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:38 vm09 bash[22983]: audit 2026-03-09T16:06:37.619735+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:38 vm09 bash[22983]: audit 2026-03-09T16:06:37.619735+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:38 vm09 bash[22983]: cluster 2026-03-09T16:06:37.622817+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T16:06:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:38 vm09 bash[22983]: cluster 2026-03-09T16:06:37.622817+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T16:06:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:38 vm09 bash[22983]: audit 2026-03-09T16:06:37.626547+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:38 vm09 bash[22983]: audit 2026-03-09T16:06:37.626547+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:38 vm09 bash[22983]: audit 2026-03-09T16:06:37.638081+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:39.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:38 vm09 bash[22983]: audit 2026-03-09T16:06:37.638081+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:38 vm01 bash[28152]: cluster 2026-03-09T16:06:37.616469+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:38 vm01 bash[28152]: cluster 2026-03-09T16:06:37.616469+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:38 vm01 bash[28152]: audit 2026-03-09T16:06:37.619735+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:38 vm01 bash[28152]: audit 2026-03-09T16:06:37.619735+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:38 vm01 bash[28152]: cluster 2026-03-09T16:06:37.622817+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:38 vm01 bash[28152]: cluster 2026-03-09T16:06:37.622817+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:38 vm01 bash[28152]: audit 2026-03-09T16:06:37.626547+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:38 vm01 bash[28152]: audit 2026-03-09T16:06:37.626547+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:38 vm01 bash[28152]: audit 2026-03-09T16:06:37.638081+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:38 vm01 bash[28152]: audit 2026-03-09T16:06:37.638081+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:38 vm01 bash[20728]: cluster 2026-03-09T16:06:37.616469+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:38 vm01 bash[20728]: cluster 2026-03-09T16:06:37.616469+0000 mon.a (mon.0) 3161 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:06:39.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:38 vm01 bash[20728]: audit 2026-03-09T16:06:37.619735+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:39.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:38 vm01 bash[20728]: audit 2026-03-09T16:06:37.619735+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-109-cache", "mode": "writeback"}]': finished 2026-03-09T16:06:39.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:38 vm01 bash[20728]: cluster 2026-03-09T16:06:37.622817+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T16:06:39.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:38 vm01 bash[20728]: cluster 2026-03-09T16:06:37.622817+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-09T16:06:39.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:38 vm01 bash[20728]: audit 2026-03-09T16:06:37.626547+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:39.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:38 vm01 bash[20728]: audit 2026-03-09T16:06:37.626547+0000 mon.c (mon.2) 520 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:39.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:38 vm01 bash[20728]: audit 2026-03-09T16:06:37.638081+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:39.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:38 vm01 bash[20728]: audit 2026-03-09T16:06:37.638081+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:06:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:39 vm09 bash[22983]: audit 2026-03-09T16:06:38.748214+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:39 vm09 bash[22983]: audit 2026-03-09T16:06:38.748214+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:39 vm09 bash[22983]: cluster 2026-03-09T16:06:38.751529+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T16:06:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:39 vm09 bash[22983]: cluster 2026-03-09T16:06:38.751529+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T16:06:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:39 vm09 bash[22983]: audit 2026-03-09T16:06:38.760627+0000 mon.c (mon.2) 521 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:39 vm09 bash[22983]: audit 2026-03-09T16:06:38.760627+0000 mon.c (mon.2) 521 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:39 vm09 bash[22983]: audit 2026-03-09T16:06:38.780433+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:39 vm09 bash[22983]: audit 2026-03-09T16:06:38.780433+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:39 vm09 bash[22983]: cluster 2026-03-09T16:06:38.820410+0000 mgr.y (mgr.14520) 498 : cluster [DBG] pgmap v836: 276 pgs: 18 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:40.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:39 vm09 bash[22983]: cluster 2026-03-09T16:06:38.820410+0000 mgr.y (mgr.14520) 498 : cluster [DBG] pgmap v836: 276 pgs: 18 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:40.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:39 vm01 bash[28152]: audit 2026-03-09T16:06:38.748214+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:40.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:39 vm01 bash[28152]: audit 2026-03-09T16:06:38.748214+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:40.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:39 vm01 bash[28152]: cluster 2026-03-09T16:06:38.751529+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T16:06:40.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:39 vm01 bash[28152]: cluster 2026-03-09T16:06:38.751529+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:39 vm01 bash[28152]: audit 2026-03-09T16:06:38.760627+0000 mon.c (mon.2) 521 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:39 vm01 bash[28152]: audit 2026-03-09T16:06:38.760627+0000 mon.c (mon.2) 521 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:39 vm01 bash[28152]: audit 2026-03-09T16:06:38.780433+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:39 vm01 bash[28152]: audit 2026-03-09T16:06:38.780433+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:39 vm01 bash[28152]: cluster 2026-03-09T16:06:38.820410+0000 mgr.y (mgr.14520) 498 : cluster [DBG] pgmap v836: 276 pgs: 18 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:39 vm01 bash[28152]: cluster 2026-03-09T16:06:38.820410+0000 mgr.y (mgr.14520) 498 : cluster [DBG] pgmap v836: 276 pgs: 18 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:39 vm01 bash[20728]: audit 2026-03-09T16:06:38.748214+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:39 vm01 bash[20728]: audit 2026-03-09T16:06:38.748214+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:39 vm01 bash[20728]: cluster 2026-03-09T16:06:38.751529+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:39 vm01 bash[20728]: cluster 2026-03-09T16:06:38.751529+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:39 vm01 bash[20728]: audit 2026-03-09T16:06:38.760627+0000 mon.c (mon.2) 521 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:39 vm01 bash[20728]: audit 2026-03-09T16:06:38.760627+0000 mon.c (mon.2) 521 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:39 vm01 bash[20728]: audit 2026-03-09T16:06:38.780433+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:39 vm01 bash[20728]: audit 2026-03-09T16:06:38.780433+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:39 vm01 bash[20728]: cluster 2026-03-09T16:06:38.820410+0000 mgr.y (mgr.14520) 498 : cluster [DBG] pgmap v836: 276 pgs: 18 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:40.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:39 vm01 bash[20728]: cluster 2026-03-09T16:06:38.820410+0000 mgr.y (mgr.14520) 498 : cluster [DBG] pgmap v836: 276 pgs: 18 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: audit 2026-03-09T16:06:39.768274+0000 mon.a (mon.0) 3168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: audit 2026-03-09T16:06:39.768274+0000 mon.a (mon.0) 3168 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: cluster 2026-03-09T16:06:39.772422+0000 mon.a (mon.0) 3169 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: cluster 2026-03-09T16:06:39.772422+0000 mon.a (mon.0) 3169 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: audit 2026-03-09T16:06:39.780787+0000 mon.c (mon.2) 522 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: audit 2026-03-09T16:06:39.780787+0000 mon.c (mon.2) 522 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: audit 2026-03-09T16:06:39.781741+0000 mon.a (mon.0) 3170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: audit 2026-03-09T16:06:39.781741+0000 mon.a (mon.0) 3170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: cluster 2026-03-09T16:06:40.768405+0000 mon.a (mon.0) 3171 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: cluster 2026-03-09T16:06:40.768405+0000 mon.a (mon.0) 3171 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: audit 2026-03-09T16:06:40.772462+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: audit 2026-03-09T16:06:40.772462+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: cluster 2026-03-09T16:06:40.775397+0000 mon.a (mon.0) 3173 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T16:06:41.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:40 vm09 bash[22983]: cluster 2026-03-09T16:06:40.775397+0000 mon.a (mon.0) 3173 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T16:06:41.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: audit 2026-03-09T16:06:39.768274+0000 mon.a (mon.0) 3168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: audit 2026-03-09T16:06:39.768274+0000 mon.a (mon.0) 3168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: cluster 2026-03-09T16:06:39.772422+0000 mon.a (mon.0) 3169 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: cluster 2026-03-09T16:06:39.772422+0000 mon.a (mon.0) 3169 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: audit 2026-03-09T16:06:39.780787+0000 mon.c (mon.2) 522 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: audit 2026-03-09T16:06:39.780787+0000 mon.c (mon.2) 522 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: audit 2026-03-09T16:06:39.781741+0000 mon.a (mon.0) 3170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: audit 2026-03-09T16:06:39.781741+0000 mon.a (mon.0) 3170 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: cluster 2026-03-09T16:06:40.768405+0000 mon.a (mon.0) 3171 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: cluster 2026-03-09T16:06:40.768405+0000 mon.a (mon.0) 3171 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: audit 2026-03-09T16:06:40.772462+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: audit 2026-03-09T16:06:40.772462+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: cluster 2026-03-09T16:06:40.775397+0000 mon.a (mon.0) 3173 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:40 vm01 bash[20728]: cluster 2026-03-09T16:06:40.775397+0000 mon.a (mon.0) 3173 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: audit 2026-03-09T16:06:39.768274+0000 mon.a (mon.0) 3168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: audit 2026-03-09T16:06:39.768274+0000 mon.a (mon.0) 3168 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: cluster 2026-03-09T16:06:39.772422+0000 mon.a (mon.0) 3169 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: cluster 2026-03-09T16:06:39.772422+0000 mon.a (mon.0) 3169 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: audit 2026-03-09T16:06:39.780787+0000 mon.c (mon.2) 522 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: audit 2026-03-09T16:06:39.780787+0000 mon.c (mon.2) 522 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: audit 2026-03-09T16:06:39.781741+0000 mon.a (mon.0) 3170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: audit 2026-03-09T16:06:39.781741+0000 mon.a (mon.0) 3170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: cluster 2026-03-09T16:06:40.768405+0000 mon.a (mon.0) 3171 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: cluster 2026-03-09T16:06:40.768405+0000 mon.a (mon.0) 3171 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: audit 2026-03-09T16:06:40.772462+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: audit 2026-03-09T16:06:40.772462+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: cluster 2026-03-09T16:06:40.775397+0000 mon.a (mon.0) 3173 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T16:06:41.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:40 vm01 bash[28152]: cluster 2026-03-09T16:06:40.775397+0000 mon.a (mon.0) 3173 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-09T16:06:42.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:40.787531+0000 mon.c (mon.2) 523 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:40.787531+0000 mon.c (mon.2) 523 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:40.788582+0000 mon.a (mon.0) 3174 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:40.788582+0000 mon.a (mon.0) 3174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: cluster 2026-03-09T16:06:40.820821+0000 mgr.y (mgr.14520) 499 : cluster [DBG] pgmap v839: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: cluster 2026-03-09T16:06:40.820821+0000 mgr.y (mgr.14520) 499 : cluster [DBG] pgmap v839: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:40.907750+0000 mon.a (mon.0) 3175 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:40.907750+0000 mon.a (mon.0) 3175 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:40.935575+0000 mon.a (mon.0) 3176 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:40.935575+0000 mon.a (mon.0) 3176 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.145889+0000 mon.a (mon.0) 3177 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.145889+0000 mon.a (mon.0) 3177 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.295317+0000 mon.a (mon.0) 3178 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.295317+0000 mon.a (mon.0) 3178 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.632479+0000 mon.a (mon.0) 3179 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.632479+0000 mon.a (mon.0) 3179 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:06:42.133 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.633138+0000 mon.a (mon.0) 3180 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.633138+0000 mon.a (mon.0) 3180 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.649432+0000 mon.a (mon.0) 3181 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.649432+0000 mon.a (mon.0) 3181 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.775620+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: audit 2026-03-09T16:06:41.775620+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: cluster 2026-03-09T16:06:41.778645+0000 mon.a (mon.0) 3183 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T16:06:42.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:41 vm09 bash[22983]: cluster 2026-03-09T16:06:41.778645+0000 mon.a (mon.0) 3183 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:40.787531+0000 mon.c (mon.2) 523 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:40.787531+0000 mon.c (mon.2) 523 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:40.788582+0000 mon.a (mon.0) 3174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:40.788582+0000 mon.a (mon.0) 3174 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: cluster 2026-03-09T16:06:40.820821+0000 mgr.y (mgr.14520) 499 : cluster [DBG] pgmap v839: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: cluster 2026-03-09T16:06:40.820821+0000 mgr.y (mgr.14520) 499 : cluster [DBG] pgmap v839: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:40.907750+0000 mon.a (mon.0) 3175 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:40.907750+0000 mon.a (mon.0) 3175 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:40.935575+0000 mon.a (mon.0) 3176 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:40.935575+0000 mon.a (mon.0) 3176 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:41.145889+0000 mon.a (mon.0) 3177 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:41.145889+0000 mon.a (mon.0) 3177 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:41.295317+0000 mon.a (mon.0) 3178 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:41.295317+0000 mon.a (mon.0) 3178 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:41.632479+0000 mon.a (mon.0) 3179 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:41.632479+0000 mon.a (mon.0) 3179 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:41.633138+0000 mon.a (mon.0) 3180 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 
2026-03-09T16:06:41.633138+0000 mon.a (mon.0) 3180 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:41.649432+0000 mon.a (mon.0) 3181 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:41.649432+0000 mon.a (mon.0) 3181 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:41.775620+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: audit 2026-03-09T16:06:41.775620+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: cluster 2026-03-09T16:06:41.778645+0000 mon.a (mon.0) 3183 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T16:06:42.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:41 vm01 bash[28152]: cluster 2026-03-09T16:06:41.778645+0000 mon.a (mon.0) 3183 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:40.787531+0000 mon.c (mon.2) 523 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:40.787531+0000 mon.c (mon.2) 523 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:40.788582+0000 mon.a (mon.0) 3174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:40.788582+0000 mon.a (mon.0) 3174 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: cluster 2026-03-09T16:06:40.820821+0000 mgr.y (mgr.14520) 499 : cluster [DBG] pgmap v839: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: cluster 2026-03-09T16:06:40.820821+0000 mgr.y (mgr.14520) 499 : cluster [DBG] pgmap v839: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:40.907750+0000 mon.a (mon.0) 3175 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:40.907750+0000 mon.a (mon.0) 3175 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:40.935575+0000 mon.a (mon.0) 3176 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:40.935575+0000 mon.a (mon.0) 3176 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:41.145889+0000 mon.a (mon.0) 3177 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:41.145889+0000 mon.a (mon.0) 3177 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:41.295317+0000 mon.a (mon.0) 3178 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:41.295317+0000 mon.a (mon.0) 3178 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:41.632479+0000 mon.a (mon.0) 3179 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:41.632479+0000 mon.a (mon.0) 3179 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:41.633138+0000 mon.a (mon.0) 3180 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 
2026-03-09T16:06:41.633138+0000 mon.a (mon.0) 3180 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:41.649432+0000 mon.a (mon.0) 3181 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:41.649432+0000 mon.a (mon.0) 3181 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:41.775620+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: audit 2026-03-09T16:06:41.775620+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: cluster 2026-03-09T16:06:41.778645+0000 mon.a (mon.0) 3183 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T16:06:42.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:41 vm01 bash[20728]: cluster 2026-03-09T16:06:41.778645+0000 mon.a (mon.0) 3183 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-09T16:06:43.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:42 vm09 bash[22983]: audit 2026-03-09T16:06:41.831186+0000 mon.c (mon.2) 524 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:42 vm09 bash[22983]: audit 2026-03-09T16:06:41.831186+0000 mon.c (mon.2) 524 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:42 vm09 bash[22983]: audit 2026-03-09T16:06:41.831646+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:43.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:42 vm09 bash[22983]: audit 2026-03-09T16:06:41.831646+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:42 vm01 bash[28152]: audit 2026-03-09T16:06:41.831186+0000 mon.c (mon.2) 524 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:42 vm01 bash[28152]: audit 2026-03-09T16:06:41.831186+0000 mon.c (mon.2) 524 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:42 vm01 bash[28152]: audit 2026-03-09T16:06:41.831646+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:42 vm01 bash[28152]: audit 2026-03-09T16:06:41.831646+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:43.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:06:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:06:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:06:43.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:42 vm01 bash[20728]: audit 2026-03-09T16:06:41.831186+0000 mon.c (mon.2) 524 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:43.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:42 vm01 bash[20728]: audit 2026-03-09T16:06:41.831186+0000 mon.c (mon.2) 524 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:43.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:42 vm01 bash[20728]: audit 2026-03-09T16:06:41.831646+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:43.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:42 vm01 bash[20728]: audit 2026-03-09T16:06:41.831646+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:44.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: audit 2026-03-09T16:06:42.819042+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: audit 2026-03-09T16:06:42.819042+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: cluster 2026-03-09T16:06:42.822641+0000 mgr.y (mgr.14520) 500 : cluster [DBG] pgmap v842: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: cluster 2026-03-09T16:06:42.822641+0000 mgr.y (mgr.14520) 500 : cluster [DBG] pgmap v842: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: cluster 2026-03-09T16:06:42.826539+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T16:06:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: cluster 2026-03-09T16:06:42.826539+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T16:06:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: audit 2026-03-09T16:06:42.830605+0000 mon.c (mon.2) 525 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: audit 2026-03-09T16:06:42.830605+0000 mon.c (mon.2) 525 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: audit 2026-03-09T16:06:42.836395+0000 mon.a (mon.0) 3187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: audit 2026-03-09T16:06:42.836395+0000 mon.a (mon.0) 3187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: audit 2026-03-09T16:06:42.979595+0000 mon.a (mon.0) 3188 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]: dispatch 2026-03-09T16:06:44.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:43 vm09 bash[22983]: audit 2026-03-09T16:06:42.979595+0000 mon.a (mon.0) 3188 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]: dispatch 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: audit 2026-03-09T16:06:42.819042+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: audit 2026-03-09T16:06:42.819042+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: cluster 2026-03-09T16:06:42.822641+0000 mgr.y (mgr.14520) 500 : cluster [DBG] pgmap v842: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: cluster 2026-03-09T16:06:42.822641+0000 mgr.y (mgr.14520) 500 : cluster [DBG] pgmap v842: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: cluster 2026-03-09T16:06:42.826539+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: cluster 2026-03-09T16:06:42.826539+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: audit 2026-03-09T16:06:42.830605+0000 mon.c (mon.2) 525 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: audit 2026-03-09T16:06:42.830605+0000 mon.c (mon.2) 525 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: audit 2026-03-09T16:06:42.836395+0000 mon.a (mon.0) 3187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: audit 2026-03-09T16:06:42.836395+0000 mon.a (mon.0) 3187 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: audit 2026-03-09T16:06:42.979595+0000 mon.a (mon.0) 3188 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]: dispatch 2026-03-09T16:06:44.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:43 vm01 bash[28152]: audit 2026-03-09T16:06:42.979595+0000 mon.a (mon.0) 3188 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]: dispatch 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: audit 2026-03-09T16:06:42.819042+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: audit 2026-03-09T16:06:42.819042+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: cluster 2026-03-09T16:06:42.822641+0000 mgr.y (mgr.14520) 500 : cluster [DBG] pgmap v842: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: cluster 2026-03-09T16:06:42.822641+0000 mgr.y (mgr.14520) 500 : cluster [DBG] pgmap v842: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: cluster 2026-03-09T16:06:42.826539+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: cluster 2026-03-09T16:06:42.826539+0000 mon.a (mon.0) 3186 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: audit 2026-03-09T16:06:42.830605+0000 mon.c (mon.2) 525 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: audit 2026-03-09T16:06:42.830605+0000 mon.c (mon.2) 525 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: audit 2026-03-09T16:06:42.836395+0000 mon.a (mon.0) 3187 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: audit 2026-03-09T16:06:42.836395+0000 mon.a (mon.0) 3187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]: dispatch 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: audit 2026-03-09T16:06:42.979595+0000 mon.a (mon.0) 3188 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]: dispatch 2026-03-09T16:06:44.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:43 vm01 bash[20728]: audit 2026-03-09T16:06:42.979595+0000 mon.a (mon.0) 3188 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]: dispatch 2026-03-09T16:06:45.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:44 vm09 bash[22983]: audit 2026-03-09T16:06:43.832172+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:44 vm09 bash[22983]: audit 2026-03-09T16:06:43.832172+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:44 vm09 bash[22983]: audit 2026-03-09T16:06:43.832269+0000 mon.a (mon.0) 3190 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]': finished 2026-03-09T16:06:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:44 vm09 bash[22983]: audit 2026-03-09T16:06:43.832269+0000 mon.a (mon.0) 3190 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]': finished 2026-03-09T16:06:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:44 vm09 bash[22983]: cluster 2026-03-09T16:06:43.865576+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T16:06:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:44 vm09 bash[22983]: cluster 2026-03-09T16:06:43.865576+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T16:06:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:44 vm09 bash[22983]: audit 2026-03-09T16:06:44.382081+0000 mon.a (mon.0) 3192 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:45.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:44 vm09 bash[22983]: audit 2026-03-09T16:06:44.382081+0000 mon.a (mon.0) 3192 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:45.176 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:44 vm01 bash[28152]: audit 2026-03-09T16:06:43.832172+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:44 vm01 bash[28152]: audit 2026-03-09T16:06:43.832172+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:44 vm01 bash[28152]: audit 2026-03-09T16:06:43.832269+0000 mon.a (mon.0) 3190 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]': finished 2026-03-09T16:06:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:44 vm01 bash[28152]: audit 2026-03-09T16:06:43.832269+0000 mon.a (mon.0) 3190 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]': finished 2026-03-09T16:06:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:44 vm01 bash[28152]: cluster 2026-03-09T16:06:43.865576+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T16:06:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:44 vm01 bash[28152]: cluster 2026-03-09T16:06:43.865576+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T16:06:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:44 vm01 bash[28152]: audit 2026-03-09T16:06:44.382081+0000 mon.a (mon.0) 3192 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:45.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:44 vm01 bash[28152]: audit 2026-03-09T16:06:44.382081+0000 mon.a (mon.0) 3192 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:44 vm01 bash[20728]: audit 2026-03-09T16:06:43.832172+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:44 vm01 bash[20728]: audit 2026-03-09T16:06:43.832172+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-109", "tierpool": "test-rados-api-vm01-59821-109-cache"}]': finished 2026-03-09T16:06:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:44 vm01 bash[20728]: audit 2026-03-09T16:06:43.832269+0000 mon.a (mon.0) 3190 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]': finished 2026-03-09T16:06:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:44 vm01 bash[20728]: audit 2026-03-09T16:06:43.832269+0000 mon.a (mon.0) 3190 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.15", "id": [6, 0]}]': finished 2026-03-09T16:06:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:44 vm01 bash[20728]: cluster 2026-03-09T16:06:43.865576+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T16:06:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:44 vm01 bash[20728]: cluster 2026-03-09T16:06:43.865576+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-09T16:06:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:44 vm01 bash[20728]: audit 2026-03-09T16:06:44.382081+0000 mon.a (mon.0) 3192 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:45.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:44 vm01 bash[20728]: audit 2026-03-09T16:06:44.382081+0000 mon.a (mon.0) 3192 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:06:46.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: cluster 2026-03-09T16:06:44.823279+0000 mgr.y (mgr.14520) 501 : cluster [DBG] pgmap v844: 276 pgs: 1 peering, 275 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 2 op/s 2026-03-09T16:06:46.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: cluster 2026-03-09T16:06:44.823279+0000 mgr.y (mgr.14520) 501 : cluster [DBG] pgmap v844: 276 pgs: 1 peering, 275 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 2 op/s 2026-03-09T16:06:46.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: cluster 2026-03-09T16:06:44.833221+0000 mon.a (mon.0) 3193 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:46.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: cluster 2026-03-09T16:06:44.833221+0000 mon.a (mon.0) 3193 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: cluster 2026-03-09T16:06:44.840146+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T16:06:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: cluster 2026-03-09T16:06:44.840146+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T16:06:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: cluster 2026-03-09T16:06:45.842612+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap 
e546: 8 total, 8 up, 8 in 2026-03-09T16:06:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: cluster 2026-03-09T16:06:45.842612+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-09T16:06:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: audit 2026-03-09T16:06:45.848840+0000 mon.c (mon.2) 526 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: audit 2026-03-09T16:06:45.848840+0000 mon.c (mon.2) 526 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: audit 2026-03-09T16:06:45.849356+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:46.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:45 vm09 bash[22983]: audit 2026-03-09T16:06:45.849356+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: cluster 2026-03-09T16:06:44.823279+0000 mgr.y (mgr.14520) 501 : cluster [DBG] pgmap v844: 276 pgs: 1 peering, 275 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 2 op/s 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: cluster 2026-03-09T16:06:44.823279+0000 mgr.y (mgr.14520) 501 : cluster [DBG] pgmap v844: 276 pgs: 1 peering, 275 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 2 op/s 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: cluster 2026-03-09T16:06:44.833221+0000 mon.a (mon.0) 3193 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: cluster 2026-03-09T16:06:44.833221+0000 mon.a (mon.0) 3193 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: cluster 2026-03-09T16:06:44.840146+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: cluster 2026-03-09T16:06:44.840146+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: cluster 2026-03-09T16:06:45.842612+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: cluster 2026-03-09T16:06:45.842612+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-09T16:06:46.176 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: audit 2026-03-09T16:06:45.848840+0000 mon.c (mon.2) 526 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: audit 2026-03-09T16:06:45.848840+0000 mon.c (mon.2) 526 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: audit 2026-03-09T16:06:45.849356+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:45 vm01 bash[28152]: audit 2026-03-09T16:06:45.849356+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: cluster 2026-03-09T16:06:44.823279+0000 mgr.y (mgr.14520) 501 : cluster [DBG] pgmap v844: 276 pgs: 1 peering, 275 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 2 op/s 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: cluster 2026-03-09T16:06:44.823279+0000 mgr.y (mgr.14520) 501 : cluster [DBG] pgmap v844: 276 pgs: 1 peering, 275 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 2 op/s 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: cluster 2026-03-09T16:06:44.833221+0000 mon.a (mon.0) 3193 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: cluster 2026-03-09T16:06:44.833221+0000 mon.a (mon.0) 3193 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: cluster 2026-03-09T16:06:44.840146+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: cluster 2026-03-09T16:06:44.840146+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: cluster 2026-03-09T16:06:45.842612+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-09T16:06:46.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: cluster 2026-03-09T16:06:45.842612+0000 mon.a (mon.0) 3195 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-09T16:06:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: audit 2026-03-09T16:06:45.848840+0000 mon.c (mon.2) 526 : audit [INF] from='client.? 
192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: audit 2026-03-09T16:06:45.848840+0000 mon.c (mon.2) 526 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: audit 2026-03-09T16:06:45.849356+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:46.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:45 vm01 bash[20728]: audit 2026-03-09T16:06:45.849356+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:47.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:06:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:06:48.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: audit 2026-03-09T16:06:46.706857+0000 mgr.y (mgr.14520) 502 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:48.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: audit 2026-03-09T16:06:46.706857+0000 mgr.y (mgr.14520) 502 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:48.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: cluster 2026-03-09T16:06:46.823672+0000 mgr.y (mgr.14520) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:48.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: cluster 2026-03-09T16:06:46.823672+0000 mgr.y (mgr.14520) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:48.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: audit 2026-03-09T16:06:46.865277+0000 mon.a (mon.0) 3197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:48.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: audit 2026-03-09T16:06:46.865277+0000 mon.a (mon.0) 3197 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:48.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: cluster 2026-03-09T16:06:46.868502+0000 mon.a (mon.0) 3198 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T16:06:48.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: cluster 2026-03-09T16:06:46.868502+0000 mon.a (mon.0) 3198 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T16:06:48.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: audit 2026-03-09T16:06:46.869994+0000 mon.c (mon.2) 527 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: audit 2026-03-09T16:06:46.869994+0000 mon.c (mon.2) 527 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: audit 2026-03-09T16:06:46.870360+0000 mon.a (mon.0) 3199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:48.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:47 vm09 bash[22983]: audit 2026-03-09T16:06:46.870360+0000 mon.a (mon.0) 3199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: audit 2026-03-09T16:06:46.706857+0000 mgr.y (mgr.14520) 502 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: audit 2026-03-09T16:06:46.706857+0000 mgr.y (mgr.14520) 502 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: cluster 2026-03-09T16:06:46.823672+0000 mgr.y (mgr.14520) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: cluster 2026-03-09T16:06:46.823672+0000 mgr.y (mgr.14520) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: audit 2026-03-09T16:06:46.865277+0000 mon.a (mon.0) 3197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: audit 2026-03-09T16:06:46.865277+0000 mon.a (mon.0) 3197 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: cluster 2026-03-09T16:06:46.868502+0000 mon.a (mon.0) 3198 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: cluster 2026-03-09T16:06:46.868502+0000 mon.a (mon.0) 3198 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: audit 2026-03-09T16:06:46.869994+0000 mon.c (mon.2) 527 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: audit 2026-03-09T16:06:46.869994+0000 mon.c (mon.2) 527 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: audit 2026-03-09T16:06:46.870360+0000 mon.a (mon.0) 3199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:47 vm01 bash[28152]: audit 2026-03-09T16:06:46.870360+0000 mon.a (mon.0) 3199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: audit 2026-03-09T16:06:46.706857+0000 mgr.y (mgr.14520) 502 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: audit 2026-03-09T16:06:46.706857+0000 mgr.y (mgr.14520) 502 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: cluster 2026-03-09T16:06:46.823672+0000 mgr.y (mgr.14520) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: cluster 2026-03-09T16:06:46.823672+0000 mgr.y (mgr.14520) 503 : cluster [DBG] pgmap v847: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: audit 2026-03-09T16:06:46.865277+0000 mon.a (mon.0) 3197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: audit 2026-03-09T16:06:46.865277+0000 mon.a (mon.0) 3197 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: cluster 2026-03-09T16:06:46.868502+0000 mon.a (mon.0) 3198 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: cluster 2026-03-09T16:06:46.868502+0000 mon.a (mon.0) 3198 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: audit 2026-03-09T16:06:46.869994+0000 mon.c (mon.2) 527 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: audit 2026-03-09T16:06:46.869994+0000 mon.c (mon.2) 527 : audit [INF] from='client.? 192.168.123.101:0/1852972263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: audit 2026-03-09T16:06:46.870360+0000 mon.a (mon.0) 3199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:48.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:47 vm01 bash[20728]: audit 2026-03-09T16:06:46.870360+0000 mon.a (mon.0) 3199 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]: dispatch 2026-03-09T16:06:49.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:48 vm09 bash[22983]: audit 2026-03-09T16:06:47.868578+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:49.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:48 vm09 bash[22983]: audit 2026-03-09T16:06:47.868578+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:49.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:48 vm09 bash[22983]: cluster 2026-03-09T16:06:47.876340+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T16:06:49.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:48 vm09 bash[22983]: cluster 2026-03-09T16:06:47.876340+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T16:06:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:48 vm01 bash[28152]: audit 2026-03-09T16:06:47.868578+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:48 vm01 bash[28152]: audit 2026-03-09T16:06:47.868578+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:48 vm01 bash[28152]: cluster 2026-03-09T16:06:47.876340+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T16:06:49.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:48 vm01 bash[28152]: cluster 2026-03-09T16:06:47.876340+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T16:06:49.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:48 vm01 bash[20728]: audit 2026-03-09T16:06:47.868578+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:49.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:48 vm01 bash[20728]: audit 2026-03-09T16:06:47.868578+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-109"}]': finished 2026-03-09T16:06:49.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:48 vm01 bash[20728]: cluster 2026-03-09T16:06:47.876340+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T16:06:49.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:48 vm01 bash[20728]: cluster 2026-03-09T16:06:47.876340+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:49 vm01 bash[28152]: cluster 2026-03-09T16:06:48.824099+0000 mgr.y (mgr.14520) 504 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:49 vm01 bash[28152]: cluster 2026-03-09T16:06:48.824099+0000 mgr.y (mgr.14520) 504 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:49 vm01 bash[28152]: audit 2026-03-09T16:06:48.897804+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:49 vm01 bash[28152]: audit 2026-03-09T16:06:48.897804+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:49 vm01 bash[28152]: cluster 2026-03-09T16:06:48.901718+0000 mon.a (mon.0) 3202 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:49 vm01 bash[28152]: cluster 2026-03-09T16:06:48.901718+0000 mon.a (mon.0) 3202 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:49 vm01 bash[28152]: audit 2026-03-09T16:06:48.902417+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:49 vm01 bash[28152]: audit 2026-03-09T16:06:48.902417+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:49 vm01 bash[20728]: cluster 2026-03-09T16:06:48.824099+0000 mgr.y (mgr.14520) 504 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:49 vm01 bash[20728]: cluster 2026-03-09T16:06:48.824099+0000 mgr.y (mgr.14520) 504 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:49 vm01 bash[20728]: audit 2026-03-09T16:06:48.897804+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:49 vm01 bash[20728]: audit 2026-03-09T16:06:48.897804+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:49 vm01 bash[20728]: cluster 2026-03-09T16:06:48.901718+0000 mon.a (mon.0) 3202 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:49 vm01 bash[20728]: cluster 2026-03-09T16:06:48.901718+0000 mon.a (mon.0) 3202 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:49 vm01 bash[20728]: audit 2026-03-09T16:06:48.902417+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:50.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:49 vm01 bash[20728]: audit 2026-03-09T16:06:48.902417+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:49 vm09 bash[22983]: cluster 2026-03-09T16:06:48.824099+0000 mgr.y (mgr.14520) 504 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:49 vm09 bash[22983]: cluster 2026-03-09T16:06:48.824099+0000 mgr.y (mgr.14520) 504 : cluster [DBG] pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:06:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:49 vm09 bash[22983]: audit 2026-03-09T16:06:48.897804+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 
192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:49 vm09 bash[22983]: audit 2026-03-09T16:06:48.897804+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:49 vm09 bash[22983]: cluster 2026-03-09T16:06:48.901718+0000 mon.a (mon.0) 3202 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T16:06:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:49 vm09 bash[22983]: cluster 2026-03-09T16:06:48.901718+0000 mon.a (mon.0) 3202 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-09T16:06:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:49 vm09 bash[22983]: audit 2026-03-09T16:06:48.902417+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:49 vm09 bash[22983]: audit 2026-03-09T16:06:48.902417+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:50 vm09 bash[22983]: audit 2026-03-09T16:06:49.896201+0000 mon.a (mon.0) 3204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:51.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:50 vm09 bash[22983]: audit 2026-03-09T16:06:49.896201+0000 mon.a (mon.0) 3204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:51.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:50 vm09 bash[22983]: audit 2026-03-09T16:06:49.898791+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:50 vm09 bash[22983]: audit 2026-03-09T16:06:49.898791+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:50 vm09 bash[22983]: cluster 2026-03-09T16:06:49.899518+0000 mon.a (mon.0) 3205 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T16:06:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:50 vm09 bash[22983]: cluster 2026-03-09T16:06:49.899518+0000 mon.a (mon.0) 3205 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T16:06:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:50 vm09 bash[22983]: audit 2026-03-09T16:06:49.915638+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:50 vm09 bash[22983]: audit 2026-03-09T16:06:49.915638+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:50 vm01 bash[28152]: audit 2026-03-09T16:06:49.896201+0000 mon.a (mon.0) 3204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:50 vm01 bash[28152]: audit 2026-03-09T16:06:49.896201+0000 mon.a (mon.0) 3204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:50 vm01 bash[28152]: audit 2026-03-09T16:06:49.898791+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:50 vm01 bash[28152]: audit 2026-03-09T16:06:49.898791+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:50 vm01 bash[28152]: cluster 2026-03-09T16:06:49.899518+0000 mon.a (mon.0) 3205 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:50 vm01 bash[28152]: cluster 2026-03-09T16:06:49.899518+0000 mon.a (mon.0) 3205 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:50 vm01 bash[28152]: audit 2026-03-09T16:06:49.915638+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:50 vm01 bash[28152]: audit 2026-03-09T16:06:49.915638+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:50 vm01 bash[20728]: audit 2026-03-09T16:06:49.896201+0000 mon.a (mon.0) 3204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:50 vm01 bash[20728]: audit 2026-03-09T16:06:49.896201+0000 mon.a (mon.0) 3204 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:50 vm01 bash[20728]: audit 2026-03-09T16:06:49.898791+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 
192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:50 vm01 bash[20728]: audit 2026-03-09T16:06:49.898791+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.101:0/4261461189' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:50 vm01 bash[20728]: cluster 2026-03-09T16:06:49.899518+0000 mon.a (mon.0) 3205 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:50 vm01 bash[20728]: cluster 2026-03-09T16:06:49.899518+0000 mon.a (mon.0) 3205 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:50 vm01 bash[20728]: audit 2026-03-09T16:06:49.915638+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:51.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:50 vm01 bash[20728]: audit 2026-03-09T16:06:49.915638+0000 mon.a (mon.0) 3206 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]: dispatch 2026-03-09T16:06:52.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: cluster 2026-03-09T16:06:50.824366+0000 mgr.y (mgr.14520) 505 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: cluster 2026-03-09T16:06:50.824366+0000 mgr.y (mgr.14520) 505 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: cluster 2026-03-09T16:06:50.905156+0000 mon.a (mon.0) 3207 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: cluster 2026-03-09T16:06:50.905156+0000 mon.a (mon.0) 3207 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:50.983014+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:50.983014+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: cluster 2026-03-09T16:06:50.990422+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: cluster 2026-03-09T16:06:50.990422+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.020286+0000 mon.c (mon.2) 528 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.020286+0000 mon.c (mon.2) 528 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.020732+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.020732+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.021941+0000 mon.c (mon.2) 529 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.021941+0000 mon.c (mon.2) 529 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.022194+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.022194+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.022889+0000 mon.c (mon.2) 530 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.022889+0000 mon.c (mon.2) 530 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.023165+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:52.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:51 vm09 bash[22983]: audit 2026-03-09T16:06:51.023165+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:52.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: cluster 2026-03-09T16:06:50.824366+0000 mgr.y (mgr.14520) 505 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:52.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: cluster 2026-03-09T16:06:50.824366+0000 mgr.y (mgr.14520) 505 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:52.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: cluster 2026-03-09T16:06:50.905156+0000 mon.a (mon.0) 3207 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:52.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: cluster 2026-03-09T16:06:50.905156+0000 mon.a (mon.0) 3207 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:52.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:50.983014+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:52.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:50.983014+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:52.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: cluster 2026-03-09T16:06:50.990422+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: cluster 2026-03-09T16:06:50.990422+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.020286+0000 mon.c (mon.2) 528 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.020286+0000 mon.c (mon.2) 528 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.020732+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.020732+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.021941+0000 mon.c (mon.2) 529 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.021941+0000 mon.c (mon.2) 529 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.022194+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.022194+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.022889+0000 mon.c (mon.2) 530 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.022889+0000 mon.c (mon.2) 530 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.023165+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:51 vm01 bash[20728]: audit 2026-03-09T16:06:51.023165+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:51 vm01 bash[28152]: cluster 2026-03-09T16:06:50.824366+0000 mgr.y (mgr.14520) 505 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:51 vm01 bash[28152]: cluster 2026-03-09T16:06:50.824366+0000 mgr.y (mgr.14520) 505 : cluster [DBG] pgmap v853: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:51 vm01 bash[28152]: cluster 2026-03-09T16:06:50.905156+0000 mon.a (mon.0) 3207 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:51 vm01 bash[28152]: cluster 2026-03-09T16:06:50.905156+0000 mon.a (mon.0) 3207 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:51 vm01 bash[28152]: audit 2026-03-09T16:06:50.983014+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:51 vm01 bash[28152]: audit 2026-03-09T16:06:50.983014+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm01-59821-104"}]': finished 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:51 vm01 bash[28152]: cluster 2026-03-09T16:06:50.990422+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:51 vm01 bash[28152]: cluster 2026-03-09T16:06:50.990422+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.020286+0000 mon.c (mon.2) 528 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.020286+0000 mon.c (mon.2) 528 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.020732+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.020732+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.021941+0000 mon.c (mon.2) 529 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.021941+0000 mon.c (mon.2) 529 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.022194+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.022194+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.022889+0000 mon.c (mon.2) 530 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.022889+0000 mon.c (mon.2) 530 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.023165+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:52.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:52 vm01 bash[28152]: audit 2026-03-09T16:06:51.023165+0000 mon.a (mon.0) 3212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T16:06:53.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:06:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:06:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:06:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:53 vm01 bash[28152]: audit 2026-03-09T16:06:52.018425+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:53 vm01 bash[28152]: audit 2026-03-09T16:06:52.018425+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:53 vm01 bash[28152]: cluster 2026-03-09T16:06:52.024499+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T16:06:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:53 vm01 bash[28152]: cluster 2026-03-09T16:06:52.024499+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T16:06:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:53 vm01 bash[28152]: audit 2026-03-09T16:06:52.027900+0000 mon.c (mon.2) 531 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:53 vm01 bash[28152]: audit 2026-03-09T16:06:52.027900+0000 mon.c (mon.2) 531 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:53 vm01 bash[28152]: audit 2026-03-09T16:06:52.028278+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:53 vm01 bash[28152]: audit 2026-03-09T16:06:52.028278+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:53.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:53 vm01 bash[20728]: audit 2026-03-09T16:06:52.018425+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:53.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:53 vm01 bash[20728]: audit 2026-03-09T16:06:52.018425+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:53 vm01 bash[20728]: cluster 2026-03-09T16:06:52.024499+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T16:06:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:53 vm01 bash[20728]: cluster 2026-03-09T16:06:52.024499+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T16:06:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:53 vm01 bash[20728]: audit 2026-03-09T16:06:52.027900+0000 mon.c (mon.2) 531 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:53 vm01 bash[20728]: audit 2026-03-09T16:06:52.027900+0000 mon.c (mon.2) 531 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:53 vm01 bash[20728]: audit 2026-03-09T16:06:52.028278+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:53.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:53 vm01 bash[20728]: audit 2026-03-09T16:06:52.028278+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:53.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:53 vm09 bash[22983]: audit 2026-03-09T16:06:52.018425+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:53.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:53 vm09 bash[22983]: audit 2026-03-09T16:06:52.018425+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm01-59821-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T16:06:53.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:53 vm09 bash[22983]: cluster 2026-03-09T16:06:52.024499+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T16:06:53.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:53 vm09 bash[22983]: cluster 2026-03-09T16:06:52.024499+0000 mon.a (mon.0) 3214 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-09T16:06:53.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:53 vm09 bash[22983]: audit 2026-03-09T16:06:52.027900+0000 mon.c (mon.2) 531 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:53 vm09 bash[22983]: audit 2026-03-09T16:06:52.027900+0000 mon.c (mon.2) 531 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:53 vm09 bash[22983]: audit 2026-03-09T16:06:52.028278+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:53.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:53 vm09 bash[22983]: audit 2026-03-09T16:06:52.028278+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:54 vm09 bash[22983]: cluster 2026-03-09T16:06:52.824644+0000 mgr.y (mgr.14520) 506 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:54 vm09 bash[22983]: cluster 2026-03-09T16:06:52.824644+0000 mgr.y (mgr.14520) 506 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:54 vm09 bash[22983]: cluster 2026-03-09T16:06:53.043454+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T16:06:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:54 vm09 bash[22983]: cluster 2026-03-09T16:06:53.043454+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T16:06:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:54 vm01 bash[20728]: cluster 2026-03-09T16:06:52.824644+0000 mgr.y (mgr.14520) 506 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:54 vm01 bash[20728]: cluster 2026-03-09T16:06:52.824644+0000 mgr.y (mgr.14520) 506 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:54 vm01 bash[20728]: cluster 2026-03-09T16:06:53.043454+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T16:06:54.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:54 vm01 bash[20728]: cluster 2026-03-09T16:06:53.043454+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T16:06:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:54 vm01 bash[28152]: cluster 2026-03-09T16:06:52.824644+0000 mgr.y (mgr.14520) 506 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:54 vm01 bash[28152]: cluster 2026-03-09T16:06:52.824644+0000 mgr.y (mgr.14520) 506 : cluster [DBG] pgmap v856: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:06:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:54 vm01 bash[28152]: cluster 2026-03-09T16:06:53.043454+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T16:06:54.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:54 vm01 bash[28152]: cluster 2026-03-09T16:06:53.043454+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-09T16:06:55.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:55 vm09 bash[22983]: audit 2026-03-09T16:06:54.037258+0000 mon.a (mon.0) 3217 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:06:55.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:55 vm09 bash[22983]: audit 2026-03-09T16:06:54.037258+0000 mon.a (mon.0) 3217 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:06:55.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:55 vm09 bash[22983]: cluster 2026-03-09T16:06:54.050596+0000 mon.a (mon.0) 3218 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T16:06:55.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:55 vm09 bash[22983]: cluster 2026-03-09T16:06:54.050596+0000 mon.a (mon.0) 3218 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T16:06:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:55 vm01 bash[20728]: audit 2026-03-09T16:06:54.037258+0000 mon.a (mon.0) 3217 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:06:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:55 vm01 bash[20728]: audit 2026-03-09T16:06:54.037258+0000 mon.a (mon.0) 3217 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:06:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:55 vm01 bash[20728]: cluster 2026-03-09T16:06:54.050596+0000 mon.a (mon.0) 3218 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T16:06:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:55 vm01 bash[20728]: cluster 2026-03-09T16:06:54.050596+0000 mon.a (mon.0) 3218 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T16:06:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:55 vm01 bash[28152]: audit 2026-03-09T16:06:54.037258+0000 mon.a (mon.0) 3217 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:06:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:55 vm01 bash[28152]: audit 2026-03-09T16:06:54.037258+0000 mon.a (mon.0) 3217 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm01-59821-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:06:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:55 vm01 bash[28152]: cluster 2026-03-09T16:06:54.050596+0000 mon.a (mon.0) 3218 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T16:06:55.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:55 vm01 bash[28152]: cluster 2026-03-09T16:06:54.050596+0000 mon.a (mon.0) 3218 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-09T16:06:56.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:56 vm01 bash[28152]: cluster 2026-03-09T16:06:54.825165+0000 mgr.y (mgr.14520) 507 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:56.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:56 vm01 bash[28152]: cluster 2026-03-09T16:06:54.825165+0000 mgr.y (mgr.14520) 507 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:56.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:56 vm01 bash[28152]: cluster 2026-03-09T16:06:55.067716+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T16:06:56.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:56 vm01 bash[28152]: cluster 2026-03-09T16:06:55.067716+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T16:06:56.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:56 vm01 bash[28152]: audit 2026-03-09T16:06:55.073670+0000 mon.c (mon.2) 532 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:56.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:56 vm01 bash[28152]: audit 2026-03-09T16:06:55.073670+0000 mon.c (mon.2) 532 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:56.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:56 vm01 bash[28152]: audit 2026-03-09T16:06:55.073930+0000 mon.a (mon.0) 3220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:56.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:56 vm01 bash[28152]: audit 2026-03-09T16:06:55.073930+0000 mon.a (mon.0) 3220 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:56.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:56 vm01 bash[20728]: cluster 2026-03-09T16:06:54.825165+0000 mgr.y (mgr.14520) 507 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:56 vm01 bash[20728]: cluster 2026-03-09T16:06:54.825165+0000 mgr.y (mgr.14520) 507 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:56 vm01 bash[20728]: cluster 2026-03-09T16:06:55.067716+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T16:06:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:56 vm01 bash[20728]: cluster 2026-03-09T16:06:55.067716+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T16:06:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:56 vm01 bash[20728]: audit 2026-03-09T16:06:55.073670+0000 mon.c (mon.2) 532 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:56 vm01 bash[20728]: audit 2026-03-09T16:06:55.073670+0000 mon.c (mon.2) 532 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:56 vm01 bash[20728]: audit 2026-03-09T16:06:55.073930+0000 mon.a (mon.0) 3220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:56.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:56 vm01 bash[20728]: audit 2026-03-09T16:06:55.073930+0000 mon.a (mon.0) 3220 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:56 vm09 bash[22983]: cluster 2026-03-09T16:06:54.825165+0000 mgr.y (mgr.14520) 507 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:56 vm09 bash[22983]: cluster 2026-03-09T16:06:54.825165+0000 mgr.y (mgr.14520) 507 : cluster [DBG] pgmap v859: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:56 vm09 bash[22983]: cluster 2026-03-09T16:06:55.067716+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T16:06:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:56 vm09 bash[22983]: cluster 2026-03-09T16:06:55.067716+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-09T16:06:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:56 vm09 bash[22983]: audit 2026-03-09T16:06:55.073670+0000 mon.c (mon.2) 532 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:56 vm09 bash[22983]: audit 2026-03-09T16:06:55.073670+0000 mon.c (mon.2) 532 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:56 vm09 bash[22983]: audit 2026-03-09T16:06:55.073930+0000 mon.a (mon.0) 3220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:56.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:56 vm09 bash[22983]: audit 2026-03-09T16:06:55.073930+0000 mon.a (mon.0) 3220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:06:57.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:06:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:06:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:56.111681+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:56.111681+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: cluster 2026-03-09T16:06:56.121978+0000 mon.a (mon.0) 3222 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T16:06:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: cluster 2026-03-09T16:06:56.121978+0000 mon.a (mon.0) 3222 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T16:06:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:56.160759+0000 mon.c (mon.2) 533 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:56.160759+0000 mon.c (mon.2) 533 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:56.161005+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:56.161005+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:56.715052+0000 mgr.y (mgr.14520) 508 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:56.715052+0000 mgr.y (mgr.14520) 508 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: cluster 2026-03-09T16:06:56.825484+0000 mgr.y (mgr.14520) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: cluster 2026-03-09T16:06:56.825484+0000 mgr.y (mgr.14520) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:57.115160+0000 mon.a (mon.0) 3224 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:57.115160+0000 mon.a (mon.0) 3224 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: cluster 2026-03-09T16:06:57.117822+0000 mon.a (mon.0) 3225 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: cluster 2026-03-09T16:06:57.117822+0000 mon.a (mon.0) 3225 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:57.119442+0000 mon.c (mon.2) 534 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:57.119442+0000 mon.c (mon.2) 534 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:57.122365+0000 mon.a (mon.0) 3226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:57 vm01 bash[28152]: audit 2026-03-09T16:06:57.122365+0000 mon.a (mon.0) 3226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:56.111681+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:56.111681+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: cluster 2026-03-09T16:06:56.121978+0000 mon.a (mon.0) 3222 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: cluster 2026-03-09T16:06:56.121978+0000 mon.a (mon.0) 3222 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:56.160759+0000 mon.c (mon.2) 533 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:56.160759+0000 mon.c (mon.2) 533 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:56.161005+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:56.161005+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:56.715052+0000 mgr.y (mgr.14520) 508 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:56.715052+0000 mgr.y (mgr.14520) 508 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: cluster 2026-03-09T16:06:56.825484+0000 mgr.y (mgr.14520) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: cluster 2026-03-09T16:06:56.825484+0000 mgr.y (mgr.14520) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:57.115160+0000 mon.a (mon.0) 3224 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:57.115160+0000 mon.a (mon.0) 3224 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: cluster 2026-03-09T16:06:57.117822+0000 mon.a (mon.0) 3225 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: cluster 2026-03-09T16:06:57.117822+0000 mon.a (mon.0) 3225 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:57.119442+0000 mon.c (mon.2) 534 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:57.119442+0000 mon.c (mon.2) 534 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:57.122365+0000 mon.a (mon.0) 3226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:57 vm01 bash[20728]: audit 2026-03-09T16:06:57.122365+0000 mon.a (mon.0) 3226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:57.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:56.111681+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:57.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:56.111681+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:06:57.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: cluster 2026-03-09T16:06:56.121978+0000 mon.a (mon.0) 3222 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: cluster 2026-03-09T16:06:56.121978+0000 mon.a (mon.0) 3222 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:56.160759+0000 mon.c (mon.2) 533 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:56.160759+0000 mon.c (mon.2) 533 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:56.161005+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:56.161005+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:56.715052+0000 mgr.y (mgr.14520) 508 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:56.715052+0000 mgr.y (mgr.14520) 508 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: cluster 2026-03-09T16:06:56.825484+0000 mgr.y (mgr.14520) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: cluster 2026-03-09T16:06:56.825484+0000 mgr.y (mgr.14520) 509 : cluster [DBG] pgmap v862: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:57.115160+0000 mon.a (mon.0) 3224 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:57.115160+0000 mon.a (mon.0) 3224 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: cluster 2026-03-09T16:06:57.117822+0000 mon.a (mon.0) 3225 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: cluster 2026-03-09T16:06:57.117822+0000 mon.a (mon.0) 3225 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:57.119442+0000 mon.c (mon.2) 534 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:57.119442+0000 mon.c (mon.2) 534 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:57.122365+0000 mon.a (mon.0) 3226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:57.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:57 vm09 bash[22983]: audit 2026-03-09T16:06:57.122365+0000 mon.a (mon.0) 3226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:06:59.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: audit 2026-03-09T16:06:58.118367+0000 mon.a (mon.0) 3227 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:06:59.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: audit 2026-03-09T16:06:58.118367+0000 mon.a (mon.0) 3227 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:06:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: cluster 2026-03-09T16:06:58.122175+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T16:06:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: cluster 2026-03-09T16:06:58.122175+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T16:06:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: audit 2026-03-09T16:06:58.154499+0000 mon.c (mon.2) 535 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: audit 2026-03-09T16:06:58.154499+0000 mon.c (mon.2) 535 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: audit 2026-03-09T16:06:58.154888+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: audit 2026-03-09T16:06:58.154888+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: cluster 2026-03-09T16:06:58.826280+0000 mgr.y (mgr.14520) 510 : cluster [DBG] pgmap v865: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:06:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: cluster 2026-03-09T16:06:58.826280+0000 mgr.y (mgr.14520) 510 : cluster [DBG] pgmap v865: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:06:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: cluster 2026-03-09T16:06:58.981360+0000 mon.a (mon.0) 3230 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:06:59 vm09 bash[22983]: cluster 2026-03-09T16:06:58.981360+0000 mon.a (mon.0) 3230 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: audit 2026-03-09T16:06:58.118367+0000 mon.a (mon.0) 3227 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: audit 2026-03-09T16:06:58.118367+0000 mon.a (mon.0) 3227 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: cluster 2026-03-09T16:06:58.122175+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: cluster 2026-03-09T16:06:58.122175+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: audit 2026-03-09T16:06:58.154499+0000 mon.c (mon.2) 535 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: audit 2026-03-09T16:06:58.154499+0000 mon.c (mon.2) 535 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: audit 2026-03-09T16:06:58.154888+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: audit 2026-03-09T16:06:58.154888+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: cluster 2026-03-09T16:06:58.826280+0000 mgr.y (mgr.14520) 510 : cluster [DBG] pgmap v865: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: cluster 2026-03-09T16:06:58.826280+0000 mgr.y (mgr.14520) 510 : cluster [DBG] pgmap v865: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: cluster 2026-03-09T16:06:58.981360+0000 mon.a (mon.0) 3230 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:06:59 vm01 bash[20728]: cluster 2026-03-09T16:06:58.981360+0000 mon.a (mon.0) 3230 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: audit 2026-03-09T16:06:58.118367+0000 mon.a (mon.0) 3227 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:06:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: audit 2026-03-09T16:06:58.118367+0000 mon.a (mon.0) 3227 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:06:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: cluster 2026-03-09T16:06:58.122175+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T16:06:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: cluster 2026-03-09T16:06:58.122175+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in 2026-03-09T16:06:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: audit 2026-03-09T16:06:58.154499+0000 mon.c (mon.2) 535 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: audit 2026-03-09T16:06:58.154499+0000 mon.c (mon.2) 535 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: audit 2026-03-09T16:06:58.154888+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: audit 2026-03-09T16:06:58.154888+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:06:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: cluster 2026-03-09T16:06:58.826280+0000 mgr.y (mgr.14520) 510 : cluster [DBG] pgmap v865: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:06:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: cluster 2026-03-09T16:06:58.826280+0000 mgr.y (mgr.14520) 510 : cluster [DBG] pgmap v865: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:06:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: cluster 2026-03-09T16:06:58.981360+0000 mon.a (mon.0) 3230 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:06:59.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:06:59 vm01 bash[28152]: cluster 2026-03-09T16:06:58.981360+0000 mon.a (mon.0) 3230 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:00 vm01 bash[28152]: audit 2026-03-09T16:06:59.140530+0000 mon.a (mon.0) 3231 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:00 vm01 bash[28152]: audit 2026-03-09T16:06:59.140530+0000 mon.a (mon.0) 3231 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:00 vm01 bash[28152]: audit 2026-03-09T16:06:59.143810+0000 mon.c (mon.2) 536 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:00 vm01 bash[28152]: audit 2026-03-09T16:06:59.143810+0000 mon.c (mon.2) 536 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:00 vm01 bash[28152]: cluster 2026-03-09T16:06:59.146218+0000 mon.a (mon.0) 3232 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T16:07:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:00 vm01 bash[28152]: cluster 2026-03-09T16:06:59.146218+0000 mon.a (mon.0) 3232 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T16:07:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:00 vm01 bash[28152]: audit 2026-03-09T16:06:59.148664+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:00 vm01 bash[28152]: audit 2026-03-09T16:06:59.148664+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:00 vm01 bash[28152]: audit 2026-03-09T16:06:59.388237+0000 mon.a (mon.0) 3234 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:00 vm01 bash[28152]: audit 2026-03-09T16:06:59.388237+0000 mon.a (mon.0) 3234 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:00 vm01 bash[20728]: audit 2026-03-09T16:06:59.140530+0000 mon.a (mon.0) 3231 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:00 vm01 bash[20728]: audit 2026-03-09T16:06:59.140530+0000 mon.a (mon.0) 3231 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:00 vm01 bash[20728]: audit 2026-03-09T16:06:59.143810+0000 mon.c (mon.2) 536 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:00 vm01 bash[20728]: audit 2026-03-09T16:06:59.143810+0000 mon.c (mon.2) 536 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:00 vm01 bash[20728]: cluster 2026-03-09T16:06:59.146218+0000 mon.a (mon.0) 3232 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T16:07:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:00 vm01 bash[20728]: cluster 2026-03-09T16:06:59.146218+0000 mon.a (mon.0) 3232 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T16:07:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:00 vm01 bash[20728]: audit 2026-03-09T16:06:59.148664+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:00 vm01 bash[20728]: audit 2026-03-09T16:06:59.148664+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:00 vm01 bash[20728]: audit 2026-03-09T16:06:59.388237+0000 mon.a (mon.0) 3234 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:00.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:00 vm01 bash[20728]: audit 2026-03-09T16:06:59.388237+0000 mon.a (mon.0) 3234 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:00 vm09 bash[22983]: audit 2026-03-09T16:06:59.140530+0000 mon.a (mon.0) 3231 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:00 vm09 bash[22983]: audit 2026-03-09T16:06:59.140530+0000 mon.a (mon.0) 3231 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:00 vm09 bash[22983]: audit 2026-03-09T16:06:59.143810+0000 mon.c (mon.2) 536 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:00 vm09 bash[22983]: audit 2026-03-09T16:06:59.143810+0000 mon.c (mon.2) 536 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:00 vm09 bash[22983]: cluster 2026-03-09T16:06:59.146218+0000 mon.a (mon.0) 3232 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T16:07:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:00 vm09 bash[22983]: cluster 2026-03-09T16:06:59.146218+0000 mon.a (mon.0) 3232 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in 2026-03-09T16:07:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:00 vm09 bash[22983]: audit 2026-03-09T16:06:59.148664+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:00 vm09 bash[22983]: audit 2026-03-09T16:06:59.148664+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]: dispatch 2026-03-09T16:07:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:00 vm09 bash[22983]: audit 2026-03-09T16:06:59.388237+0000 mon.a (mon.0) 3234 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:00 vm09 bash[22983]: audit 2026-03-09T16:06:59.388237+0000 mon.a (mon.0) 3234 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:01.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:01 vm01 bash[28152]: audit 2026-03-09T16:07:00.145417+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:07:01.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:01 vm01 bash[28152]: audit 2026-03-09T16:07:00.145417+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:07:01.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:01 vm01 bash[28152]: cluster 2026-03-09T16:07:00.148118+0000 mon.a (mon.0) 3236 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T16:07:01.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:01 vm01 bash[28152]: cluster 2026-03-09T16:07:00.148118+0000 mon.a (mon.0) 3236 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T16:07:01.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:01 vm01 bash[28152]: cluster 2026-03-09T16:07:00.826624+0000 mgr.y (mgr.14520) 511 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:07:01.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:01 vm01 bash[28152]: cluster 2026-03-09T16:07:00.826624+0000 mgr.y (mgr.14520) 511 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:07:01.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:01 vm01 bash[20728]: audit 2026-03-09T16:07:00.145417+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:07:01.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:01 vm01 bash[20728]: audit 2026-03-09T16:07:00.145417+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:07:01.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:01 vm01 bash[20728]: cluster 2026-03-09T16:07:00.148118+0000 mon.a (mon.0) 3236 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T16:07:01.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:01 vm01 bash[20728]: cluster 2026-03-09T16:07:00.148118+0000 mon.a (mon.0) 3236 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T16:07:01.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:01 vm01 bash[20728]: cluster 2026-03-09T16:07:00.826624+0000 mgr.y (mgr.14520) 511 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:07:01.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:01 vm01 bash[20728]: cluster 2026-03-09T16:07:00.826624+0000 mgr.y (mgr.14520) 511 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:07:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:01 vm09 bash[22983]: audit 2026-03-09T16:07:00.145417+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:07:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:01 vm09 bash[22983]: audit 2026-03-09T16:07:00.145417+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-112"}]': finished 2026-03-09T16:07:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:01 vm09 bash[22983]: cluster 2026-03-09T16:07:00.148118+0000 mon.a (mon.0) 3236 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T16:07:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:01 vm09 bash[22983]: cluster 2026-03-09T16:07:00.148118+0000 mon.a (mon.0) 3236 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in 2026-03-09T16:07:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:01 vm09 bash[22983]: cluster 2026-03-09T16:07:00.826624+0000 mgr.y (mgr.14520) 511 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:07:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:01 vm09 bash[22983]: cluster 2026-03-09T16:07:00.826624+0000 mgr.y (mgr.14520) 511 : cluster [DBG] pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:07:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:02 vm01 bash[28152]: cluster 2026-03-09T16:07:01.169812+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T16:07:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:02 vm01 bash[28152]: cluster 2026-03-09T16:07:01.169812+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T16:07:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:02 vm01 bash[20728]: cluster 2026-03-09T16:07:01.169812+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T16:07:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:02 vm01 bash[20728]: cluster 2026-03-09T16:07:01.169812+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T16:07:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:02 vm09 bash[22983]: cluster 2026-03-09T16:07:01.169812+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T16:07:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:02 vm09 bash[22983]: cluster 2026-03-09T16:07:01.169812+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in 2026-03-09T16:07:03.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:07:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:07:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:07:03.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:03 vm09 bash[22983]: cluster 2026-03-09T16:07:02.186362+0000 mon.a (mon.0) 3238 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T16:07:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:03 vm09 bash[22983]: cluster 2026-03-09T16:07:02.186362+0000 mon.a (mon.0) 3238 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T16:07:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:03 vm09 bash[22983]: audit 2026-03-09T16:07:02.198729+0000 mon.c (mon.2) 537 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:03 vm09 bash[22983]: audit 2026-03-09T16:07:02.198729+0000 mon.c (mon.2) 537 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:03 vm09 bash[22983]: audit 2026-03-09T16:07:02.199478+0000 mon.a (mon.0) 3239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:03 vm09 bash[22983]: audit 2026-03-09T16:07:02.199478+0000 mon.a (mon.0) 3239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:03 vm09 bash[22983]: cluster 2026-03-09T16:07:02.827053+0000 mgr.y (mgr.14520) 512 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:03 vm09 bash[22983]: cluster 2026-03-09T16:07:02.827053+0000 mgr.y (mgr.14520) 512 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:03 vm01 bash[20728]: cluster 2026-03-09T16:07:02.186362+0000 mon.a (mon.0) 3238 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T16:07:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:03 vm01 bash[20728]: cluster 2026-03-09T16:07:02.186362+0000 mon.a (mon.0) 3238 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T16:07:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:03 vm01 bash[20728]: audit 2026-03-09T16:07:02.198729+0000 mon.c (mon.2) 537 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:03 vm01 bash[20728]: audit 2026-03-09T16:07:02.198729+0000 mon.c (mon.2) 537 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:03 vm01 bash[20728]: audit 2026-03-09T16:07:02.199478+0000 mon.a (mon.0) 3239 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:03 vm01 bash[20728]: audit 2026-03-09T16:07:02.199478+0000 mon.a (mon.0) 3239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:03 vm01 bash[20728]: cluster 2026-03-09T16:07:02.827053+0000 mgr.y (mgr.14520) 512 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:03.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:03 vm01 bash[20728]: cluster 2026-03-09T16:07:02.827053+0000 mgr.y (mgr.14520) 512 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:03 vm01 bash[28152]: cluster 2026-03-09T16:07:02.186362+0000 mon.a (mon.0) 3238 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T16:07:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:03 vm01 bash[28152]: cluster 2026-03-09T16:07:02.186362+0000 mon.a (mon.0) 3238 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in 2026-03-09T16:07:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:03 vm01 bash[28152]: audit 2026-03-09T16:07:02.198729+0000 mon.c (mon.2) 537 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:03 vm01 bash[28152]: audit 2026-03-09T16:07:02.198729+0000 mon.c (mon.2) 537 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:03 vm01 bash[28152]: audit 2026-03-09T16:07:02.199478+0000 mon.a (mon.0) 3239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:03 vm01 bash[28152]: audit 2026-03-09T16:07:02.199478+0000 mon.a (mon.0) 3239 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:03 vm01 bash[28152]: cluster 2026-03-09T16:07:02.827053+0000 mgr.y (mgr.14520) 512 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:03.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:03 vm01 bash[28152]: cluster 2026-03-09T16:07:02.827053+0000 mgr.y (mgr.14520) 512 : cluster [DBG] pgmap v871: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:04 vm09 bash[22983]: audit 2026-03-09T16:07:03.189660+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:04 vm09 bash[22983]: audit 2026-03-09T16:07:03.189660+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:04 vm09 bash[22983]: cluster 2026-03-09T16:07:03.205002+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T16:07:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:04 vm09 bash[22983]: cluster 2026-03-09T16:07:03.205002+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T16:07:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:04 vm09 bash[22983]: audit 2026-03-09T16:07:03.236297+0000 mon.c (mon.2) 538 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:04 vm09 bash[22983]: audit 2026-03-09T16:07:03.236297+0000 mon.c (mon.2) 538 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:04 vm09 bash[22983]: audit 2026-03-09T16:07:03.237248+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:04 vm09 bash[22983]: audit 2026-03-09T16:07:03.237248+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:04 vm09 bash[22983]: cluster 2026-03-09T16:07:03.982208+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:04 vm09 bash[22983]: cluster 2026-03-09T16:07:03.982208+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:04 vm01 bash[28152]: audit 2026-03-09T16:07:03.189660+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:04 vm01 bash[28152]: audit 2026-03-09T16:07:03.189660+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:04 vm01 bash[28152]: cluster 2026-03-09T16:07:03.205002+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T16:07:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:04 vm01 bash[28152]: cluster 2026-03-09T16:07:03.205002+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T16:07:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:04 vm01 bash[28152]: audit 2026-03-09T16:07:03.236297+0000 mon.c (mon.2) 538 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:04 vm01 bash[28152]: audit 2026-03-09T16:07:03.236297+0000 mon.c (mon.2) 538 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:04 vm01 bash[28152]: audit 2026-03-09T16:07:03.237248+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:04 vm01 bash[28152]: audit 2026-03-09T16:07:03.237248+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:04 vm01 bash[28152]: cluster 2026-03-09T16:07:03.982208+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:04 vm01 bash[28152]: cluster 2026-03-09T16:07:03.982208+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:04 vm01 bash[20728]: audit 2026-03-09T16:07:03.189660+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:04 vm01 bash[20728]: audit 2026-03-09T16:07:03.189660+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:04 vm01 bash[20728]: cluster 2026-03-09T16:07:03.205002+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T16:07:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:04 vm01 bash[20728]: cluster 2026-03-09T16:07:03.205002+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-09T16:07:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:04 vm01 bash[20728]: audit 2026-03-09T16:07:03.236297+0000 mon.c (mon.2) 538 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:04 vm01 bash[20728]: audit 2026-03-09T16:07:03.236297+0000 mon.c (mon.2) 538 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:04 vm01 bash[20728]: audit 2026-03-09T16:07:03.237248+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:04 vm01 bash[20728]: audit 2026-03-09T16:07:03.237248+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:04 vm01 bash[20728]: cluster 2026-03-09T16:07:03.982208+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:04 vm01 bash[20728]: cluster 2026-03-09T16:07:03.982208+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: audit 2026-03-09T16:07:04.193089+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: audit 2026-03-09T16:07:04.193089+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: audit 2026-03-09T16:07:04.200404+0000 mon.c (mon.2) 539 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: audit 2026-03-09T16:07:04.200404+0000 mon.c (mon.2) 539 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: cluster 2026-03-09T16:07:04.201290+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: cluster 2026-03-09T16:07:04.201290+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: audit 2026-03-09T16:07:04.204132+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: audit 2026-03-09T16:07:04.204132+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: cluster 2026-03-09T16:07:04.827693+0000 mgr.y (mgr.14520) 513 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: cluster 2026-03-09T16:07:04.827693+0000 mgr.y (mgr.14520) 513 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: audit 2026-03-09T16:07:05.197169+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: audit 2026-03-09T16:07:05.197169+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: cluster 2026-03-09T16:07:05.201103+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T16:07:05.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:05 vm09 bash[22983]: cluster 2026-03-09T16:07:05.201103+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T16:07:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: audit 2026-03-09T16:07:04.193089+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: audit 2026-03-09T16:07:04.193089+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: audit 2026-03-09T16:07:04.200404+0000 mon.c (mon.2) 539 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: audit 2026-03-09T16:07:04.200404+0000 mon.c (mon.2) 539 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: cluster 2026-03-09T16:07:04.201290+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T16:07:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: cluster 2026-03-09T16:07:04.201290+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: audit 2026-03-09T16:07:04.204132+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: audit 2026-03-09T16:07:04.204132+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: cluster 2026-03-09T16:07:04.827693+0000 mgr.y (mgr.14520) 513 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: cluster 2026-03-09T16:07:04.827693+0000 mgr.y (mgr.14520) 513 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: audit 2026-03-09T16:07:05.197169+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: audit 2026-03-09T16:07:05.197169+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: cluster 2026-03-09T16:07:05.201103+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:05 vm01 bash[28152]: cluster 2026-03-09T16:07:05.201103+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: audit 2026-03-09T16:07:04.193089+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: audit 2026-03-09T16:07:04.193089+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: audit 2026-03-09T16:07:04.200404+0000 mon.c (mon.2) 539 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: audit 2026-03-09T16:07:04.200404+0000 mon.c (mon.2) 539 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: cluster 2026-03-09T16:07:04.201290+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: cluster 2026-03-09T16:07:04.201290+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: audit 2026-03-09T16:07:04.204132+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: audit 2026-03-09T16:07:04.204132+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: cluster 2026-03-09T16:07:04.827693+0000 mgr.y (mgr.14520) 513 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: cluster 2026-03-09T16:07:04.827693+0000 mgr.y (mgr.14520) 513 : cluster [DBG] pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: audit 2026-03-09T16:07:05.197169+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: audit 2026-03-09T16:07:05.197169+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: cluster 2026-03-09T16:07:05.201103+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T16:07:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:05 vm01 bash[20728]: cluster 2026-03-09T16:07:05.201103+0000 mon.a (mon.0) 3248 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-09T16:07:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:06 vm09 bash[22983]: audit 2026-03-09T16:07:05.204355+0000 mon.c (mon.2) 540 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:06 vm09 bash[22983]: audit 2026-03-09T16:07:05.204355+0000 mon.c (mon.2) 540 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:06 vm09 bash[22983]: audit 2026-03-09T16:07:05.209867+0000 mon.a (mon.0) 3249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:06 vm09 bash[22983]: audit 2026-03-09T16:07:05.209867+0000 mon.a (mon.0) 3249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:06 vm09 bash[22983]: cluster 2026-03-09T16:07:06.197492+0000 mon.a (mon.0) 3250 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:06 vm09 bash[22983]: cluster 2026-03-09T16:07:06.197492+0000 mon.a (mon.0) 3250 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:06 vm09 bash[22983]: audit 2026-03-09T16:07:06.201541+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]': finished 2026-03-09T16:07:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:06 vm09 bash[22983]: audit 2026-03-09T16:07:06.201541+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]': finished 2026-03-09T16:07:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:06 vm09 bash[22983]: cluster 2026-03-09T16:07:06.208934+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T16:07:06.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:06 vm09 bash[22983]: cluster 2026-03-09T16:07:06.208934+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T16:07:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:06 vm01 bash[28152]: audit 2026-03-09T16:07:05.204355+0000 mon.c (mon.2) 540 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:06 vm01 bash[28152]: audit 2026-03-09T16:07:05.204355+0000 mon.c (mon.2) 540 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:06 vm01 bash[28152]: audit 2026-03-09T16:07:05.209867+0000 mon.a (mon.0) 3249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:06 vm01 bash[28152]: audit 2026-03-09T16:07:05.209867+0000 mon.a (mon.0) 3249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:06 vm01 bash[28152]: cluster 2026-03-09T16:07:06.197492+0000 mon.a (mon.0) 3250 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:06 vm01 bash[28152]: cluster 2026-03-09T16:07:06.197492+0000 mon.a (mon.0) 3250 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:06 vm01 bash[28152]: audit 2026-03-09T16:07:06.201541+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]': finished 2026-03-09T16:07:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:06 vm01 bash[28152]: audit 2026-03-09T16:07:06.201541+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]': finished 2026-03-09T16:07:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:06 vm01 bash[28152]: cluster 2026-03-09T16:07:06.208934+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T16:07:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:06 vm01 bash[28152]: cluster 2026-03-09T16:07:06.208934+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T16:07:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:06 vm01 bash[20728]: audit 2026-03-09T16:07:05.204355+0000 mon.c (mon.2) 540 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:06 vm01 bash[20728]: audit 2026-03-09T16:07:05.204355+0000 mon.c (mon.2) 540 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:06 vm01 bash[20728]: audit 2026-03-09T16:07:05.209867+0000 mon.a (mon.0) 3249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:06 vm01 bash[20728]: audit 2026-03-09T16:07:05.209867+0000 mon.a (mon.0) 3249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]: dispatch 2026-03-09T16:07:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:06 vm01 bash[20728]: cluster 2026-03-09T16:07:06.197492+0000 mon.a (mon.0) 3250 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:06 vm01 bash[20728]: cluster 2026-03-09T16:07:06.197492+0000 mon.a (mon.0) 3250 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:06 vm01 bash[20728]: audit 2026-03-09T16:07:06.201541+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]': finished 2026-03-09T16:07:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:06 vm01 bash[20728]: audit 2026-03-09T16:07:06.201541+0000 mon.a (mon.0) 3251 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-114", "mode": "writeback"}]': finished 2026-03-09T16:07:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:06 vm01 bash[20728]: cluster 2026-03-09T16:07:06.208934+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T16:07:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:06 vm01 bash[20728]: cluster 2026-03-09T16:07:06.208934+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in 2026-03-09T16:07:07.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:07:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:07:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:07 vm09 bash[22983]: audit 2026-03-09T16:07:06.264243+0000 mon.c (mon.2) 541 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:07 vm09 bash[22983]: audit 2026-03-09T16:07:06.264243+0000 mon.c (mon.2) 541 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:07 vm09 bash[22983]: audit 2026-03-09T16:07:06.264695+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:07 vm09 bash[22983]: audit 2026-03-09T16:07:06.264695+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:07 vm09 bash[22983]: audit 2026-03-09T16:07:06.724037+0000 mgr.y (mgr.14520) 514 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:07 vm09 bash[22983]: audit 2026-03-09T16:07:06.724037+0000 mgr.y (mgr.14520) 514 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:07 vm09 bash[22983]: cluster 2026-03-09T16:07:06.828088+0000 mgr.y (mgr.14520) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:07 vm09 bash[22983]: cluster 2026-03-09T16:07:06.828088+0000 mgr.y (mgr.14520) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:07 vm01 bash[28152]: audit 2026-03-09T16:07:06.264243+0000 mon.c (mon.2) 541 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:07 vm01 bash[28152]: audit 2026-03-09T16:07:06.264243+0000 mon.c (mon.2) 541 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:07 vm01 bash[28152]: audit 2026-03-09T16:07:06.264695+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:07 vm01 bash[28152]: audit 2026-03-09T16:07:06.264695+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:07 vm01 bash[28152]: audit 2026-03-09T16:07:06.724037+0000 mgr.y (mgr.14520) 514 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:07 vm01 bash[28152]: audit 2026-03-09T16:07:06.724037+0000 mgr.y (mgr.14520) 514 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:07 vm01 bash[28152]: cluster 2026-03-09T16:07:06.828088+0000 mgr.y (mgr.14520) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:07 vm01 bash[28152]: cluster 2026-03-09T16:07:06.828088+0000 mgr.y (mgr.14520) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:07 vm01 bash[20728]: audit 2026-03-09T16:07:06.264243+0000 mon.c (mon.2) 541 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:07 vm01 bash[20728]: audit 2026-03-09T16:07:06.264243+0000 mon.c (mon.2) 541 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:07 vm01 bash[20728]: audit 2026-03-09T16:07:06.264695+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:07 vm01 bash[20728]: audit 2026-03-09T16:07:06.264695+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:07 vm01 bash[20728]: audit 2026-03-09T16:07:06.724037+0000 mgr.y (mgr.14520) 514 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:07 vm01 bash[20728]: audit 2026-03-09T16:07:06.724037+0000 mgr.y (mgr.14520) 514 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:07 vm01 bash[20728]: cluster 2026-03-09T16:07:06.828088+0000 mgr.y (mgr.14520) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:07.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:07 vm01 bash[20728]: cluster 2026-03-09T16:07:06.828088+0000 mgr.y (mgr.14520) 515 : cluster [DBG] pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: audit 2026-03-09T16:07:07.219236+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: audit 2026-03-09T16:07:07.219236+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: audit 2026-03-09T16:07:07.222235+0000 mon.c (mon.2) 542 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: audit 2026-03-09T16:07:07.222235+0000 mon.c (mon.2) 542 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: cluster 2026-03-09T16:07:07.227031+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: cluster 2026-03-09T16:07:07.227031+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: audit 2026-03-09T16:07:07.231202+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: audit 2026-03-09T16:07:07.231202+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: cluster 2026-03-09T16:07:08.219184+0000 mon.a (mon.0) 3257 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: cluster 2026-03-09T16:07:08.219184+0000 mon.a (mon.0) 3257 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: audit 2026-03-09T16:07:08.222475+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: audit 2026-03-09T16:07:08.222475+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: cluster 2026-03-09T16:07:08.235015+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T16:07:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:08 vm09 bash[22983]: cluster 2026-03-09T16:07:08.235015+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: audit 2026-03-09T16:07:07.219236+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: audit 2026-03-09T16:07:07.219236+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: audit 2026-03-09T16:07:07.222235+0000 mon.c (mon.2) 542 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: audit 2026-03-09T16:07:07.222235+0000 mon.c (mon.2) 542 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: cluster 2026-03-09T16:07:07.227031+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: cluster 2026-03-09T16:07:07.227031+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: audit 2026-03-09T16:07:07.231202+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: audit 2026-03-09T16:07:07.231202+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: cluster 2026-03-09T16:07:08.219184+0000 mon.a (mon.0) 3257 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: cluster 2026-03-09T16:07:08.219184+0000 mon.a (mon.0) 3257 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: audit 2026-03-09T16:07:08.222475+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: audit 2026-03-09T16:07:08.222475+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: cluster 2026-03-09T16:07:08.235015+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T16:07:08.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:08 vm01 bash[28152]: cluster 2026-03-09T16:07:08.235015+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: audit 2026-03-09T16:07:07.219236+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: audit 2026-03-09T16:07:07.219236+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: audit 2026-03-09T16:07:07.222235+0000 mon.c (mon.2) 542 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: audit 2026-03-09T16:07:07.222235+0000 mon.c (mon.2) 542 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: cluster 2026-03-09T16:07:07.227031+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: cluster 2026-03-09T16:07:07.227031+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: audit 2026-03-09T16:07:07.231202+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: audit 2026-03-09T16:07:07.231202+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]: dispatch 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: cluster 2026-03-09T16:07:08.219184+0000 mon.a (mon.0) 3257 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: cluster 2026-03-09T16:07:08.219184+0000 mon.a (mon.0) 3257 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: audit 2026-03-09T16:07:08.222475+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: audit 2026-03-09T16:07:08.222475+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-114"}]': finished 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: cluster 2026-03-09T16:07:08.235015+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T16:07:08.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:08 vm01 bash[20728]: cluster 2026-03-09T16:07:08.235015+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-09T16:07:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:09 vm09 bash[22983]: cluster 2026-03-09T16:07:08.828557+0000 mgr.y (mgr.14520) 516 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:07:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:09 vm09 bash[22983]: cluster 2026-03-09T16:07:08.828557+0000 mgr.y (mgr.14520) 516 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:07:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:09 vm09 bash[22983]: cluster 2026-03-09T16:07:08.982839+0000 mon.a (mon.0) 3260 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:09 vm09 bash[22983]: cluster 2026-03-09T16:07:08.982839+0000 mon.a (mon.0) 3260 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:09.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:09 vm01 bash[28152]: cluster 2026-03-09T16:07:08.828557+0000 mgr.y (mgr.14520) 516 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:07:09.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:09 vm01 bash[28152]: cluster 2026-03-09T16:07:08.828557+0000 mgr.y (mgr.14520) 516 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:07:09.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:09 vm01 bash[28152]: cluster 2026-03-09T16:07:08.982839+0000 mon.a (mon.0) 3260 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:09.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:09 vm01 bash[28152]: cluster 2026-03-09T16:07:08.982839+0000 mon.a (mon.0) 3260 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:09.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:09 vm01 bash[20728]: cluster 2026-03-09T16:07:08.828557+0000 mgr.y (mgr.14520) 516 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:07:09.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:09 vm01 bash[20728]: cluster 2026-03-09T16:07:08.828557+0000 mgr.y (mgr.14520) 516 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:07:09.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:09 vm01 bash[20728]: cluster 2026-03-09T16:07:08.982839+0000 mon.a (mon.0) 3260 : cluster 
[WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:09.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:09 vm01 bash[20728]: cluster 2026-03-09T16:07:08.982839+0000 mon.a (mon.0) 3260 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:07:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:10 vm09 bash[22983]: cluster 2026-03-09T16:07:09.269843+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T16:07:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:10 vm09 bash[22983]: cluster 2026-03-09T16:07:09.269843+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T16:07:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:10 vm01 bash[28152]: cluster 2026-03-09T16:07:09.269843+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T16:07:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:10 vm01 bash[28152]: cluster 2026-03-09T16:07:09.269843+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T16:07:10.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:10 vm01 bash[20728]: cluster 2026-03-09T16:07:09.269843+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T16:07:10.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:10 vm01 bash[20728]: cluster 2026-03-09T16:07:09.269843+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-09T16:07:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:11 vm09 bash[22983]: cluster 2026-03-09T16:07:10.279098+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T16:07:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:11 vm09 bash[22983]: cluster 2026-03-09T16:07:10.279098+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T16:07:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:11 vm09 bash[22983]: audit 2026-03-09T16:07:10.279674+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:11 vm09 bash[22983]: audit 2026-03-09T16:07:10.279674+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:11 vm09 bash[22983]: audit 2026-03-09T16:07:10.280929+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:11 vm09 bash[22983]: audit 2026-03-09T16:07:10.280929+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:11 vm09 bash[22983]: cluster 2026-03-09T16:07:10.828840+0000 mgr.y (mgr.14520) 517 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:11 vm09 bash[22983]: cluster 2026-03-09T16:07:10.828840+0000 mgr.y (mgr.14520) 517 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:11 vm01 bash[28152]: cluster 2026-03-09T16:07:10.279098+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:11 vm01 bash[28152]: cluster 2026-03-09T16:07:10.279098+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:11 vm01 bash[28152]: audit 2026-03-09T16:07:10.279674+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:11 vm01 bash[28152]: audit 2026-03-09T16:07:10.279674+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:11 vm01 bash[28152]: audit 2026-03-09T16:07:10.280929+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:11 vm01 bash[28152]: audit 2026-03-09T16:07:10.280929+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:11 vm01 bash[28152]: cluster 2026-03-09T16:07:10.828840+0000 mgr.y (mgr.14520) 517 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:11 vm01 bash[28152]: cluster 2026-03-09T16:07:10.828840+0000 mgr.y (mgr.14520) 517 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:11 vm01 bash[20728]: cluster 2026-03-09T16:07:10.279098+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:11 vm01 bash[20728]: cluster 2026-03-09T16:07:10.279098+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:11 vm01 bash[20728]: audit 2026-03-09T16:07:10.279674+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:11 vm01 bash[20728]: audit 2026-03-09T16:07:10.279674+0000 mon.c (mon.2) 543 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:11 vm01 bash[20728]: audit 2026-03-09T16:07:10.280929+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:11 vm01 bash[20728]: audit 2026-03-09T16:07:10.280929+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:11 vm01 bash[20728]: cluster 2026-03-09T16:07:10.828840+0000 mgr.y (mgr.14520) 517 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:11 vm01 bash[20728]: cluster 2026-03-09T16:07:10.828840+0000 mgr.y (mgr.14520) 517 : cluster [DBG] pgmap v883: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:12 vm09 bash[22983]: audit 2026-03-09T16:07:11.266044+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:12 vm09 bash[22983]: audit 2026-03-09T16:07:11.266044+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:12 vm09 bash[22983]: cluster 2026-03-09T16:07:11.281543+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T16:07:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:12 vm09 bash[22983]: cluster 2026-03-09T16:07:11.281543+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T16:07:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:12 vm09 bash[22983]: cluster 2026-03-09T16:07:12.276684+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T16:07:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:12 vm09 bash[22983]: cluster 2026-03-09T16:07:12.276684+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:12 vm01 bash[28152]: audit 2026-03-09T16:07:11.266044+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:12 vm01 bash[28152]: audit 2026-03-09T16:07:11.266044+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:12 vm01 bash[28152]: cluster 2026-03-09T16:07:11.281543+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:12 vm01 bash[28152]: cluster 2026-03-09T16:07:11.281543+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:12 vm01 bash[28152]: cluster 2026-03-09T16:07:12.276684+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:12 vm01 bash[28152]: cluster 2026-03-09T16:07:12.276684+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:12 vm01 bash[20728]: audit 2026-03-09T16:07:11.266044+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:12 vm01 bash[20728]: audit 2026-03-09T16:07:11.266044+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:12 vm01 bash[20728]: cluster 2026-03-09T16:07:11.281543+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:12 vm01 bash[20728]: cluster 2026-03-09T16:07:11.281543+0000 mon.a (mon.0) 3265 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:12 vm01 bash[20728]: cluster 2026-03-09T16:07:12.276684+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T16:07:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:12 vm01 bash[20728]: cluster 2026-03-09T16:07:12.276684+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-09T16:07:13.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:07:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:07:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:07:13.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:13 vm09 bash[22983]: audit 2026-03-09T16:07:12.297000+0000 mon.c (mon.2) 544 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:13 vm09 bash[22983]: audit 2026-03-09T16:07:12.297000+0000 mon.c (mon.2) 544 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:13 vm09 bash[22983]: audit 2026-03-09T16:07:12.297283+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:13 vm09 bash[22983]: audit 2026-03-09T16:07:12.297283+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:13 vm09 bash[22983]: cluster 2026-03-09T16:07:12.829177+0000 mgr.y (mgr.14520) 518 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:07:13.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:13 vm09 bash[22983]: cluster 2026-03-09T16:07:12.829177+0000 mgr.y (mgr.14520) 518 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:13 vm01 bash[28152]: audit 2026-03-09T16:07:12.297000+0000 mon.c (mon.2) 544 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:13 vm01 bash[28152]: audit 2026-03-09T16:07:12.297000+0000 mon.c (mon.2) 544 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:13 vm01 bash[28152]: audit 2026-03-09T16:07:12.297283+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:13 vm01 bash[28152]: audit 2026-03-09T16:07:12.297283+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:13 vm01 bash[28152]: cluster 2026-03-09T16:07:12.829177+0000 mgr.y (mgr.14520) 518 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:13 vm01 bash[28152]: cluster 2026-03-09T16:07:12.829177+0000 mgr.y (mgr.14520) 518 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:13 vm01 bash[20728]: audit 2026-03-09T16:07:12.297000+0000 mon.c (mon.2) 544 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:13 vm01 bash[20728]: audit 2026-03-09T16:07:12.297000+0000 mon.c (mon.2) 544 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:13 vm01 bash[20728]: audit 2026-03-09T16:07:12.297283+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:13 vm01 bash[20728]: audit 2026-03-09T16:07:12.297283+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:13 vm01 bash[20728]: cluster 2026-03-09T16:07:12.829177+0000 mgr.y (mgr.14520) 518 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:07:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:13 vm01 bash[20728]: cluster 2026-03-09T16:07:12.829177+0000 mgr.y (mgr.14520) 518 : cluster [DBG] pgmap v886: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:07:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: audit 2026-03-09T16:07:13.292884+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: audit 2026-03-09T16:07:13.292884+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: audit 2026-03-09T16:07:13.296567+0000 mon.c (mon.2) 545 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: audit 2026-03-09T16:07:13.296567+0000 mon.c (mon.2) 545 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: cluster 2026-03-09T16:07:13.301271+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: cluster 2026-03-09T16:07:13.301271+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: audit 2026-03-09T16:07:13.305578+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: audit 2026-03-09T16:07:13.305578+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: audit 2026-03-09T16:07:14.299609+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: audit 2026-03-09T16:07:14.299609+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: audit 2026-03-09T16:07:14.308976+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: audit 2026-03-09T16:07:14.308976+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: cluster 2026-03-09T16:07:14.313335+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T16:07:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:14 vm09 bash[22983]: cluster 2026-03-09T16:07:14.313335+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: audit 2026-03-09T16:07:13.292884+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: audit 2026-03-09T16:07:13.292884+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: audit 2026-03-09T16:07:13.296567+0000 mon.c (mon.2) 545 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: audit 2026-03-09T16:07:13.296567+0000 mon.c (mon.2) 545 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: cluster 2026-03-09T16:07:13.301271+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: cluster 2026-03-09T16:07:13.301271+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: audit 2026-03-09T16:07:13.305578+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: audit 2026-03-09T16:07:13.305578+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: audit 2026-03-09T16:07:14.299609+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: audit 2026-03-09T16:07:14.299609+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: audit 2026-03-09T16:07:14.308976+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: audit 2026-03-09T16:07:14.308976+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: cluster 2026-03-09T16:07:14.313335+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T16:07:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:14 vm01 bash[28152]: cluster 2026-03-09T16:07:14.313335+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: audit 2026-03-09T16:07:13.292884+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: audit 2026-03-09T16:07:13.292884+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: audit 2026-03-09T16:07:13.296567+0000 mon.c (mon.2) 545 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: audit 2026-03-09T16:07:13.296567+0000 mon.c (mon.2) 545 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: cluster 2026-03-09T16:07:13.301271+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: cluster 2026-03-09T16:07:13.301271+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: audit 2026-03-09T16:07:13.305578+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: audit 2026-03-09T16:07:13.305578+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: audit 2026-03-09T16:07:14.299609+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: audit 2026-03-09T16:07:14.299609+0000 mon.a (mon.0) 3271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: audit 2026-03-09T16:07:14.308976+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: audit 2026-03-09T16:07:14.308976+0000 mon.c (mon.2) 546 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: cluster 2026-03-09T16:07:14.313335+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T16:07:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:14 vm01 bash[20728]: cluster 2026-03-09T16:07:14.313335+0000 mon.a (mon.0) 3272 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-09T16:07:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:15 vm09 bash[22983]: audit 2026-03-09T16:07:14.315528+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:15 vm09 bash[22983]: audit 2026-03-09T16:07:14.315528+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:15 vm09 bash[22983]: audit 2026-03-09T16:07:14.395759+0000 mon.a (mon.0) 3274 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:15 vm09 bash[22983]: audit 2026-03-09T16:07:14.395759+0000 mon.a (mon.0) 3274 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:15 vm09 bash[22983]: cluster 2026-03-09T16:07:14.829526+0000 mgr.y (mgr.14520) 519 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:07:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:15 vm09 bash[22983]: cluster 2026-03-09T16:07:14.829526+0000 mgr.y (mgr.14520) 519 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:07:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:15 vm09 bash[22983]: cluster 2026-03-09T16:07:15.299651+0000 mon.a (mon.0) 3275 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:15 vm09 bash[22983]: cluster 2026-03-09T16:07:15.299651+0000 mon.a (mon.0) 3275 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:15 vm01 bash[28152]: audit 2026-03-09T16:07:14.315528+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:15 vm01 bash[28152]: audit 2026-03-09T16:07:14.315528+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:15 vm01 bash[28152]: audit 2026-03-09T16:07:14.395759+0000 mon.a (mon.0) 3274 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:15 vm01 bash[28152]: audit 2026-03-09T16:07:14.395759+0000 mon.a (mon.0) 3274 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:15 vm01 bash[28152]: cluster 2026-03-09T16:07:14.829526+0000 mgr.y (mgr.14520) 519 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:15 vm01 bash[28152]: cluster 2026-03-09T16:07:14.829526+0000 mgr.y (mgr.14520) 519 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:15 vm01 bash[28152]: cluster 2026-03-09T16:07:15.299651+0000 mon.a (mon.0) 3275 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:15 vm01 bash[28152]: cluster 2026-03-09T16:07:15.299651+0000 mon.a (mon.0) 3275 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:15 vm01 bash[20728]: audit 2026-03-09T16:07:14.315528+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:15 vm01 bash[20728]: audit 2026-03-09T16:07:14.315528+0000 mon.a (mon.0) 3273 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]: dispatch 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:15 vm01 bash[20728]: audit 2026-03-09T16:07:14.395759+0000 mon.a (mon.0) 3274 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:15 vm01 bash[20728]: audit 2026-03-09T16:07:14.395759+0000 mon.a (mon.0) 3274 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:15 vm01 bash[20728]: cluster 2026-03-09T16:07:14.829526+0000 mgr.y (mgr.14520) 519 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:15 vm01 bash[20728]: cluster 2026-03-09T16:07:14.829526+0000 mgr.y (mgr.14520) 519 : cluster [DBG] pgmap v889: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:15 vm01 bash[20728]: cluster 2026-03-09T16:07:15.299651+0000 mon.a (mon.0) 3275 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:15 vm01 bash[20728]: cluster 2026-03-09T16:07:15.299651+0000 mon.a (mon.0) 3275 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:16 vm01 bash[28152]: audit 2026-03-09T16:07:15.346596+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]': finished 2026-03-09T16:07:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:16 vm01 bash[28152]: audit 2026-03-09T16:07:15.346596+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]': finished 2026-03-09T16:07:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:16 vm01 bash[28152]: cluster 2026-03-09T16:07:15.358295+0000 mon.a (mon.0) 3277 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T16:07:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:16 vm01 bash[28152]: cluster 2026-03-09T16:07:15.358295+0000 mon.a (mon.0) 3277 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T16:07:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:16 vm01 bash[28152]: audit 2026-03-09T16:07:15.387460+0000 mon.c (mon.2) 547 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:16 vm01 bash[28152]: audit 2026-03-09T16:07:15.387460+0000 mon.c (mon.2) 547 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:16 vm01 bash[28152]: audit 2026-03-09T16:07:15.387699+0000 mgr.y (mgr.14520) 520 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:16 vm01 bash[28152]: audit 2026-03-09T16:07:15.387699+0000 mgr.y (mgr.14520) 520 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:16 vm01 bash[20728]: audit 2026-03-09T16:07:15.346596+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]': finished 2026-03-09T16:07:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:16 vm01 bash[20728]: audit 2026-03-09T16:07:15.346596+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]': finished 2026-03-09T16:07:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:16 vm01 bash[20728]: cluster 2026-03-09T16:07:15.358295+0000 mon.a (mon.0) 3277 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T16:07:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:16 vm01 bash[20728]: cluster 2026-03-09T16:07:15.358295+0000 mon.a (mon.0) 3277 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T16:07:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:16 vm01 bash[20728]: audit 2026-03-09T16:07:15.387460+0000 mon.c (mon.2) 547 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:16 vm01 bash[20728]: audit 2026-03-09T16:07:15.387460+0000 mon.c (mon.2) 547 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:16 vm01 bash[20728]: audit 2026-03-09T16:07:15.387699+0000 mgr.y (mgr.14520) 520 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:16.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:16 vm01 bash[20728]: audit 2026-03-09T16:07:15.387699+0000 mgr.y (mgr.14520) 520 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:16.723 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:16 vm09 bash[22983]: audit 2026-03-09T16:07:15.346596+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]': finished 2026-03-09T16:07:16.723 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:16 vm09 bash[22983]: audit 2026-03-09T16:07:15.346596+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-116", "mode": "writeback"}]': finished 2026-03-09T16:07:16.723 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:16 vm09 bash[22983]: cluster 2026-03-09T16:07:15.358295+0000 mon.a (mon.0) 3277 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T16:07:16.723 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:16 vm09 bash[22983]: cluster 2026-03-09T16:07:15.358295+0000 mon.a (mon.0) 3277 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-09T16:07:16.723 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:16 vm09 bash[22983]: audit 2026-03-09T16:07:15.387460+0000 mon.c (mon.2) 547 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:16.723 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:16 vm09 bash[22983]: audit 2026-03-09T16:07:15.387460+0000 mon.c (mon.2) 547 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:16.723 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:16 vm09 bash[22983]: audit 2026-03-09T16:07:15.387699+0000 mgr.y (mgr.14520) 520 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:16.723 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:16 vm09 bash[22983]: audit 2026-03-09T16:07:15.387699+0000 mgr.y (mgr.14520) 520 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-09T16:07:17.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:07:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:07:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:17 vm09 bash[22983]: cluster 2026-03-09T16:07:15.744563+0000 osd.2 (osd.2) 11 : cluster [DBG] 318.7 scrub starts 2026-03-09T16:07:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:17 vm09 bash[22983]: cluster 2026-03-09T16:07:15.744563+0000 osd.2 (osd.2) 11 : cluster [DBG] 318.7 scrub starts 2026-03-09T16:07:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:17 vm09 bash[22983]: cluster 2026-03-09T16:07:15.745915+0000 osd.2 (osd.2) 12 : cluster [DBG] 318.7 scrub ok 2026-03-09T16:07:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:17 vm09 bash[22983]: cluster 2026-03-09T16:07:15.745915+0000 osd.2 (osd.2) 12 : cluster [DBG] 318.7 scrub ok 2026-03-09T16:07:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:17 vm09 bash[22983]: audit 2026-03-09T16:07:16.726847+0000 mgr.y (mgr.14520) 521 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:17 vm09 bash[22983]: audit 2026-03-09T16:07:16.726847+0000 mgr.y (mgr.14520) 521 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:17 vm09 bash[22983]: cluster 2026-03-09T16:07:16.829986+0000 mgr.y (mgr.14520) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:07:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:17 vm09 bash[22983]: cluster 2026-03-09T16:07:16.829986+0000 mgr.y 
(mgr.14520) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:17 vm01 bash[28152]: cluster 2026-03-09T16:07:15.744563+0000 osd.2 (osd.2) 11 : cluster [DBG] 318.7 scrub starts 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:17 vm01 bash[28152]: cluster 2026-03-09T16:07:15.744563+0000 osd.2 (osd.2) 11 : cluster [DBG] 318.7 scrub starts 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:17 vm01 bash[28152]: cluster 2026-03-09T16:07:15.745915+0000 osd.2 (osd.2) 12 : cluster [DBG] 318.7 scrub ok 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:17 vm01 bash[28152]: cluster 2026-03-09T16:07:15.745915+0000 osd.2 (osd.2) 12 : cluster [DBG] 318.7 scrub ok 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:17 vm01 bash[28152]: audit 2026-03-09T16:07:16.726847+0000 mgr.y (mgr.14520) 521 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:17 vm01 bash[28152]: audit 2026-03-09T16:07:16.726847+0000 mgr.y (mgr.14520) 521 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:17 vm01 bash[28152]: cluster 2026-03-09T16:07:16.829986+0000 mgr.y (mgr.14520) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:17 vm01 bash[28152]: cluster 2026-03-09T16:07:16.829986+0000 mgr.y (mgr.14520) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:17 vm01 bash[20728]: cluster 2026-03-09T16:07:15.744563+0000 osd.2 (osd.2) 11 : cluster [DBG] 318.7 scrub starts 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:17 vm01 bash[20728]: cluster 2026-03-09T16:07:15.744563+0000 osd.2 (osd.2) 11 : cluster [DBG] 318.7 scrub starts 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:17 vm01 bash[20728]: cluster 2026-03-09T16:07:15.745915+0000 osd.2 (osd.2) 12 : cluster [DBG] 318.7 scrub ok 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:17 vm01 bash[20728]: cluster 2026-03-09T16:07:15.745915+0000 osd.2 (osd.2) 12 : cluster [DBG] 318.7 scrub ok 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:17 vm01 bash[20728]: audit 2026-03-09T16:07:16.726847+0000 mgr.y (mgr.14520) 521 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:17 vm01 bash[20728]: audit 2026-03-09T16:07:16.726847+0000 mgr.y (mgr.14520) 521 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:17.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:17 vm01 bash[20728]: cluster 2026-03-09T16:07:16.829986+0000 mgr.y 
(mgr.14520) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:07:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:17 vm01 bash[20728]: cluster 2026-03-09T16:07:16.829986+0000 mgr.y (mgr.14520) 522 : cluster [DBG] pgmap v891: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T16:07:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:19 vm01 bash[28152]: cluster 2026-03-09T16:07:18.830465+0000 mgr.y (mgr.14520) 523 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:07:20.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:19 vm01 bash[28152]: cluster 2026-03-09T16:07:18.830465+0000 mgr.y (mgr.14520) 523 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:07:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:19 vm01 bash[20728]: cluster 2026-03-09T16:07:18.830465+0000 mgr.y (mgr.14520) 523 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:07:20.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:19 vm01 bash[20728]: cluster 2026-03-09T16:07:18.830465+0000 mgr.y (mgr.14520) 523 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:07:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:19 vm09 bash[22983]: cluster 2026-03-09T16:07:18.830465+0000 mgr.y (mgr.14520) 523 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:07:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:19 vm09 bash[22983]: cluster 2026-03-09T16:07:18.830465+0000 mgr.y (mgr.14520) 523 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T16:07:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:21 vm01 bash[28152]: cluster 2026-03-09T16:07:20.830975+0000 mgr.y (mgr.14520) 524 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 951 B/s wr, 2 op/s 2026-03-09T16:07:22.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:21 vm01 bash[28152]: cluster 2026-03-09T16:07:20.830975+0000 mgr.y (mgr.14520) 524 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 951 B/s wr, 2 op/s 2026-03-09T16:07:22.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:21 vm01 bash[20728]: cluster 2026-03-09T16:07:20.830975+0000 mgr.y (mgr.14520) 524 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 951 B/s wr, 2 op/s 2026-03-09T16:07:22.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:21 vm01 bash[20728]: cluster 2026-03-09T16:07:20.830975+0000 mgr.y (mgr.14520) 524 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 951 B/s wr, 2 op/s 2026-03-09T16:07:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:21 vm09 
bash[22983]: cluster 2026-03-09T16:07:20.830975+0000 mgr.y (mgr.14520) 524 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 951 B/s wr, 2 op/s 2026-03-09T16:07:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:21 vm09 bash[22983]: cluster 2026-03-09T16:07:20.830975+0000 mgr.y (mgr.14520) 524 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 951 B/s wr, 2 op/s 2026-03-09T16:07:23.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:07:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:07:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:07:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:23 vm01 bash[28152]: cluster 2026-03-09T16:07:22.831373+0000 mgr.y (mgr.14520) 525 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 840 B/s rd, 1 op/s 2026-03-09T16:07:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:23 vm01 bash[28152]: cluster 2026-03-09T16:07:22.831373+0000 mgr.y (mgr.14520) 525 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 840 B/s rd, 1 op/s 2026-03-09T16:07:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:23 vm01 bash[20728]: cluster 2026-03-09T16:07:22.831373+0000 mgr.y (mgr.14520) 525 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 840 B/s rd, 1 op/s 2026-03-09T16:07:24.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:23 vm01 bash[20728]: cluster 2026-03-09T16:07:22.831373+0000 mgr.y (mgr.14520) 525 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 840 B/s rd, 1 op/s 2026-03-09T16:07:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:23 vm09 bash[22983]: cluster 2026-03-09T16:07:22.831373+0000 mgr.y (mgr.14520) 525 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 840 B/s rd, 1 op/s 2026-03-09T16:07:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:23 vm09 bash[22983]: cluster 2026-03-09T16:07:22.831373+0000 mgr.y (mgr.14520) 525 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 840 B/s rd, 1 op/s 2026-03-09T16:07:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:25 vm01 bash[28152]: cluster 2026-03-09T16:07:24.832136+0000 mgr.y (mgr.14520) 526 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:26.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:25 vm01 bash[28152]: cluster 2026-03-09T16:07:24.832136+0000 mgr.y (mgr.14520) 526 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:25 vm01 bash[20728]: cluster 2026-03-09T16:07:24.832136+0000 mgr.y (mgr.14520) 526 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:26.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:25 vm01 bash[20728]: cluster 2026-03-09T16:07:24.832136+0000 mgr.y (mgr.14520) 526 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:25 vm09 bash[22983]: cluster 2026-03-09T16:07:24.832136+0000 mgr.y (mgr.14520) 526 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:25 vm09 bash[22983]: cluster 2026-03-09T16:07:24.832136+0000 mgr.y (mgr.14520) 526 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:27.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:07:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:07:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:27 vm01 bash[28152]: audit 2026-03-09T16:07:26.736833+0000 mgr.y (mgr.14520) 527 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:27 vm01 bash[28152]: audit 2026-03-09T16:07:26.736833+0000 mgr.y (mgr.14520) 527 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:27 vm01 bash[28152]: cluster 2026-03-09T16:07:26.832417+0000 mgr.y (mgr.14520) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:07:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:27 vm01 bash[28152]: cluster 2026-03-09T16:07:26.832417+0000 mgr.y (mgr.14520) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:07:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:27 vm01 bash[20728]: audit 2026-03-09T16:07:26.736833+0000 mgr.y (mgr.14520) 527 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:27 vm01 bash[20728]: audit 2026-03-09T16:07:26.736833+0000 mgr.y (mgr.14520) 527 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:27 vm01 bash[20728]: cluster 2026-03-09T16:07:26.832417+0000 mgr.y (mgr.14520) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:07:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:27 vm01 bash[20728]: cluster 2026-03-09T16:07:26.832417+0000 mgr.y (mgr.14520) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:07:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:27 vm09 bash[22983]: audit 2026-03-09T16:07:26.736833+0000 mgr.y (mgr.14520) 527 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:27 vm09 bash[22983]: audit 2026-03-09T16:07:26.736833+0000 mgr.y (mgr.14520) 527 : audit [DBG] from='client.14496 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:27 vm09 bash[22983]: cluster 2026-03-09T16:07:26.832417+0000 mgr.y (mgr.14520) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:07:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:27 vm09 bash[22983]: cluster 2026-03-09T16:07:26.832417+0000 mgr.y (mgr.14520) 528 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:29 vm01 bash[28152]: cluster 2026-03-09T16:07:28.832840+0000 mgr.y (mgr.14520) 529 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:29 vm01 bash[28152]: cluster 2026-03-09T16:07:28.832840+0000 mgr.y (mgr.14520) 529 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:29 vm01 bash[28152]: audit 2026-03-09T16:07:29.407152+0000 mon.a (mon.0) 3278 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:29 vm01 bash[28152]: audit 2026-03-09T16:07:29.407152+0000 mon.a (mon.0) 3278 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:29 vm01 bash[28152]: audit 2026-03-09T16:07:29.408295+0000 mon.a (mon.0) 3279 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:29 vm01 bash[28152]: audit 2026-03-09T16:07:29.408295+0000 mon.a (mon.0) 3279 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:29 vm01 bash[20728]: cluster 2026-03-09T16:07:28.832840+0000 mgr.y (mgr.14520) 529 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:29 vm01 bash[20728]: cluster 2026-03-09T16:07:28.832840+0000 mgr.y (mgr.14520) 529 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:29 vm01 bash[20728]: audit 2026-03-09T16:07:29.407152+0000 mon.a (mon.0) 3278 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:29 vm01 bash[20728]: audit 2026-03-09T16:07:29.407152+0000 mon.a (mon.0) 3278 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:29 vm01 bash[20728]: audit 2026-03-09T16:07:29.408295+0000 mon.a (mon.0) 3279 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:29 vm01 bash[20728]: audit 2026-03-09T16:07:29.408295+0000 mon.a (mon.0) 3279 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:29 vm09 bash[22983]: cluster 2026-03-09T16:07:28.832840+0000 mgr.y (mgr.14520) 529 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T16:07:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:29 vm09 bash[22983]: cluster 2026-03-09T16:07:28.832840+0000 mgr.y (mgr.14520) 529 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T16:07:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:29 vm09 bash[22983]: audit 2026-03-09T16:07:29.407152+0000 mon.a (mon.0) 3278 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:29 vm09 bash[22983]: audit 2026-03-09T16:07:29.407152+0000 mon.a (mon.0) 3278 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:29 vm09 bash[22983]: audit 2026-03-09T16:07:29.408295+0000 mon.a (mon.0) 3279 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:29 vm09 bash[22983]: audit 2026-03-09T16:07:29.408295+0000 mon.a (mon.0) 3279 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:31 vm09 bash[22983]: cluster 2026-03-09T16:07:30.833526+0000 mgr.y (mgr.14520) 530 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:07:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:31 vm09 bash[22983]: cluster 2026-03-09T16:07:30.833526+0000 mgr.y (mgr.14520) 530 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:07:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:31 vm09 bash[22983]: cluster 2026-03-09T16:07:30.939088+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T16:07:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:31 vm09 bash[22983]: cluster 2026-03-09T16:07:30.939088+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T16:07:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:31 vm09 bash[22983]: audit 2026-03-09T16:07:30.997614+0000 mon.c (mon.2) 548 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:31 vm09 bash[22983]: audit 2026-03-09T16:07:30.997614+0000 mon.c (mon.2) 548 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:31 vm09 bash[22983]: audit 2026-03-09T16:07:30.997886+0000 mon.a (mon.0) 3281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:31 vm09 bash[22983]: audit 2026-03-09T16:07:30.997886+0000 mon.a (mon.0) 3281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:31 vm01 bash[28152]: cluster 2026-03-09T16:07:30.833526+0000 mgr.y (mgr.14520) 530 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:31 vm01 bash[28152]: cluster 2026-03-09T16:07:30.833526+0000 mgr.y (mgr.14520) 530 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:31 vm01 bash[28152]: cluster 2026-03-09T16:07:30.939088+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:31 vm01 bash[28152]: cluster 2026-03-09T16:07:30.939088+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:31 vm01 bash[28152]: audit 2026-03-09T16:07:30.997614+0000 mon.c (mon.2) 548 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:31 vm01 bash[28152]: audit 2026-03-09T16:07:30.997614+0000 mon.c (mon.2) 548 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:31 vm01 bash[28152]: audit 2026-03-09T16:07:30.997886+0000 mon.a (mon.0) 3281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:31 vm01 bash[28152]: audit 2026-03-09T16:07:30.997886+0000 mon.a (mon.0) 3281 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:31 vm01 bash[20728]: cluster 2026-03-09T16:07:30.833526+0000 mgr.y (mgr.14520) 530 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:31 vm01 bash[20728]: cluster 2026-03-09T16:07:30.833526+0000 mgr.y (mgr.14520) 530 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:31 vm01 bash[20728]: cluster 2026-03-09T16:07:30.939088+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:31 vm01 bash[20728]: cluster 2026-03-09T16:07:30.939088+0000 mon.a (mon.0) 3280 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:31 vm01 bash[20728]: audit 2026-03-09T16:07:30.997614+0000 mon.c (mon.2) 548 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:31 vm01 bash[20728]: audit 2026-03-09T16:07:30.997614+0000 mon.c (mon.2) 548 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:31 vm01 bash[20728]: audit 2026-03-09T16:07:30.997886+0000 mon.a (mon.0) 3281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:31 vm01 bash[20728]: audit 2026-03-09T16:07:30.997886+0000 mon.a (mon.0) 3281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: audit 2026-03-09T16:07:31.938625+0000 mon.a (mon.0) 3282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: audit 2026-03-09T16:07:31.938625+0000 mon.a (mon.0) 3282 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: cluster 2026-03-09T16:07:31.941125+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: cluster 2026-03-09T16:07:31.941125+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: audit 2026-03-09T16:07:31.945666+0000 mon.c (mon.2) 549 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: audit 2026-03-09T16:07:31.945666+0000 mon.c (mon.2) 549 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: audit 2026-03-09T16:07:31.946538+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: audit 2026-03-09T16:07:31.946538+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: cluster 2026-03-09T16:07:32.938789+0000 mon.a (mon.0) 3285 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: cluster 2026-03-09T16:07:32.938789+0000 mon.a (mon.0) 3285 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: audit 2026-03-09T16:07:32.941813+0000 mon.a (mon.0) 3286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: audit 2026-03-09T16:07:32.941813+0000 mon.a (mon.0) 3286 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: cluster 2026-03-09T16:07:32.944764+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:32 vm01 bash[28152]: cluster 2026-03-09T16:07:32.944764+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:07:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:07:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: audit 2026-03-09T16:07:31.938625+0000 mon.a (mon.0) 3282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: audit 2026-03-09T16:07:31.938625+0000 mon.a (mon.0) 3282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: cluster 2026-03-09T16:07:31.941125+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: cluster 2026-03-09T16:07:31.941125+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: audit 2026-03-09T16:07:31.945666+0000 mon.c (mon.2) 549 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: audit 2026-03-09T16:07:31.945666+0000 mon.c (mon.2) 549 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: audit 2026-03-09T16:07:31.946538+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: audit 2026-03-09T16:07:31.946538+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: cluster 2026-03-09T16:07:32.938789+0000 mon.a (mon.0) 3285 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: cluster 2026-03-09T16:07:32.938789+0000 mon.a (mon.0) 3285 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: audit 2026-03-09T16:07:32.941813+0000 mon.a (mon.0) 3286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: audit 2026-03-09T16:07:32.941813+0000 mon.a (mon.0) 3286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: cluster 2026-03-09T16:07:32.944764+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T16:07:33.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:32 vm01 bash[20728]: cluster 2026-03-09T16:07:32.944764+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T16:07:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: audit 2026-03-09T16:07:31.938625+0000 mon.a (mon.0) 3282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: audit 2026-03-09T16:07:31.938625+0000 mon.a (mon.0) 3282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: cluster 2026-03-09T16:07:31.941125+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T16:07:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: cluster 2026-03-09T16:07:31.941125+0000 mon.a (mon.0) 3283 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-09T16:07:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: audit 2026-03-09T16:07:31.945666+0000 mon.c (mon.2) 549 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: audit 2026-03-09T16:07:31.945666+0000 mon.c (mon.2) 549 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: audit 2026-03-09T16:07:31.946538+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: audit 2026-03-09T16:07:31.946538+0000 mon.a (mon.0) 3284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]: dispatch 2026-03-09T16:07:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: cluster 2026-03-09T16:07:32.938789+0000 mon.a (mon.0) 3285 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: cluster 2026-03-09T16:07:32.938789+0000 mon.a (mon.0) 3285 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: audit 2026-03-09T16:07:32.941813+0000 mon.a (mon.0) 3286 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: audit 2026-03-09T16:07:32.941813+0000 mon.a (mon.0) 3286 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-116"}]': finished 2026-03-09T16:07:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: cluster 2026-03-09T16:07:32.944764+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T16:07:33.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:32 vm09 bash[22983]: cluster 2026-03-09T16:07:32.944764+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-09T16:07:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:33 vm09 bash[22983]: cluster 2026-03-09T16:07:32.833824+0000 mgr.y (mgr.14520) 531 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:07:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:33 vm09 bash[22983]: cluster 2026-03-09T16:07:32.833824+0000 mgr.y (mgr.14520) 531 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:07:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:33 vm01 bash[28152]: cluster 2026-03-09T16:07:32.833824+0000 mgr.y (mgr.14520) 531 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:07:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:33 vm01 bash[28152]: cluster 2026-03-09T16:07:32.833824+0000 mgr.y (mgr.14520) 531 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:07:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:33 vm01 bash[20728]: cluster 2026-03-09T16:07:32.833824+0000 mgr.y (mgr.14520) 531 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:07:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:33 vm01 bash[20728]: cluster 2026-03-09T16:07:32.833824+0000 mgr.y (mgr.14520) 531 : cluster [DBG] pgmap v901: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:07:35.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:35 vm09 bash[22983]: cluster 2026-03-09T16:07:33.982567+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T16:07:35.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:35 vm09 bash[22983]: cluster 2026-03-09T16:07:33.982567+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T16:07:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:35 vm01 bash[28152]: cluster 2026-03-09T16:07:33.982567+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T16:07:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:35 vm01 bash[28152]: cluster 2026-03-09T16:07:33.982567+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T16:07:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:35 vm01 bash[20728]: cluster 2026-03-09T16:07:33.982567+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T16:07:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:35 vm01 bash[20728]: cluster 2026-03-09T16:07:33.982567+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in 2026-03-09T16:07:36.382 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:36 vm09 bash[22983]: cluster 2026-03-09T16:07:34.834301+0000 mgr.y (mgr.14520) 532 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:36 vm09 bash[22983]: cluster 2026-03-09T16:07:34.834301+0000 mgr.y (mgr.14520) 532 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:36 vm09 bash[22983]: cluster 2026-03-09T16:07:35.049202+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-09T16:07:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:36 vm09 bash[22983]: cluster 2026-03-09T16:07:35.049202+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-09T16:07:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:36 vm09 bash[22983]: audit 2026-03-09T16:07:35.055633+0000 mon.c (mon.2) 550 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:36 vm09 bash[22983]: audit 2026-03-09T16:07:35.055633+0000 mon.c (mon.2) 550 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:36 vm09 bash[22983]: audit 2026-03-09T16:07:35.056125+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:36 vm09 bash[22983]: audit 2026-03-09T16:07:35.056125+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:36 vm01 bash[28152]: cluster 2026-03-09T16:07:34.834301+0000 mgr.y (mgr.14520) 532 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:36 vm01 bash[28152]: cluster 2026-03-09T16:07:34.834301+0000 mgr.y (mgr.14520) 532 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:36 vm01 bash[28152]: cluster 2026-03-09T16:07:35.049202+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:36 vm01 bash[28152]: cluster 2026-03-09T16:07:35.049202+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:36 vm01 bash[28152]: audit 2026-03-09T16:07:35.055633+0000 mon.c (mon.2) 550 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:36 vm01 bash[28152]: audit 2026-03-09T16:07:35.055633+0000 mon.c (mon.2) 550 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:36 vm01 bash[28152]: audit 2026-03-09T16:07:35.056125+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:36 vm01 bash[28152]: audit 2026-03-09T16:07:35.056125+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:36 vm01 bash[20728]: cluster 2026-03-09T16:07:34.834301+0000 mgr.y (mgr.14520) 532 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:36 vm01 bash[20728]: cluster 2026-03-09T16:07:34.834301+0000 mgr.y (mgr.14520) 532 : cluster [DBG] pgmap v904: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:36 vm01 bash[20728]: cluster 2026-03-09T16:07:35.049202+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:36 vm01 bash[20728]: cluster 2026-03-09T16:07:35.049202+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:36 vm01 bash[20728]: audit 2026-03-09T16:07:35.055633+0000 mon.c (mon.2) 550 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:36 vm01 bash[20728]: audit 2026-03-09T16:07:35.055633+0000 mon.c (mon.2) 550 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:36 vm01 bash[20728]: audit 2026-03-09T16:07:35.056125+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:36 vm01 bash[20728]: audit 2026-03-09T16:07:35.056125+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:37.050 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:07:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:07:37.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:37 vm09 bash[22983]: audit 2026-03-09T16:07:36.028869+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:37.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:37 vm09 bash[22983]: audit 2026-03-09T16:07:36.028869+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:37.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:37 vm09 bash[22983]: cluster 2026-03-09T16:07:36.032845+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T16:07:37.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:37 vm09 bash[22983]: cluster 2026-03-09T16:07:36.032845+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T16:07:37.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:37 vm09 bash[22983]: cluster 2026-03-09T16:07:37.036056+0000 mon.a (mon.0) 3293 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T16:07:37.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:37 vm09 bash[22983]: cluster 2026-03-09T16:07:37.036056+0000 mon.a (mon.0) 3293 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T16:07:37.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:37 vm09 bash[22983]: audit 2026-03-09T16:07:37.051549+0000 mon.a (mon.0) 3294 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:37.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:37 vm09 bash[22983]: audit 2026-03-09T16:07:37.051549+0000 mon.a (mon.0) 3294 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:37 vm01 bash[28152]: audit 2026-03-09T16:07:36.028869+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:37 vm01 bash[28152]: audit 2026-03-09T16:07:36.028869+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:37 vm01 bash[28152]: cluster 2026-03-09T16:07:36.032845+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:37 vm01 bash[28152]: cluster 2026-03-09T16:07:36.032845+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:37 vm01 bash[28152]: cluster 2026-03-09T16:07:37.036056+0000 mon.a (mon.0) 3293 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:37 vm01 bash[28152]: cluster 2026-03-09T16:07:37.036056+0000 mon.a (mon.0) 3293 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:37 vm01 bash[28152]: audit 2026-03-09T16:07:37.051549+0000 mon.a (mon.0) 3294 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:37 vm01 bash[28152]: audit 2026-03-09T16:07:37.051549+0000 mon.a (mon.0) 3294 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:37 vm01 bash[20728]: audit 2026-03-09T16:07:36.028869+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:37 vm01 bash[20728]: audit 2026-03-09T16:07:36.028869+0000 mon.a (mon.0) 3291 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:37 vm01 bash[20728]: cluster 2026-03-09T16:07:36.032845+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:37 vm01 bash[20728]: cluster 2026-03-09T16:07:36.032845+0000 mon.a (mon.0) 3292 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:37 vm01 bash[20728]: cluster 2026-03-09T16:07:37.036056+0000 mon.a (mon.0) 3293 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:37 vm01 bash[20728]: cluster 2026-03-09T16:07:37.036056+0000 mon.a (mon.0) 3293 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:37 vm01 bash[20728]: audit 2026-03-09T16:07:37.051549+0000 mon.a (mon.0) 3294 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:37 vm01 bash[20728]: audit 2026-03-09T16:07:37.051549+0000 mon.a (mon.0) 3294 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: audit 2026-03-09T16:07:36.744195+0000 mgr.y (mgr.14520) 533 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: audit 2026-03-09T16:07:36.744195+0000 mgr.y (mgr.14520) 533 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: cluster 2026-03-09T16:07:36.834589+0000 mgr.y (mgr.14520) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: cluster 2026-03-09T16:07:36.834589+0000 mgr.y (mgr.14520) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: audit 2026-03-09T16:07:37.051179+0000 mon.c (mon.2) 551 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: audit 2026-03-09T16:07:37.051179+0000 mon.c (mon.2) 551 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: audit 2026-03-09T16:07:38.035360+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: audit 2026-03-09T16:07:38.035360+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: cluster 2026-03-09T16:07:38.039410+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T16:07:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: cluster 2026-03-09T16:07:38.039410+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T16:07:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: audit 2026-03-09T16:07:38.041498+0000 mon.c (mon.2) 552 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: audit 2026-03-09T16:07:38.041498+0000 mon.c (mon.2) 552 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: audit 2026-03-09T16:07:38.041798+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:38 vm09 bash[22983]: audit 2026-03-09T16:07:38.041798+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: audit 2026-03-09T16:07:36.744195+0000 mgr.y (mgr.14520) 533 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: audit 2026-03-09T16:07:36.744195+0000 mgr.y (mgr.14520) 533 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: cluster 2026-03-09T16:07:36.834589+0000 mgr.y (mgr.14520) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: cluster 2026-03-09T16:07:36.834589+0000 mgr.y (mgr.14520) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: audit 2026-03-09T16:07:37.051179+0000 mon.c (mon.2) 551 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: audit 2026-03-09T16:07:37.051179+0000 mon.c (mon.2) 551 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: audit 2026-03-09T16:07:38.035360+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: audit 2026-03-09T16:07:38.035360+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: cluster 2026-03-09T16:07:38.039410+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: cluster 2026-03-09T16:07:38.039410+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: audit 2026-03-09T16:07:38.041498+0000 mon.c (mon.2) 552 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: audit 2026-03-09T16:07:38.041498+0000 mon.c (mon.2) 552 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: audit 2026-03-09T16:07:38.041798+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:38 vm01 bash[28152]: audit 2026-03-09T16:07:38.041798+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: audit 2026-03-09T16:07:36.744195+0000 mgr.y (mgr.14520) 533 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: audit 2026-03-09T16:07:36.744195+0000 mgr.y (mgr.14520) 533 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: cluster 2026-03-09T16:07:36.834589+0000 mgr.y (mgr.14520) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: cluster 2026-03-09T16:07:36.834589+0000 mgr.y (mgr.14520) 534 : cluster [DBG] pgmap v907: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: audit 2026-03-09T16:07:37.051179+0000 mon.c (mon.2) 551 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: audit 2026-03-09T16:07:37.051179+0000 mon.c (mon.2) 551 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: audit 2026-03-09T16:07:38.035360+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: audit 2026-03-09T16:07:38.035360+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: cluster 2026-03-09T16:07:38.039410+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T16:07:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: cluster 2026-03-09T16:07:38.039410+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-09T16:07:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: audit 2026-03-09T16:07:38.041498+0000 mon.c (mon.2) 552 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: audit 2026-03-09T16:07:38.041498+0000 mon.c (mon.2) 552 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: audit 2026-03-09T16:07:38.041798+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:38 vm01 bash[20728]: audit 2026-03-09T16:07:38.041798+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:40 vm09 bash[22983]: cluster 2026-03-09T16:07:38.835077+0000 mgr.y (mgr.14520) 535 : cluster [DBG] pgmap v910: 268 pgs: 11 unknown, 257 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s 2026-03-09T16:07:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:40 vm09 bash[22983]: cluster 2026-03-09T16:07:38.835077+0000 mgr.y (mgr.14520) 535 : cluster [DBG] pgmap v910: 268 pgs: 11 unknown, 257 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s 2026-03-09T16:07:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:40 vm09 bash[22983]: audit 2026-03-09T16:07:39.038047+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:40 vm09 bash[22983]: audit 2026-03-09T16:07:39.038047+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:40 vm09 bash[22983]: cluster 2026-03-09T16:07:39.042623+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T16:07:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:40 vm09 bash[22983]: cluster 2026-03-09T16:07:39.042623+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T16:07:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:40 vm09 bash[22983]: audit 2026-03-09T16:07:39.045780+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:40 vm09 bash[22983]: audit 2026-03-09T16:07:39.045780+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:40 vm09 bash[22983]: audit 2026-03-09T16:07:39.049975+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:40 vm09 bash[22983]: audit 2026-03-09T16:07:39.049975+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:40 vm01 bash[28152]: cluster 2026-03-09T16:07:38.835077+0000 mgr.y (mgr.14520) 535 : cluster [DBG] pgmap v910: 268 pgs: 11 unknown, 257 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:40 vm01 bash[28152]: cluster 2026-03-09T16:07:38.835077+0000 mgr.y (mgr.14520) 535 : cluster [DBG] pgmap v910: 268 pgs: 11 unknown, 257 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:40 vm01 bash[28152]: audit 2026-03-09T16:07:39.038047+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:40 vm01 bash[28152]: audit 2026-03-09T16:07:39.038047+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:40 vm01 bash[28152]: cluster 2026-03-09T16:07:39.042623+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:40 vm01 bash[28152]: cluster 2026-03-09T16:07:39.042623+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:40 vm01 bash[28152]: audit 2026-03-09T16:07:39.045780+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:40 vm01 bash[28152]: audit 2026-03-09T16:07:39.045780+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:40 vm01 bash[28152]: audit 2026-03-09T16:07:39.049975+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:40 vm01 bash[28152]: audit 2026-03-09T16:07:39.049975+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:40 vm01 bash[20728]: cluster 2026-03-09T16:07:38.835077+0000 mgr.y (mgr.14520) 535 : cluster [DBG] pgmap v910: 268 pgs: 11 unknown, 257 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:40 vm01 bash[20728]: cluster 2026-03-09T16:07:38.835077+0000 mgr.y (mgr.14520) 535 : cluster [DBG] pgmap v910: 268 pgs: 11 unknown, 257 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:40 vm01 bash[20728]: audit 2026-03-09T16:07:39.038047+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:40 vm01 bash[20728]: audit 2026-03-09T16:07:39.038047+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:40 vm01 bash[20728]: cluster 2026-03-09T16:07:39.042623+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:40 vm01 bash[20728]: cluster 2026-03-09T16:07:39.042623+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:40 vm01 bash[20728]: audit 2026-03-09T16:07:39.045780+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:40 vm01 bash[20728]: audit 2026-03-09T16:07:39.045780+0000 mon.c (mon.2) 553 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:40 vm01 bash[20728]: audit 2026-03-09T16:07:39.049975+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:40 vm01 bash[20728]: audit 2026-03-09T16:07:39.049975+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]: dispatch 2026-03-09T16:07:41.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:41 vm09 bash[22983]: cluster 2026-03-09T16:07:40.038022+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:41.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:41 vm09 bash[22983]: cluster 2026-03-09T16:07:40.038022+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:41.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:41 vm09 bash[22983]: audit 2026-03-09T16:07:40.040930+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]': finished 2026-03-09T16:07:41.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:41 vm09 bash[22983]: audit 2026-03-09T16:07:40.040930+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]': finished 2026-03-09T16:07:41.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:41 vm09 bash[22983]: cluster 2026-03-09T16:07:40.050504+0000 mon.a (mon.0) 3303 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T16:07:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:41 vm09 bash[22983]: cluster 2026-03-09T16:07:40.050504+0000 mon.a (mon.0) 3303 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:41 vm01 bash[28152]: cluster 2026-03-09T16:07:40.038022+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:41 vm01 bash[28152]: cluster 2026-03-09T16:07:40.038022+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:41 vm01 bash[28152]: audit 2026-03-09T16:07:40.040930+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]': finished 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:41 vm01 bash[28152]: audit 2026-03-09T16:07:40.040930+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]': finished 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:41 vm01 bash[28152]: cluster 2026-03-09T16:07:40.050504+0000 mon.a (mon.0) 3303 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:41 vm01 bash[28152]: cluster 2026-03-09T16:07:40.050504+0000 mon.a (mon.0) 3303 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:41 vm01 bash[20728]: cluster 2026-03-09T16:07:40.038022+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:41 vm01 bash[20728]: cluster 2026-03-09T16:07:40.038022+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:41 vm01 bash[20728]: audit 2026-03-09T16:07:40.040930+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]': finished 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:41 vm01 bash[20728]: audit 2026-03-09T16:07:40.040930+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-118", "mode": "writeback"}]': finished 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:41 vm01 bash[20728]: cluster 2026-03-09T16:07:40.050504+0000 mon.a (mon.0) 3303 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T16:07:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:41 vm01 bash[20728]: cluster 2026-03-09T16:07:40.050504+0000 mon.a (mon.0) 3303 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-09T16:07:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: cluster 2026-03-09T16:07:40.835347+0000 mgr.y (mgr.14520) 536 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:07:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: cluster 2026-03-09T16:07:40.835347+0000 mgr.y (mgr.14520) 536 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:07:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: cluster 2026-03-09T16:07:41.080629+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T16:07:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: cluster 2026-03-09T16:07:41.080629+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T16:07:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:41.119812+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:41.119812+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:41.120055+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:41.120055+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:41.690087+0000 mon.a (mon.0) 3306 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:07:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:41.690087+0000 mon.a (mon.0) 3306 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:07:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:42.016957+0000 mon.a (mon.0) 3307 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:07:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:42.016957+0000 mon.a (mon.0) 3307 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:07:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:42.017548+0000 mon.a (mon.0) 3308 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:07:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:42.017548+0000 mon.a (mon.0) 3308 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:07:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:42.022397+0000 mon.a (mon.0) 3309 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:42.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:42 vm09 bash[22983]: audit 2026-03-09T16:07:42.022397+0000 mon.a (mon.0) 3309 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: cluster 2026-03-09T16:07:40.835347+0000 mgr.y (mgr.14520) 536 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 
KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: cluster 2026-03-09T16:07:40.835347+0000 mgr.y (mgr.14520) 536 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: cluster 2026-03-09T16:07:41.080629+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: cluster 2026-03-09T16:07:41.080629+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:41.119812+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:41.119812+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:41.120055+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:41.120055+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:41.690087+0000 mon.a (mon.0) 3306 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:41.690087+0000 mon.a (mon.0) 3306 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:42.016957+0000 mon.a (mon.0) 3307 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:42.016957+0000 mon.a (mon.0) 3307 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:42.017548+0000 mon.a (mon.0) 3308 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:42.017548+0000 mon.a (mon.0) 3308 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:42.022397+0000 mon.a (mon.0) 3309 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:42 vm01 bash[28152]: audit 2026-03-09T16:07:42.022397+0000 mon.a (mon.0) 3309 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: cluster 2026-03-09T16:07:40.835347+0000 mgr.y (mgr.14520) 536 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: cluster 2026-03-09T16:07:40.835347+0000 mgr.y (mgr.14520) 536 : cluster [DBG] pgmap v913: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:07:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: cluster 2026-03-09T16:07:41.080629+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: cluster 2026-03-09T16:07:41.080629+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:41.119812+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:41.119812+0000 mon.c (mon.2) 554 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:41.120055+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:41.120055+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:41.690087+0000 mon.a (mon.0) 3306 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:41.690087+0000 mon.a (mon.0) 3306 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:42.016957+0000 mon.a (mon.0) 3307 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:42.016957+0000 mon.a (mon.0) 3307 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:42.017548+0000 mon.a (mon.0) 3308 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:42.017548+0000 mon.a (mon.0) 3308 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:42.022397+0000 mon.a (mon.0) 3309 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:42 vm01 bash[20728]: audit 2026-03-09T16:07:42.022397+0000 mon.a (mon.0) 3309 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:43 vm01 bash[28152]: audit 2026-03-09T16:07:42.065347+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:43 vm01 bash[28152]: audit 2026-03-09T16:07:42.065347+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:43 vm01 bash[28152]: cluster 2026-03-09T16:07:42.070645+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:43 vm01 bash[28152]: cluster 2026-03-09T16:07:42.070645+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:43 vm01 bash[28152]: audit 2026-03-09T16:07:42.077717+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:43 vm01 bash[28152]: audit 2026-03-09T16:07:42.077717+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:43 vm01 bash[28152]: audit 2026-03-09T16:07:42.085401+0000 mon.a (mon.0) 3312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:43 vm01 bash[28152]: audit 2026-03-09T16:07:42.085401+0000 mon.a (mon.0) 3312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:07:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:07:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:43 vm01 bash[20728]: audit 2026-03-09T16:07:42.065347+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:43 vm01 bash[20728]: audit 2026-03-09T16:07:42.065347+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:43 vm01 bash[20728]: cluster 2026-03-09T16:07:42.070645+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:43 vm01 bash[20728]: cluster 2026-03-09T16:07:42.070645+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:43 vm01 bash[20728]: audit 2026-03-09T16:07:42.077717+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:43 vm01 bash[20728]: audit 2026-03-09T16:07:42.077717+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:43 vm01 bash[20728]: audit 2026-03-09T16:07:42.085401+0000 mon.a (mon.0) 3312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:43.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:43 vm01 bash[20728]: audit 2026-03-09T16:07:42.085401+0000 mon.a (mon.0) 3312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:43.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:43 vm09 bash[22983]: audit 2026-03-09T16:07:42.065347+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:43.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:43 vm09 bash[22983]: audit 2026-03-09T16:07:42.065347+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:43.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:43 vm09 bash[22983]: cluster 2026-03-09T16:07:42.070645+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T16:07:43.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:43 vm09 bash[22983]: cluster 2026-03-09T16:07:42.070645+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in 2026-03-09T16:07:43.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:43 vm09 bash[22983]: audit 2026-03-09T16:07:42.077717+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:43.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:43 vm09 bash[22983]: audit 2026-03-09T16:07:42.077717+0000 mon.c (mon.2) 555 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:43.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:43 vm09 bash[22983]: audit 2026-03-09T16:07:42.085401+0000 mon.a (mon.0) 3312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:43.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:43 vm09 bash[22983]: audit 2026-03-09T16:07:42.085401+0000 mon.a (mon.0) 3312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]: dispatch 2026-03-09T16:07:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:44 vm09 bash[22983]: cluster 2026-03-09T16:07:42.835677+0000 mgr.y (mgr.14520) 537 : cluster [DBG] pgmap v916: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:44 vm09 bash[22983]: cluster 2026-03-09T16:07:42.835677+0000 mgr.y (mgr.14520) 537 : cluster [DBG] pgmap v916: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:44 vm09 bash[22983]: cluster 2026-03-09T16:07:43.079937+0000 mon.a (mon.0) 3313 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:44 vm09 bash[22983]: cluster 2026-03-09T16:07:43.079937+0000 mon.a (mon.0) 3313 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:44 vm09 bash[22983]: audit 2026-03-09T16:07:43.086682+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:44 vm09 bash[22983]: audit 2026-03-09T16:07:43.086682+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:44 vm09 bash[22983]: cluster 2026-03-09T16:07:43.109431+0000 mon.a (mon.0) 3315 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T16:07:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:44 vm09 bash[22983]: cluster 2026-03-09T16:07:43.109431+0000 mon.a (mon.0) 3315 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T16:07:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:44 vm09 bash[22983]: cluster 2026-03-09T16:07:44.001453+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T16:07:44.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:44 vm09 bash[22983]: cluster 2026-03-09T16:07:44.001453+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:44 vm01 bash[28152]: cluster 2026-03-09T16:07:42.835677+0000 mgr.y (mgr.14520) 537 : cluster [DBG] pgmap v916: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:44 vm01 bash[28152]: cluster 2026-03-09T16:07:42.835677+0000 mgr.y (mgr.14520) 537 : cluster [DBG] pgmap v916: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:44 vm01 bash[28152]: cluster 2026-03-09T16:07:43.079937+0000 mon.a (mon.0) 3313 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:44 vm01 bash[28152]: cluster 2026-03-09T16:07:43.079937+0000 mon.a (mon.0) 3313 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:44 vm01 bash[28152]: audit 2026-03-09T16:07:43.086682+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:44 vm01 bash[28152]: audit 2026-03-09T16:07:43.086682+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:44 vm01 bash[28152]: cluster 2026-03-09T16:07:43.109431+0000 mon.a (mon.0) 3315 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:44 vm01 bash[28152]: cluster 2026-03-09T16:07:43.109431+0000 mon.a (mon.0) 3315 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:44 vm01 bash[28152]: cluster 2026-03-09T16:07:44.001453+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:44 vm01 bash[28152]: cluster 2026-03-09T16:07:44.001453+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:44 vm01 bash[20728]: cluster 2026-03-09T16:07:42.835677+0000 mgr.y (mgr.14520) 537 : cluster [DBG] pgmap v916: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:44 vm01 bash[20728]: cluster 2026-03-09T16:07:42.835677+0000 mgr.y (mgr.14520) 537 : cluster [DBG] pgmap v916: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:44 vm01 bash[20728]: cluster 2026-03-09T16:07:43.079937+0000 mon.a (mon.0) 3313 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:44 vm01 bash[20728]: cluster 2026-03-09T16:07:43.079937+0000 mon.a (mon.0) 3313 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:44 vm01 bash[20728]: audit 2026-03-09T16:07:43.086682+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:44 vm01 bash[20728]: audit 2026-03-09T16:07:43.086682+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-118"}]': finished 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:44 vm01 bash[20728]: cluster 2026-03-09T16:07:43.109431+0000 mon.a (mon.0) 3315 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:44 vm01 bash[20728]: cluster 2026-03-09T16:07:43.109431+0000 mon.a (mon.0) 3315 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:44 vm01 bash[20728]: cluster 2026-03-09T16:07:44.001453+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T16:07:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:44 vm01 bash[20728]: cluster 2026-03-09T16:07:44.001453+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in 2026-03-09T16:07:45.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: audit 2026-03-09T16:07:44.422194+0000 mon.a (mon.0) 3317 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:45.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: audit 2026-03-09T16:07:44.422194+0000 mon.a (mon.0) 3317 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:45.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: audit 2026-03-09T16:07:44.423628+0000 mon.a (mon.0) 3318 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:45.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: audit 2026-03-09T16:07:44.423628+0000 mon.a (mon.0) 3318 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:45.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: cluster 2026-03-09T16:07:44.836338+0000 mgr.y (mgr.14520) 538 : cluster [DBG] pgmap v919: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:45.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: cluster 2026-03-09T16:07:44.836338+0000 mgr.y (mgr.14520) 538 : cluster [DBG] pgmap v919: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:45.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: cluster 2026-03-09T16:07:45.022775+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 2026-03-09T16:07:45.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: cluster 2026-03-09T16:07:45.022775+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 2026-03-09T16:07:45.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: audit 2026-03-09T16:07:45.026614+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:45.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: audit 2026-03-09T16:07:45.026614+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:45.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: audit 2026-03-09T16:07:45.027940+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:45.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:45 vm09 bash[22983]: audit 2026-03-09T16:07:45.027940+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: audit 2026-03-09T16:07:44.422194+0000 mon.a (mon.0) 3317 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: audit 2026-03-09T16:07:44.422194+0000 mon.a (mon.0) 3317 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: audit 2026-03-09T16:07:44.423628+0000 mon.a (mon.0) 3318 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: audit 2026-03-09T16:07:44.423628+0000 mon.a (mon.0) 3318 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: cluster 2026-03-09T16:07:44.836338+0000 mgr.y (mgr.14520) 538 : cluster [DBG] pgmap v919: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: cluster 2026-03-09T16:07:44.836338+0000 mgr.y (mgr.14520) 538 : cluster [DBG] pgmap v919: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: cluster 2026-03-09T16:07:45.022775+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: cluster 2026-03-09T16:07:45.022775+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: audit 2026-03-09T16:07:45.026614+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: audit 2026-03-09T16:07:45.026614+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: audit 2026-03-09T16:07:45.027940+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:45 vm01 bash[28152]: audit 2026-03-09T16:07:45.027940+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: audit 2026-03-09T16:07:44.422194+0000 mon.a (mon.0) 3317 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: audit 2026-03-09T16:07:44.422194+0000 mon.a (mon.0) 3317 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: audit 2026-03-09T16:07:44.423628+0000 mon.a (mon.0) 3318 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: audit 2026-03-09T16:07:44.423628+0000 mon.a (mon.0) 3318 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: cluster 2026-03-09T16:07:44.836338+0000 mgr.y (mgr.14520) 538 : cluster [DBG] pgmap v919: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: cluster 2026-03-09T16:07:44.836338+0000 mgr.y (mgr.14520) 538 : cluster [DBG] pgmap v919: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: cluster 2026-03-09T16:07:45.022775+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: cluster 2026-03-09T16:07:45.022775+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: audit 2026-03-09T16:07:45.026614+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: audit 2026-03-09T16:07:45.026614+0000 mon.c (mon.2) 556 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: audit 2026-03-09T16:07:45.027940+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:45.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:45 vm01 bash[20728]: audit 2026-03-09T16:07:45.027940+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:47.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:07:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:07:47.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:47 vm09 bash[22983]: audit 2026-03-09T16:07:46.006705+0000 mon.a (mon.0) 3321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:47.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:47 vm09 bash[22983]: audit 2026-03-09T16:07:46.006705+0000 mon.a (mon.0) 3321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:47.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:47 vm09 bash[22983]: cluster 2026-03-09T16:07:46.021215+0000 mon.a (mon.0) 3322 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T16:07:47.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:47 vm09 bash[22983]: cluster 2026-03-09T16:07:46.021215+0000 mon.a (mon.0) 3322 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T16:07:47.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:47 vm09 bash[22983]: audit 2026-03-09T16:07:46.051730+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:47.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:47 vm09 bash[22983]: audit 2026-03-09T16:07:46.051730+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:47 vm09 bash[22983]: audit 2026-03-09T16:07:46.052004+0000 mon.a (mon.0) 3323 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:47.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:47 vm09 bash[22983]: audit 2026-03-09T16:07:46.052004+0000 mon.a (mon.0) 3323 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:47 vm01 bash[28152]: audit 2026-03-09T16:07:46.006705+0000 mon.a (mon.0) 3321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:47 vm01 bash[28152]: audit 2026-03-09T16:07:46.006705+0000 mon.a (mon.0) 3321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:47 vm01 bash[28152]: cluster 2026-03-09T16:07:46.021215+0000 mon.a (mon.0) 3322 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:47 vm01 bash[28152]: cluster 2026-03-09T16:07:46.021215+0000 mon.a (mon.0) 3322 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:47 vm01 bash[28152]: audit 2026-03-09T16:07:46.051730+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:47 vm01 bash[28152]: audit 2026-03-09T16:07:46.051730+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:47 vm01 bash[28152]: audit 2026-03-09T16:07:46.052004+0000 mon.a (mon.0) 3323 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:47 vm01 bash[28152]: audit 2026-03-09T16:07:46.052004+0000 mon.a (mon.0) 3323 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:47 vm01 bash[20728]: audit 2026-03-09T16:07:46.006705+0000 mon.a (mon.0) 3321 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:47 vm01 bash[20728]: audit 2026-03-09T16:07:46.006705+0000 mon.a (mon.0) 3321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:47 vm01 bash[20728]: cluster 2026-03-09T16:07:46.021215+0000 mon.a (mon.0) 3322 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:47 vm01 bash[20728]: cluster 2026-03-09T16:07:46.021215+0000 mon.a (mon.0) 3322 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:47 vm01 bash[20728]: audit 2026-03-09T16:07:46.051730+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:47 vm01 bash[20728]: audit 2026-03-09T16:07:46.051730+0000 mon.c (mon.2) 557 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:47 vm01 bash[20728]: audit 2026-03-09T16:07:46.052004+0000 mon.a (mon.0) 3323 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:47.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:47 vm01 bash[20728]: audit 2026-03-09T16:07:46.052004+0000 mon.a (mon.0) 3323 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: audit 2026-03-09T16:07:46.752323+0000 mgr.y (mgr.14520) 539 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: audit 2026-03-09T16:07:46.752323+0000 mgr.y (mgr.14520) 539 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: cluster 2026-03-09T16:07:46.836708+0000 mgr.y (mgr.14520) 540 : cluster [DBG] pgmap v922: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: cluster 2026-03-09T16:07:46.836708+0000 mgr.y (mgr.14520) 540 : cluster [DBG] pgmap v922: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: audit 2026-03-09T16:07:47.065301+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: audit 2026-03-09T16:07:47.065301+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: cluster 2026-03-09T16:07:47.075978+0000 mon.a (mon.0) 3325 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: cluster 2026-03-09T16:07:47.075978+0000 mon.a (mon.0) 3325 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: audit 2026-03-09T16:07:47.076808+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: audit 2026-03-09T16:07:47.076808+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: audit 2026-03-09T16:07:47.077585+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:48 vm01 bash[28152]: audit 2026-03-09T16:07:47.077585+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: audit 2026-03-09T16:07:46.752323+0000 mgr.y (mgr.14520) 539 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: audit 2026-03-09T16:07:46.752323+0000 mgr.y (mgr.14520) 539 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:48.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: cluster 2026-03-09T16:07:46.836708+0000 mgr.y (mgr.14520) 540 : cluster [DBG] pgmap v922: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: cluster 2026-03-09T16:07:46.836708+0000 mgr.y (mgr.14520) 540 : cluster [DBG] pgmap v922: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: audit 2026-03-09T16:07:47.065301+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: audit 2026-03-09T16:07:47.065301+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: cluster 2026-03-09T16:07:47.075978+0000 mon.a (mon.0) 3325 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T16:07:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: cluster 2026-03-09T16:07:47.075978+0000 mon.a (mon.0) 3325 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T16:07:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: audit 2026-03-09T16:07:47.076808+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: audit 2026-03-09T16:07:47.076808+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: audit 2026-03-09T16:07:47.077585+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:48.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:48 vm01 bash[20728]: audit 2026-03-09T16:07:47.077585+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: audit 2026-03-09T16:07:46.752323+0000 mgr.y (mgr.14520) 539 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: audit 2026-03-09T16:07:46.752323+0000 mgr.y (mgr.14520) 539 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: cluster 2026-03-09T16:07:46.836708+0000 mgr.y (mgr.14520) 540 : cluster [DBG] pgmap v922: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: cluster 2026-03-09T16:07:46.836708+0000 mgr.y (mgr.14520) 540 : cluster [DBG] pgmap v922: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: audit 2026-03-09T16:07:47.065301+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: audit 2026-03-09T16:07:47.065301+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: cluster 2026-03-09T16:07:47.075978+0000 mon.a (mon.0) 3325 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: cluster 2026-03-09T16:07:47.075978+0000 mon.a (mon.0) 3325 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: audit 2026-03-09T16:07:47.076808+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: audit 2026-03-09T16:07:47.076808+0000 mon.c (mon.2) 558 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: audit 2026-03-09T16:07:47.077585+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:48 vm09 bash[22983]: audit 2026-03-09T16:07:47.077585+0000 mon.a (mon.0) 3326 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: audit 2026-03-09T16:07:48.123860+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: audit 2026-03-09T16:07:48.123860+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: audit 2026-03-09T16:07:48.129149+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: audit 2026-03-09T16:07:48.129149+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: cluster 2026-03-09T16:07:48.129269+0000 mon.a (mon.0) 3328 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: cluster 2026-03-09T16:07:48.129269+0000 mon.a (mon.0) 3328 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: audit 2026-03-09T16:07:48.131343+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: audit 2026-03-09T16:07:48.131343+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: cluster 2026-03-09T16:07:48.837256+0000 mgr.y (mgr.14520) 541 : cluster [DBG] pgmap v925: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: cluster 2026-03-09T16:07:48.837256+0000 mgr.y (mgr.14520) 541 : cluster [DBG] pgmap v925: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: cluster 2026-03-09T16:07:49.123632+0000 mon.a (mon.0) 3330 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: cluster 2026-03-09T16:07:49.123632+0000 mon.a (mon.0) 3330 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: audit 2026-03-09T16:07:49.126865+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]': finished 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: audit 2026-03-09T16:07:49.126865+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]': finished 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: cluster 2026-03-09T16:07:49.137141+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:49 vm01 bash[28152]: cluster 2026-03-09T16:07:49.137141+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: audit 2026-03-09T16:07:48.123860+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: audit 2026-03-09T16:07:48.123860+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: audit 2026-03-09T16:07:48.129149+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: audit 2026-03-09T16:07:48.129149+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: cluster 2026-03-09T16:07:48.129269+0000 mon.a (mon.0) 3328 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: cluster 2026-03-09T16:07:48.129269+0000 mon.a (mon.0) 3328 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T16:07:49.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: audit 2026-03-09T16:07:48.131343+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: audit 2026-03-09T16:07:48.131343+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: cluster 2026-03-09T16:07:48.837256+0000 mgr.y (mgr.14520) 541 : cluster [DBG] pgmap v925: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:07:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: cluster 2026-03-09T16:07:48.837256+0000 mgr.y (mgr.14520) 541 : cluster [DBG] pgmap v925: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:07:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: cluster 2026-03-09T16:07:49.123632+0000 mon.a (mon.0) 3330 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: cluster 2026-03-09T16:07:49.123632+0000 mon.a (mon.0) 3330 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: audit 2026-03-09T16:07:49.126865+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]': finished 2026-03-09T16:07:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: audit 2026-03-09T16:07:49.126865+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]': finished 2026-03-09T16:07:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: cluster 2026-03-09T16:07:49.137141+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T16:07:49.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:49 vm01 bash[20728]: cluster 2026-03-09T16:07:49.137141+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: audit 2026-03-09T16:07:48.123860+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: audit 2026-03-09T16:07:48.123860+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: audit 2026-03-09T16:07:48.129149+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: audit 2026-03-09T16:07:48.129149+0000 mon.c (mon.2) 559 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: cluster 2026-03-09T16:07:48.129269+0000 mon.a (mon.0) 3328 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: cluster 2026-03-09T16:07:48.129269+0000 mon.a (mon.0) 3328 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: audit 2026-03-09T16:07:48.131343+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: audit 2026-03-09T16:07:48.131343+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]: dispatch 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: cluster 2026-03-09T16:07:48.837256+0000 mgr.y (mgr.14520) 541 : cluster [DBG] pgmap v925: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: cluster 2026-03-09T16:07:48.837256+0000 mgr.y (mgr.14520) 541 : cluster [DBG] pgmap v925: 268 pgs: 17 unknown, 251 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: cluster 2026-03-09T16:07:49.123632+0000 mon.a (mon.0) 3330 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: cluster 2026-03-09T16:07:49.123632+0000 mon.a (mon.0) 3330 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: audit 2026-03-09T16:07:49.126865+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]': finished 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: audit 2026-03-09T16:07:49.126865+0000 mon.a (mon.0) 3331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-120", "mode": "writeback"}]': finished 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: cluster 2026-03-09T16:07:49.137141+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T16:07:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:49 vm09 bash[22983]: cluster 2026-03-09T16:07:49.137141+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-09T16:07:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:50 vm01 bash[28152]: audit 2026-03-09T16:07:49.208989+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:50 vm01 bash[28152]: audit 2026-03-09T16:07:49.208989+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:50 vm01 bash[28152]: audit 2026-03-09T16:07:49.209309+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:50.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:50 vm01 bash[28152]: audit 2026-03-09T16:07:49.209309+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:50.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:50 vm01 bash[20728]: audit 2026-03-09T16:07:49.208989+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:50.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:50 vm01 bash[20728]: audit 2026-03-09T16:07:49.208989+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:50.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:50 vm01 bash[20728]: audit 2026-03-09T16:07:49.209309+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:50.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:50 vm01 bash[20728]: audit 2026-03-09T16:07:49.209309+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:50 vm09 bash[22983]: audit 2026-03-09T16:07:49.208989+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:50 vm09 bash[22983]: audit 2026-03-09T16:07:49.208989+0000 mon.c (mon.2) 560 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:50 vm09 bash[22983]: audit 2026-03-09T16:07:49.209309+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:50 vm09 bash[22983]: audit 2026-03-09T16:07:49.209309+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: audit 2026-03-09T16:07:50.154856+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: audit 2026-03-09T16:07:50.154856+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: audit 2026-03-09T16:07:50.158598+0000 mon.c (mon.2) 561 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: audit 2026-03-09T16:07:50.158598+0000 mon.c (mon.2) 561 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: cluster 2026-03-09T16:07:50.158969+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: cluster 2026-03-09T16:07:50.158969+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: audit 2026-03-09T16:07:50.159903+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: audit 2026-03-09T16:07:50.159903+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: cluster 2026-03-09T16:07:50.837572+0000 mgr.y (mgr.14520) 542 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: cluster 2026-03-09T16:07:50.837572+0000 mgr.y (mgr.14520) 542 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: cluster 2026-03-09T16:07:51.155106+0000 mon.a (mon.0) 3337 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: cluster 2026-03-09T16:07:51.155106+0000 mon.a (mon.0) 3337 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: audit 2026-03-09T16:07:51.164177+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: audit 2026-03-09T16:07:51.164177+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: cluster 2026-03-09T16:07:51.168789+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T16:07:51.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:51 vm09 bash[22983]: cluster 2026-03-09T16:07:51.168789+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T16:07:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: audit 2026-03-09T16:07:50.154856+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: audit 2026-03-09T16:07:50.154856+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: audit 2026-03-09T16:07:50.158598+0000 mon.c (mon.2) 561 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: audit 2026-03-09T16:07:50.158598+0000 mon.c (mon.2) 561 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: cluster 2026-03-09T16:07:50.158969+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T16:07:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: cluster 2026-03-09T16:07:50.158969+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: audit 2026-03-09T16:07:50.159903+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: audit 2026-03-09T16:07:50.159903+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: cluster 2026-03-09T16:07:50.837572+0000 mgr.y (mgr.14520) 542 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: cluster 2026-03-09T16:07:50.837572+0000 mgr.y (mgr.14520) 542 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: cluster 2026-03-09T16:07:51.155106+0000 mon.a (mon.0) 3337 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: cluster 2026-03-09T16:07:51.155106+0000 mon.a (mon.0) 3337 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: audit 2026-03-09T16:07:51.164177+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: audit 2026-03-09T16:07:51.164177+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: cluster 2026-03-09T16:07:51.168789+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:51 vm01 bash[28152]: cluster 2026-03-09T16:07:51.168789+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: audit 2026-03-09T16:07:50.154856+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: audit 2026-03-09T16:07:50.154856+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: audit 2026-03-09T16:07:50.158598+0000 mon.c (mon.2) 561 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: audit 2026-03-09T16:07:50.158598+0000 mon.c (mon.2) 561 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: cluster 2026-03-09T16:07:50.158969+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: cluster 2026-03-09T16:07:50.158969+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: audit 2026-03-09T16:07:50.159903+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: audit 2026-03-09T16:07:50.159903+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]: dispatch 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: cluster 2026-03-09T16:07:50.837572+0000 mgr.y (mgr.14520) 542 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: cluster 2026-03-09T16:07:50.837572+0000 mgr.y (mgr.14520) 542 : cluster [DBG] pgmap v928: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: cluster 2026-03-09T16:07:51.155106+0000 mon.a (mon.0) 3337 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: cluster 2026-03-09T16:07:51.155106+0000 mon.a (mon.0) 3337 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: audit 2026-03-09T16:07:51.164177+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: audit 2026-03-09T16:07:51.164177+0000 mon.a (mon.0) 3338 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-120"}]': finished 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: cluster 2026-03-09T16:07:51.168789+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T16:07:51.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:51 vm01 bash[20728]: cluster 2026-03-09T16:07:51.168789+0000 mon.a (mon.0) 3339 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-09T16:07:53.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:07:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:07:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:07:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:53 vm09 bash[22983]: cluster 2026-03-09T16:07:52.198404+0000 mon.a (mon.0) 3340 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T16:07:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:53 vm09 bash[22983]: cluster 2026-03-09T16:07:52.198404+0000 mon.a (mon.0) 3340 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T16:07:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:53 vm09 bash[22983]: cluster 2026-03-09T16:07:52.837927+0000 mgr.y (mgr.14520) 543 : cluster [DBG] pgmap v931: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:53 vm09 bash[22983]: cluster 2026-03-09T16:07:52.837927+0000 mgr.y (mgr.14520) 543 : cluster [DBG] pgmap v931: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:53 vm01 bash[28152]: cluster 2026-03-09T16:07:52.198404+0000 mon.a (mon.0) 3340 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T16:07:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:53 vm01 bash[28152]: cluster 2026-03-09T16:07:52.198404+0000 mon.a (mon.0) 3340 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T16:07:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:53 vm01 bash[28152]: cluster 2026-03-09T16:07:52.837927+0000 mgr.y (mgr.14520) 543 : cluster [DBG] pgmap v931: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:53 vm01 bash[28152]: cluster 2026-03-09T16:07:52.837927+0000 mgr.y (mgr.14520) 543 : cluster [DBG] pgmap v931: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:53 vm01 bash[20728]: cluster 2026-03-09T16:07:52.198404+0000 mon.a (mon.0) 3340 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T16:07:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:53 vm01 bash[20728]: cluster 2026-03-09T16:07:52.198404+0000 mon.a (mon.0) 3340 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-09T16:07:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:53 vm01 bash[20728]: cluster 2026-03-09T16:07:52.837927+0000 mgr.y (mgr.14520) 543 : cluster [DBG] pgmap v931: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:53.676 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:53 vm01 bash[20728]: cluster 2026-03-09T16:07:52.837927+0000 mgr.y (mgr.14520) 543 : cluster [DBG] pgmap v931: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: cluster 2026-03-09T16:07:53.212702+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T16:07:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: cluster 2026-03-09T16:07:53.212702+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: audit 2026-03-09T16:07:53.240801+0000 mon.c (mon.2) 562 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: audit 2026-03-09T16:07:53.240801+0000 mon.c (mon.2) 562 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: audit 2026-03-09T16:07:53.241110+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: audit 2026-03-09T16:07:53.241110+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: audit 2026-03-09T16:07:54.001708+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: audit 2026-03-09T16:07:54.001708+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: cluster 2026-03-09T16:07:54.010199+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: cluster 2026-03-09T16:07:54.010199+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: audit 2026-03-09T16:07:54.054028+0000 mon.c (mon.2) 563 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: audit 2026-03-09T16:07:54.054028+0000 mon.c (mon.2) 563 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: audit 2026-03-09T16:07:54.054256+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:54.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:54 vm09 bash[22983]: audit 2026-03-09T16:07:54.054256+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: cluster 2026-03-09T16:07:53.212702+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: cluster 2026-03-09T16:07:53.212702+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: audit 2026-03-09T16:07:53.240801+0000 mon.c (mon.2) 562 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: audit 2026-03-09T16:07:53.240801+0000 mon.c (mon.2) 562 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: audit 2026-03-09T16:07:53.241110+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: audit 2026-03-09T16:07:53.241110+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: audit 2026-03-09T16:07:54.001708+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: audit 2026-03-09T16:07:54.001708+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: cluster 2026-03-09T16:07:54.010199+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: cluster 2026-03-09T16:07:54.010199+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: audit 2026-03-09T16:07:54.054028+0000 mon.c (mon.2) 563 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: audit 2026-03-09T16:07:54.054028+0000 mon.c (mon.2) 563 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: audit 2026-03-09T16:07:54.054256+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:54 vm01 bash[28152]: audit 2026-03-09T16:07:54.054256+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: cluster 2026-03-09T16:07:53.212702+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T16:07:54.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: cluster 2026-03-09T16:07:53.212702+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: audit 2026-03-09T16:07:53.240801+0000 mon.c (mon.2) 562 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: audit 2026-03-09T16:07:53.240801+0000 mon.c (mon.2) 562 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: audit 2026-03-09T16:07:53.241110+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: audit 2026-03-09T16:07:53.241110+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: audit 2026-03-09T16:07:54.001708+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: audit 2026-03-09T16:07:54.001708+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: cluster 2026-03-09T16:07:54.010199+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: cluster 2026-03-09T16:07:54.010199+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: audit 2026-03-09T16:07:54.054028+0000 mon.c (mon.2) 563 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: audit 2026-03-09T16:07:54.054028+0000 mon.c (mon.2) 563 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: audit 2026-03-09T16:07:54.054256+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:54.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:54 vm01 bash[20728]: audit 2026-03-09T16:07:54.054256+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:07:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:56 vm09 bash[22983]: cluster 2026-03-09T16:07:54.838697+0000 mgr.y (mgr.14520) 544 : cluster [DBG] pgmap v934: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:07:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:56 vm09 bash[22983]: cluster 2026-03-09T16:07:54.838697+0000 mgr.y (mgr.14520) 544 : cluster [DBG] pgmap v934: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:07:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:56 vm09 bash[22983]: audit 2026-03-09T16:07:55.005657+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:56 vm09 bash[22983]: audit 2026-03-09T16:07:55.005657+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:56 vm09 bash[22983]: audit 2026-03-09T16:07:55.015817+0000 mon.c (mon.2) 564 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:56 vm09 bash[22983]: audit 2026-03-09T16:07:55.015817+0000 mon.c (mon.2) 564 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:56 vm09 bash[22983]: cluster 2026-03-09T16:07:55.017376+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T16:07:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:56 vm09 bash[22983]: cluster 2026-03-09T16:07:55.017376+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T16:07:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:56 vm09 bash[22983]: audit 2026-03-09T16:07:55.018439+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:56.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:56 vm09 bash[22983]: audit 2026-03-09T16:07:55.018439+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:56.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:56 vm01 bash[28152]: cluster 2026-03-09T16:07:54.838697+0000 mgr.y (mgr.14520) 544 : cluster [DBG] pgmap v934: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:56 vm01 bash[28152]: cluster 2026-03-09T16:07:54.838697+0000 mgr.y (mgr.14520) 544 : cluster [DBG] pgmap v934: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:56 vm01 bash[28152]: audit 2026-03-09T16:07:55.005657+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:56 vm01 bash[28152]: audit 2026-03-09T16:07:55.005657+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:56 vm01 bash[28152]: audit 2026-03-09T16:07:55.015817+0000 mon.c (mon.2) 564 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:56 vm01 bash[28152]: audit 2026-03-09T16:07:55.015817+0000 mon.c (mon.2) 564 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:56 vm01 bash[28152]: cluster 2026-03-09T16:07:55.017376+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:56 vm01 bash[28152]: cluster 2026-03-09T16:07:55.017376+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:56 vm01 bash[28152]: audit 2026-03-09T16:07:55.018439+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:56 vm01 bash[28152]: audit 2026-03-09T16:07:55.018439+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:56 vm01 bash[20728]: cluster 2026-03-09T16:07:54.838697+0000 mgr.y (mgr.14520) 544 : cluster [DBG] pgmap v934: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:56 vm01 bash[20728]: cluster 2026-03-09T16:07:54.838697+0000 mgr.y (mgr.14520) 544 : cluster [DBG] pgmap v934: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:56 vm01 bash[20728]: audit 2026-03-09T16:07:55.005657+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:56.494 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:56 vm01 bash[20728]: audit 2026-03-09T16:07:55.005657+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:07:56.495 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:56 vm01 bash[20728]: audit 2026-03-09T16:07:55.015817+0000 mon.c (mon.2) 564 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:56.495 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:56 vm01 bash[20728]: audit 2026-03-09T16:07:55.015817+0000 mon.c (mon.2) 564 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:56.495 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:56 vm01 bash[20728]: cluster 2026-03-09T16:07:55.017376+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T16:07:56.495 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:56 vm01 bash[20728]: cluster 2026-03-09T16:07:55.017376+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-09T16:07:56.495 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:56 vm01 bash[20728]: audit 2026-03-09T16:07:55.018439+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:56.495 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:56 vm01 bash[20728]: audit 2026-03-09T16:07:55.018439+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:57.101 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:07:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:07:57.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: audit 2026-03-09T16:07:56.071020+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: audit 2026-03-09T16:07:56.071020+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: cluster 2026-03-09T16:07:56.081888+0000 mon.a (mon.0) 3350 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: cluster 2026-03-09T16:07:56.081888+0000 mon.a (mon.0) 3350 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: audit 2026-03-09T16:07:56.082627+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: audit 2026-03-09T16:07:56.082627+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: audit 2026-03-09T16:07:56.083568+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: audit 2026-03-09T16:07:56.083568+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: cluster 2026-03-09T16:07:57.071334+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: cluster 2026-03-09T16:07:57.071334+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: audit 2026-03-09T16:07:57.075352+0000 mon.a (mon.0) 3353 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]': finished 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: audit 2026-03-09T16:07:57.075352+0000 mon.a (mon.0) 3353 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]': finished 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: cluster 2026-03-09T16:07:57.079681+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T16:07:57.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:57 vm09 bash[22983]: cluster 2026-03-09T16:07:57.079681+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T16:07:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: audit 2026-03-09T16:07:56.071020+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:07:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: audit 2026-03-09T16:07:56.071020+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:07:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: cluster 2026-03-09T16:07:56.081888+0000 mon.a (mon.0) 3350 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T16:07:57.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: cluster 2026-03-09T16:07:56.081888+0000 mon.a (mon.0) 3350 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: audit 2026-03-09T16:07:56.082627+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: audit 2026-03-09T16:07:56.082627+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: audit 2026-03-09T16:07:56.083568+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: audit 2026-03-09T16:07:56.083568+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: cluster 2026-03-09T16:07:57.071334+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: cluster 2026-03-09T16:07:57.071334+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: audit 2026-03-09T16:07:57.075352+0000 mon.a (mon.0) 3353 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]': finished 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: audit 2026-03-09T16:07:57.075352+0000 mon.a (mon.0) 3353 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]': finished 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: cluster 2026-03-09T16:07:57.079681+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:57 vm01 bash[28152]: cluster 2026-03-09T16:07:57.079681+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: audit 2026-03-09T16:07:56.071020+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: audit 2026-03-09T16:07:56.071020+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: cluster 2026-03-09T16:07:56.081888+0000 mon.a (mon.0) 3350 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: cluster 2026-03-09T16:07:56.081888+0000 mon.a (mon.0) 3350 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: audit 2026-03-09T16:07:56.082627+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: audit 2026-03-09T16:07:56.082627+0000 mon.c (mon.2) 565 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: audit 2026-03-09T16:07:56.083568+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: audit 2026-03-09T16:07:56.083568+0000 mon.a (mon.0) 3351 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]: dispatch 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: cluster 2026-03-09T16:07:57.071334+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: cluster 2026-03-09T16:07:57.071334+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: audit 2026-03-09T16:07:57.075352+0000 mon.a (mon.0) 3353 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]': finished 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: audit 2026-03-09T16:07:57.075352+0000 mon.a (mon.0) 3353 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-122", "mode": "writeback"}]': finished 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: cluster 2026-03-09T16:07:57.079681+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T16:07:57.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:57 vm01 bash[20728]: cluster 2026-03-09T16:07:57.079681+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-09T16:07:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:58 vm09 bash[22983]: audit 2026-03-09T16:07:56.755178+0000 mgr.y (mgr.14520) 545 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:58 vm09 bash[22983]: audit 2026-03-09T16:07:56.755178+0000 mgr.y (mgr.14520) 545 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:58 vm09 bash[22983]: cluster 2026-03-09T16:07:56.839353+0000 mgr.y (mgr.14520) 546 : cluster [DBG] pgmap v937: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:58 vm09 bash[22983]: cluster 2026-03-09T16:07:56.839353+0000 mgr.y (mgr.14520) 546 : cluster [DBG] pgmap v937: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:58 vm09 bash[22983]: audit 2026-03-09T16:07:57.155532+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:58 vm09 bash[22983]: audit 2026-03-09T16:07:57.155532+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:58 vm09 bash[22983]: audit 2026-03-09T16:07:57.155891+0000 mon.a (mon.0) 3355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:58.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:58 vm09 bash[22983]: audit 2026-03-09T16:07:57.155891+0000 mon.a (mon.0) 3355 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:58 vm01 bash[20728]: audit 2026-03-09T16:07:56.755178+0000 mgr.y (mgr.14520) 545 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:58 vm01 bash[20728]: audit 2026-03-09T16:07:56.755178+0000 mgr.y (mgr.14520) 545 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:58 vm01 bash[20728]: cluster 2026-03-09T16:07:56.839353+0000 mgr.y (mgr.14520) 546 : cluster [DBG] pgmap v937: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:58 vm01 bash[20728]: cluster 2026-03-09T16:07:56.839353+0000 mgr.y (mgr.14520) 546 : cluster [DBG] pgmap v937: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:58 vm01 bash[20728]: audit 2026-03-09T16:07:57.155532+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:58 vm01 bash[20728]: audit 2026-03-09T16:07:57.155532+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:58 vm01 bash[20728]: audit 2026-03-09T16:07:57.155891+0000 mon.a (mon.0) 3355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:58 vm01 bash[20728]: audit 2026-03-09T16:07:57.155891+0000 mon.a (mon.0) 3355 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:58 vm01 bash[28152]: audit 2026-03-09T16:07:56.755178+0000 mgr.y (mgr.14520) 545 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:58 vm01 bash[28152]: audit 2026-03-09T16:07:56.755178+0000 mgr.y (mgr.14520) 545 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:58 vm01 bash[28152]: cluster 2026-03-09T16:07:56.839353+0000 mgr.y (mgr.14520) 546 : cluster [DBG] pgmap v937: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:58 vm01 bash[28152]: cluster 2026-03-09T16:07:56.839353+0000 mgr.y (mgr.14520) 546 : cluster [DBG] pgmap v937: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:07:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:58 vm01 bash[28152]: audit 2026-03-09T16:07:57.155532+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:58 vm01 bash[28152]: audit 2026-03-09T16:07:57.155532+0000 mon.c (mon.2) 566 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:58 vm01 bash[28152]: audit 2026-03-09T16:07:57.155891+0000 mon.a (mon.0) 3355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:58.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:58 vm01 bash[28152]: audit 2026-03-09T16:07:57.155891+0000 mon.a (mon.0) 3355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:07:59.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:59 vm09 bash[22983]: audit 2026-03-09T16:07:58.117368+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:59 vm09 bash[22983]: audit 2026-03-09T16:07:58.117368+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:59 vm09 bash[22983]: audit 2026-03-09T16:07:58.124175+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:59 vm09 bash[22983]: audit 2026-03-09T16:07:58.124175+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:59 vm09 bash[22983]: cluster 2026-03-09T16:07:58.130547+0000 mon.a (mon.0) 3357 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T16:07:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:59 vm09 bash[22983]: cluster 2026-03-09T16:07:58.130547+0000 mon.a (mon.0) 3357 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T16:07:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:59 vm09 bash[22983]: audit 2026-03-09T16:07:58.131674+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:59.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:07:59 vm09 bash[22983]: audit 2026-03-09T16:07:58.131674+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:59 vm01 bash[28152]: audit 2026-03-09T16:07:58.117368+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:59 vm01 bash[28152]: audit 2026-03-09T16:07:58.117368+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:59 vm01 bash[28152]: audit 2026-03-09T16:07:58.124175+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:59 vm01 bash[28152]: audit 2026-03-09T16:07:58.124175+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:59 vm01 bash[28152]: cluster 2026-03-09T16:07:58.130547+0000 mon.a (mon.0) 3357 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T16:07:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:59 vm01 bash[28152]: cluster 2026-03-09T16:07:58.130547+0000 mon.a (mon.0) 3357 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T16:07:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:59 vm01 bash[28152]: audit 2026-03-09T16:07:58.131674+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:59.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:07:59 vm01 bash[28152]: audit 2026-03-09T16:07:58.131674+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:59 vm01 bash[20728]: audit 2026-03-09T16:07:58.117368+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:59 vm01 bash[20728]: audit 2026-03-09T16:07:58.117368+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:07:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:59 vm01 bash[20728]: audit 2026-03-09T16:07:58.124175+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:59 vm01 bash[20728]: audit 2026-03-09T16:07:58.124175+0000 mon.c (mon.2) 567 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:59 vm01 bash[20728]: cluster 2026-03-09T16:07:58.130547+0000 mon.a (mon.0) 3357 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T16:07:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:59 vm01 bash[20728]: cluster 2026-03-09T16:07:58.130547+0000 mon.a (mon.0) 3357 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-09T16:07:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:59 vm01 bash[20728]: audit 2026-03-09T16:07:58.131674+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:07:59.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:07:59 vm01 bash[20728]: audit 2026-03-09T16:07:58.131674+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]: dispatch 2026-03-09T16:08:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:00 vm09 bash[22983]: cluster 2026-03-09T16:07:58.839997+0000 mgr.y (mgr.14520) 547 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:00 vm09 bash[22983]: cluster 2026-03-09T16:07:58.839997+0000 mgr.y (mgr.14520) 547 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:00 vm09 bash[22983]: cluster 2026-03-09T16:07:59.119767+0000 mon.a (mon.0) 3359 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:00 vm09 bash[22983]: cluster 2026-03-09T16:07:59.119767+0000 mon.a (mon.0) 3359 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:00 vm09 bash[22983]: audit 2026-03-09T16:07:59.131172+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:08:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:00 vm09 bash[22983]: audit 2026-03-09T16:07:59.131172+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:08:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:00 vm09 bash[22983]: cluster 2026-03-09T16:07:59.157084+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T16:08:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:00 vm09 bash[22983]: cluster 2026-03-09T16:07:59.157084+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T16:08:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:00 vm09 bash[22983]: audit 2026-03-09T16:07:59.430149+0000 mon.a (mon.0) 3362 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:00.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:00 vm09 bash[22983]: audit 2026-03-09T16:07:59.430149+0000 mon.a (mon.0) 3362 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:00 vm01 bash[28152]: cluster 2026-03-09T16:07:58.839997+0000 mgr.y (mgr.14520) 547 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:00 vm01 bash[28152]: cluster 2026-03-09T16:07:58.839997+0000 mgr.y (mgr.14520) 547 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:00 vm01 bash[28152]: cluster 2026-03-09T16:07:59.119767+0000 mon.a (mon.0) 3359 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:00 vm01 bash[28152]: cluster 2026-03-09T16:07:59.119767+0000 mon.a (mon.0) 3359 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:00 vm01 bash[28152]: audit 2026-03-09T16:07:59.131172+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:00 vm01 bash[28152]: audit 2026-03-09T16:07:59.131172+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:00 vm01 bash[28152]: cluster 2026-03-09T16:07:59.157084+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:00 vm01 bash[28152]: cluster 2026-03-09T16:07:59.157084+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:00 vm01 bash[28152]: audit 2026-03-09T16:07:59.430149+0000 mon.a (mon.0) 3362 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:00 vm01 bash[28152]: audit 2026-03-09T16:07:59.430149+0000 mon.a (mon.0) 3362 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:00 vm01 bash[20728]: cluster 2026-03-09T16:07:58.839997+0000 mgr.y (mgr.14520) 547 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:00.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:00 vm01 bash[20728]: cluster 2026-03-09T16:07:58.839997+0000 mgr.y (mgr.14520) 547 : cluster [DBG] pgmap v940: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:00 vm01 bash[20728]: cluster 2026-03-09T16:07:59.119767+0000 mon.a (mon.0) 3359 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:00 vm01 bash[20728]: cluster 2026-03-09T16:07:59.119767+0000 mon.a (mon.0) 3359 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:00 vm01 bash[20728]: audit 2026-03-09T16:07:59.131172+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:08:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:00 vm01 bash[20728]: audit 2026-03-09T16:07:59.131172+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-122"}]': finished 2026-03-09T16:08:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:00 vm01 bash[20728]: cluster 2026-03-09T16:07:59.157084+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T16:08:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:00 vm01 bash[20728]: cluster 2026-03-09T16:07:59.157084+0000 mon.a (mon.0) 3361 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-09T16:08:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:00 vm01 bash[20728]: audit 2026-03-09T16:07:59.430149+0000 mon.a (mon.0) 3362 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:00.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:00 vm01 bash[20728]: audit 2026-03-09T16:07:59.430149+0000 mon.a (mon.0) 3362 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:01 vm09 bash[22983]: cluster 2026-03-09T16:08:00.316069+0000 mon.a (mon.0) 3363 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-09T16:08:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:01 vm09 bash[22983]: cluster 2026-03-09T16:08:00.316069+0000 mon.a (mon.0) 3363 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-09T16:08:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:01 vm09 bash[22983]: cluster 2026-03-09T16:08:00.840309+0000 mgr.y (mgr.14520) 548 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:01.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:01 vm09 bash[22983]: cluster 2026-03-09T16:08:00.840309+0000 mgr.y (mgr.14520) 548 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:01.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:01 vm01 bash[20728]: cluster 2026-03-09T16:08:00.316069+0000 mon.a (mon.0) 3363 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-09T16:08:01.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:01 vm01 bash[20728]: cluster 2026-03-09T16:08:00.316069+0000 mon.a (mon.0) 3363 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-09T16:08:01.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:01 vm01 bash[20728]: cluster 2026-03-09T16:08:00.840309+0000 mgr.y (mgr.14520) 548 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:01.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:01 vm01 bash[20728]: cluster 2026-03-09T16:08:00.840309+0000 mgr.y (mgr.14520) 548 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:01.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:01 vm01 bash[28152]: cluster 2026-03-09T16:08:00.316069+0000 mon.a (mon.0) 3363 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-09T16:08:01.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:01 vm01 bash[28152]: cluster 2026-03-09T16:08:00.316069+0000 mon.a (mon.0) 3363 : cluster [DBG] 
osdmap e605: 8 total, 8 up, 8 in 2026-03-09T16:08:01.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:01 vm01 bash[28152]: cluster 2026-03-09T16:08:00.840309+0000 mgr.y (mgr.14520) 548 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:01.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:01 vm01 bash[28152]: cluster 2026-03-09T16:08:00.840309+0000 mgr.y (mgr.14520) 548 : cluster [DBG] pgmap v943: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T16:08:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:02 vm09 bash[22983]: cluster 2026-03-09T16:08:01.343733+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T16:08:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:02 vm09 bash[22983]: cluster 2026-03-09T16:08:01.343733+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T16:08:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:02 vm09 bash[22983]: audit 2026-03-09T16:08:01.347943+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:02 vm09 bash[22983]: audit 2026-03-09T16:08:01.347943+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:02 vm09 bash[22983]: audit 2026-03-09T16:08:01.362847+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:02.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:02 vm09 bash[22983]: audit 2026-03-09T16:08:01.362847+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:02 vm01 bash[20728]: cluster 2026-03-09T16:08:01.343733+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:02 vm01 bash[20728]: cluster 2026-03-09T16:08:01.343733+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:02 vm01 bash[20728]: audit 2026-03-09T16:08:01.347943+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:02 vm01 bash[20728]: audit 2026-03-09T16:08:01.347943+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:02 vm01 bash[20728]: audit 2026-03-09T16:08:01.362847+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:02 vm01 bash[20728]: audit 2026-03-09T16:08:01.362847+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:02 vm01 bash[28152]: cluster 2026-03-09T16:08:01.343733+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:02 vm01 bash[28152]: cluster 2026-03-09T16:08:01.343733+0000 mon.a (mon.0) 3364 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:02 vm01 bash[28152]: audit 2026-03-09T16:08:01.347943+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:02 vm01 bash[28152]: audit 2026-03-09T16:08:01.347943+0000 mon.c (mon.2) 568 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:02 vm01 bash[28152]: audit 2026-03-09T16:08:01.362847+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:02.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:02 vm01 bash[28152]: audit 2026-03-09T16:08:01.362847+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:03.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:08:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:08:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:08:03.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:03 vm09 bash[22983]: audit 2026-03-09T16:08:02.344682+0000 mon.a (mon.0) 3366 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:03 vm09 bash[22983]: audit 2026-03-09T16:08:02.344682+0000 mon.a (mon.0) 3366 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:03 vm09 bash[22983]: cluster 2026-03-09T16:08:02.352848+0000 mon.a (mon.0) 3367 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T16:08:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:03 vm09 bash[22983]: cluster 2026-03-09T16:08:02.352848+0000 mon.a (mon.0) 3367 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T16:08:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:03 vm09 bash[22983]: cluster 2026-03-09T16:08:02.840728+0000 mgr.y (mgr.14520) 549 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:03.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:03 vm09 bash[22983]: cluster 2026-03-09T16:08:02.840728+0000 mgr.y (mgr.14520) 549 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:03 vm01 bash[28152]: audit 2026-03-09T16:08:02.344682+0000 mon.a (mon.0) 3366 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:03 vm01 bash[28152]: audit 2026-03-09T16:08:02.344682+0000 mon.a (mon.0) 3366 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:03 vm01 bash[28152]: cluster 2026-03-09T16:08:02.352848+0000 mon.a (mon.0) 3367 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:03 vm01 bash[28152]: cluster 2026-03-09T16:08:02.352848+0000 mon.a (mon.0) 3367 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:03 vm01 bash[28152]: cluster 2026-03-09T16:08:02.840728+0000 mgr.y (mgr.14520) 549 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:03 vm01 bash[28152]: cluster 2026-03-09T16:08:02.840728+0000 mgr.y (mgr.14520) 549 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:03 vm01 bash[20728]: audit 2026-03-09T16:08:02.344682+0000 mon.a (mon.0) 3366 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:03 vm01 bash[20728]: audit 2026-03-09T16:08:02.344682+0000 mon.a (mon.0) 3366 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:03 vm01 bash[20728]: cluster 2026-03-09T16:08:02.352848+0000 mon.a (mon.0) 3367 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:03 vm01 bash[20728]: cluster 2026-03-09T16:08:02.352848+0000 mon.a (mon.0) 3367 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:03 vm01 bash[20728]: cluster 2026-03-09T16:08:02.840728+0000 mgr.y (mgr.14520) 549 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:03.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:03 vm01 bash[20728]: cluster 2026-03-09T16:08:02.840728+0000 mgr.y (mgr.14520) 549 : cluster [DBG] pgmap v946: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:04 vm09 bash[22983]: cluster 2026-03-09T16:08:03.384590+0000 mon.a (mon.0) 3368 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T16:08:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:04 vm09 bash[22983]: cluster 2026-03-09T16:08:03.384590+0000 mon.a (mon.0) 3368 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T16:08:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:04 vm09 bash[22983]: audit 2026-03-09T16:08:03.429817+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:04 vm09 bash[22983]: audit 2026-03-09T16:08:03.429817+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:04 vm09 bash[22983]: audit 2026-03-09T16:08:03.430399+0000 mon.a (mon.0) 3369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:04.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:04 vm09 bash[22983]: audit 2026-03-09T16:08:03.430399+0000 mon.a (mon.0) 3369 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:04 vm01 bash[28152]: cluster 2026-03-09T16:08:03.384590+0000 mon.a (mon.0) 3368 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T16:08:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:04 vm01 bash[28152]: cluster 2026-03-09T16:08:03.384590+0000 mon.a (mon.0) 3368 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T16:08:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:04 vm01 bash[28152]: audit 2026-03-09T16:08:03.429817+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:04 vm01 bash[28152]: audit 2026-03-09T16:08:03.429817+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:04 vm01 bash[28152]: audit 2026-03-09T16:08:03.430399+0000 mon.a (mon.0) 3369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:04.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:04 vm01 bash[28152]: audit 2026-03-09T16:08:03.430399+0000 mon.a (mon.0) 3369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:04.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:04 vm01 bash[20728]: cluster 2026-03-09T16:08:03.384590+0000 mon.a (mon.0) 3368 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T16:08:04.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:04 vm01 bash[20728]: cluster 2026-03-09T16:08:03.384590+0000 mon.a (mon.0) 3368 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-09T16:08:04.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:04 vm01 bash[20728]: audit 2026-03-09T16:08:03.429817+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:04 vm01 bash[20728]: audit 2026-03-09T16:08:03.429817+0000 mon.c (mon.2) 569 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:04 vm01 bash[20728]: audit 2026-03-09T16:08:03.430399+0000 mon.a (mon.0) 3369 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:04.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:04 vm01 bash[20728]: audit 2026-03-09T16:08:03.430399+0000 mon.a (mon.0) 3369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:05 vm01 bash[28152]: audit 2026-03-09T16:08:04.390881+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:05 vm01 bash[28152]: audit 2026-03-09T16:08:04.390881+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:05 vm01 bash[28152]: cluster 2026-03-09T16:08:04.394276+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T16:08:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:05 vm01 bash[28152]: cluster 2026-03-09T16:08:04.394276+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T16:08:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:05 vm01 bash[28152]: audit 2026-03-09T16:08:04.400494+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:05 vm01 bash[28152]: audit 2026-03-09T16:08:04.400494+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:05 vm01 bash[28152]: audit 2026-03-09T16:08:04.400685+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:05 vm01 bash[28152]: audit 2026-03-09T16:08:04.400685+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:05 vm01 bash[28152]: cluster 2026-03-09T16:08:04.841070+0000 mgr.y (mgr.14520) 550 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:05.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:05 vm01 bash[28152]: cluster 2026-03-09T16:08:04.841070+0000 mgr.y (mgr.14520) 550 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:05 vm01 bash[20728]: audit 2026-03-09T16:08:04.390881+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:05 vm01 bash[20728]: audit 2026-03-09T16:08:04.390881+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:05 vm01 bash[20728]: cluster 2026-03-09T16:08:04.394276+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T16:08:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:05 vm01 bash[20728]: cluster 2026-03-09T16:08:04.394276+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T16:08:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:05 vm01 bash[20728]: audit 2026-03-09T16:08:04.400494+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:05 vm01 bash[20728]: audit 2026-03-09T16:08:04.400494+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:05 vm01 bash[20728]: audit 2026-03-09T16:08:04.400685+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:05 vm01 bash[20728]: audit 2026-03-09T16:08:04.400685+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:05 vm01 bash[20728]: cluster 2026-03-09T16:08:04.841070+0000 mgr.y (mgr.14520) 550 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:05.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:05 vm01 bash[20728]: cluster 2026-03-09T16:08:04.841070+0000 mgr.y (mgr.14520) 550 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:05.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:05 vm09 bash[22983]: audit 2026-03-09T16:08:04.390881+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:05 vm09 bash[22983]: audit 2026-03-09T16:08:04.390881+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:05 vm09 bash[22983]: cluster 2026-03-09T16:08:04.394276+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T16:08:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:05 vm09 bash[22983]: cluster 2026-03-09T16:08:04.394276+0000 mon.a (mon.0) 3371 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-09T16:08:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:05 vm09 bash[22983]: audit 2026-03-09T16:08:04.400494+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:05 vm09 bash[22983]: audit 2026-03-09T16:08:04.400494+0000 mon.c (mon.2) 570 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:05 vm09 bash[22983]: audit 2026-03-09T16:08:04.400685+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:05 vm09 bash[22983]: audit 2026-03-09T16:08:04.400685+0000 mon.a (mon.0) 3372 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:05 vm09 bash[22983]: cluster 2026-03-09T16:08:04.841070+0000 mgr.y (mgr.14520) 550 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:05.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:05 vm09 bash[22983]: cluster 2026-03-09T16:08:04.841070+0000 mgr.y (mgr.14520) 550 : cluster [DBG] pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:06 vm01 bash[28152]: audit 2026-03-09T16:08:05.403059+0000 mon.a (mon.0) 3373 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:06 vm01 bash[28152]: audit 2026-03-09T16:08:05.403059+0000 mon.a (mon.0) 3373 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:06 vm01 bash[28152]: cluster 2026-03-09T16:08:05.419555+0000 mon.a (mon.0) 3374 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T16:08:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:06 vm01 bash[28152]: cluster 2026-03-09T16:08:05.419555+0000 mon.a (mon.0) 3374 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T16:08:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:06 vm01 bash[28152]: audit 2026-03-09T16:08:05.420810+0000 mon.c (mon.2) 571 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:06 vm01 bash[28152]: audit 2026-03-09T16:08:05.420810+0000 mon.c (mon.2) 571 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:06 vm01 bash[28152]: audit 2026-03-09T16:08:05.421170+0000 mon.a (mon.0) 3375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:06.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:06 vm01 bash[28152]: audit 2026-03-09T16:08:05.421170+0000 mon.a (mon.0) 3375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:06 vm01 bash[20728]: audit 2026-03-09T16:08:05.403059+0000 mon.a (mon.0) 3373 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:06 vm01 bash[20728]: audit 2026-03-09T16:08:05.403059+0000 mon.a (mon.0) 3373 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:06 vm01 bash[20728]: cluster 2026-03-09T16:08:05.419555+0000 mon.a (mon.0) 3374 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T16:08:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:06 vm01 bash[20728]: cluster 2026-03-09T16:08:05.419555+0000 mon.a (mon.0) 3374 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T16:08:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:06 vm01 bash[20728]: audit 2026-03-09T16:08:05.420810+0000 mon.c (mon.2) 571 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:06 vm01 bash[20728]: audit 2026-03-09T16:08:05.420810+0000 mon.c (mon.2) 571 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:06 vm01 bash[20728]: audit 2026-03-09T16:08:05.421170+0000 mon.a (mon.0) 3375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:06.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:06 vm01 bash[20728]: audit 2026-03-09T16:08:05.421170+0000 mon.a (mon.0) 3375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:06.762 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:06 vm09 bash[22983]: audit 2026-03-09T16:08:05.403059+0000 mon.a (mon.0) 3373 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:06.762 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:06 vm09 bash[22983]: audit 2026-03-09T16:08:05.403059+0000 mon.a (mon.0) 3373 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:06.762 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:06 vm09 bash[22983]: cluster 2026-03-09T16:08:05.419555+0000 mon.a (mon.0) 3374 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T16:08:06.762 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:06 vm09 bash[22983]: cluster 2026-03-09T16:08:05.419555+0000 mon.a (mon.0) 3374 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-09T16:08:06.762 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:06 vm09 bash[22983]: audit 2026-03-09T16:08:05.420810+0000 mon.c (mon.2) 571 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:06.762 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:06 vm09 bash[22983]: audit 2026-03-09T16:08:05.420810+0000 mon.c (mon.2) 571 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:06.762 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:06 vm09 bash[22983]: audit 2026-03-09T16:08:05.421170+0000 mon.a (mon.0) 3375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:06.762 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:06 vm09 bash[22983]: audit 2026-03-09T16:08:05.421170+0000 mon.a (mon.0) 3375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]: dispatch 2026-03-09T16:08:07.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:08:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:08:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:07 vm09 bash[22983]: cluster 2026-03-09T16:08:06.403496+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:07 vm09 bash[22983]: cluster 2026-03-09T16:08:06.403496+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:07 vm09 bash[22983]: audit 2026-03-09T16:08:06.408665+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]': finished 2026-03-09T16:08:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:07 vm09 bash[22983]: audit 2026-03-09T16:08:06.408665+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]': finished 2026-03-09T16:08:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:07 vm09 bash[22983]: cluster 2026-03-09T16:08:06.413384+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T16:08:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:07 vm09 bash[22983]: cluster 2026-03-09T16:08:06.413384+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T16:08:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:07 vm09 bash[22983]: audit 2026-03-09T16:08:06.766025+0000 mgr.y (mgr.14520) 551 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:07 vm09 bash[22983]: audit 2026-03-09T16:08:06.766025+0000 mgr.y (mgr.14520) 551 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:07 vm09 bash[22983]: cluster 2026-03-09T16:08:06.841437+0000 mgr.y (mgr.14520) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:07.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:07 vm09 bash[22983]: cluster 2026-03-09T16:08:06.841437+0000 mgr.y (mgr.14520) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:07.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:07 vm01 bash[28152]: cluster 2026-03-09T16:08:06.403496+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:07 vm01 bash[28152]: cluster 2026-03-09T16:08:06.403496+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:07 vm01 bash[28152]: audit 2026-03-09T16:08:06.408665+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]': finished 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:07 vm01 bash[28152]: audit 2026-03-09T16:08:06.408665+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]': finished 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:07 vm01 bash[28152]: cluster 2026-03-09T16:08:06.413384+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:07 vm01 bash[28152]: cluster 2026-03-09T16:08:06.413384+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:07 vm01 bash[28152]: audit 2026-03-09T16:08:06.766025+0000 mgr.y (mgr.14520) 551 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:07 vm01 bash[28152]: audit 2026-03-09T16:08:06.766025+0000 mgr.y (mgr.14520) 551 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:07 vm01 bash[28152]: cluster 2026-03-09T16:08:06.841437+0000 mgr.y (mgr.14520) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:07 vm01 bash[28152]: cluster 2026-03-09T16:08:06.841437+0000 mgr.y (mgr.14520) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:07 vm01 bash[20728]: cluster 2026-03-09T16:08:06.403496+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:07 vm01 bash[20728]: cluster 2026-03-09T16:08:06.403496+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:07 vm01 bash[20728]: audit 2026-03-09T16:08:06.408665+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]': finished 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:07 vm01 bash[20728]: audit 2026-03-09T16:08:06.408665+0000 mon.a (mon.0) 3377 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-124", "mode": "writeback"}]': finished 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:07 vm01 bash[20728]: cluster 2026-03-09T16:08:06.413384+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:07 vm01 bash[20728]: cluster 2026-03-09T16:08:06.413384+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:07 vm01 bash[20728]: audit 2026-03-09T16:08:06.766025+0000 mgr.y (mgr.14520) 551 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:07 vm01 bash[20728]: audit 2026-03-09T16:08:06.766025+0000 mgr.y (mgr.14520) 551 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:07 vm01 bash[20728]: cluster 2026-03-09T16:08:06.841437+0000 mgr.y (mgr.14520) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:07.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:07 vm01 bash[20728]: cluster 2026-03-09T16:08:06.841437+0000 mgr.y (mgr.14520) 552 : cluster [DBG] pgmap v952: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T16:08:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:08 vm09 bash[22983]: cluster 2026-03-09T16:08:07.450128+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T16:08:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:08 vm09 bash[22983]: cluster 2026-03-09T16:08:07.450128+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T16:08:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:08 vm09 bash[22983]: audit 2026-03-09T16:08:07.654005+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:08 vm09 bash[22983]: audit 2026-03-09T16:08:07.654005+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:08 vm09 bash[22983]: audit 2026-03-09T16:08:07.654493+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:08.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:08 vm09 bash[22983]: audit 2026-03-09T16:08:07.654493+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:08.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:08 vm01 bash[28152]: cluster 2026-03-09T16:08:07.450128+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T16:08:08.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:08 vm01 bash[28152]: cluster 2026-03-09T16:08:07.450128+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T16:08:08.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:08 vm01 bash[28152]: audit 2026-03-09T16:08:07.654005+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:08.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:08 vm01 bash[28152]: audit 2026-03-09T16:08:07.654005+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:08.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:08 vm01 bash[28152]: audit 2026-03-09T16:08:07.654493+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:08.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:08 vm01 bash[28152]: audit 2026-03-09T16:08:07.654493+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:08.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:08 vm01 bash[20728]: cluster 2026-03-09T16:08:07.450128+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T16:08:08.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:08 vm01 bash[20728]: cluster 2026-03-09T16:08:07.450128+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-09T16:08:08.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:08 vm01 bash[20728]: audit 2026-03-09T16:08:07.654005+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:08.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:08 vm01 bash[20728]: audit 2026-03-09T16:08:07.654005+0000 mon.c (mon.2) 572 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:08.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:08 vm01 bash[20728]: audit 2026-03-09T16:08:07.654493+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:08.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:08 vm01 bash[20728]: audit 2026-03-09T16:08:07.654493+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:09.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:09 vm01 bash[28152]: audit 2026-03-09T16:08:08.527323+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:09.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:09 vm01 bash[28152]: audit 2026-03-09T16:08:08.527323+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:09.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:09 vm01 bash[28152]: cluster 2026-03-09T16:08:08.533774+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T16:08:09.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:09 vm01 bash[28152]: cluster 2026-03-09T16:08:08.533774+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T16:08:09.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:09 vm01 bash[28152]: audit 2026-03-09T16:08:08.542150+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:09.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:09 vm01 bash[28152]: audit 2026-03-09T16:08:08.542150+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:09.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:09 vm01 bash[28152]: audit 2026-03-09T16:08:08.544060+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:09.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:09 vm01 bash[28152]: audit 2026-03-09T16:08:08.544060+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:09.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:09 vm01 bash[28152]: cluster 2026-03-09T16:08:08.842462+0000 mgr.y (mgr.14520) 553 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:08:09.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:09 vm01 bash[28152]: cluster 2026-03-09T16:08:08.842462+0000 mgr.y (mgr.14520) 553 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:08:09.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:09 vm01 bash[20728]: audit 2026-03-09T16:08:08.527323+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:09.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:09 vm01 bash[20728]: audit 2026-03-09T16:08:08.527323+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:09.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:09 vm01 bash[20728]: cluster 2026-03-09T16:08:08.533774+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T16:08:09.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:09 vm01 bash[20728]: cluster 2026-03-09T16:08:08.533774+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T16:08:09.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:09 vm01 bash[20728]: audit 2026-03-09T16:08:08.542150+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:09.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:09 vm01 bash[20728]: audit 2026-03-09T16:08:08.542150+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:09.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:09 vm01 bash[20728]: audit 2026-03-09T16:08:08.544060+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:09.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:09 vm01 bash[20728]: audit 2026-03-09T16:08:08.544060+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:09.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:09 vm01 bash[20728]: cluster 2026-03-09T16:08:08.842462+0000 mgr.y (mgr.14520) 553 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:08:09.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:09 vm01 bash[20728]: cluster 2026-03-09T16:08:08.842462+0000 mgr.y (mgr.14520) 553 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:08:10.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:09 vm09 bash[22983]: audit 2026-03-09T16:08:08.527323+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:10.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:09 vm09 bash[22983]: audit 2026-03-09T16:08:08.527323+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:10.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:09 vm09 bash[22983]: cluster 2026-03-09T16:08:08.533774+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T16:08:10.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:09 vm09 bash[22983]: cluster 2026-03-09T16:08:08.533774+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-09T16:08:10.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:09 vm09 bash[22983]: audit 2026-03-09T16:08:08.542150+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:10.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:09 vm09 bash[22983]: audit 2026-03-09T16:08:08.542150+0000 mon.c (mon.2) 573 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:09 vm09 bash[22983]: audit 2026-03-09T16:08:08.544060+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:09 vm09 bash[22983]: audit 2026-03-09T16:08:08.544060+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]: dispatch 2026-03-09T16:08:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:09 vm09 bash[22983]: cluster 2026-03-09T16:08:08.842462+0000 mgr.y (mgr.14520) 553 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:08:10.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:09 vm09 bash[22983]: cluster 2026-03-09T16:08:08.842462+0000 mgr.y (mgr.14520) 553 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:08:10.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:10 vm01 bash[28152]: cluster 2026-03-09T16:08:09.594750+0000 mon.a (mon.0) 3384 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:10.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:10 vm01 bash[28152]: cluster 2026-03-09T16:08:09.594750+0000 mon.a (mon.0) 3384 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:10.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:10 vm01 bash[28152]: audit 2026-03-09T16:08:09.850024+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:10.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:10 vm01 bash[28152]: audit 2026-03-09T16:08:09.850024+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:10.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:10 vm01 bash[28152]: cluster 2026-03-09T16:08:09.860845+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T16:08:10.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:10 vm01 bash[28152]: cluster 2026-03-09T16:08:09.860845+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T16:08:10.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:10 vm01 bash[20728]: cluster 2026-03-09T16:08:09.594750+0000 mon.a (mon.0) 3384 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:10.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:10 vm01 bash[20728]: cluster 2026-03-09T16:08:09.594750+0000 mon.a (mon.0) 3384 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:10.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:10 vm01 bash[20728]: audit 2026-03-09T16:08:09.850024+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:10.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:10 vm01 bash[20728]: audit 2026-03-09T16:08:09.850024+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:10.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:10 vm01 bash[20728]: cluster 2026-03-09T16:08:09.860845+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T16:08:10.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:10 vm01 bash[20728]: cluster 2026-03-09T16:08:09.860845+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T16:08:11.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:10 vm09 bash[22983]: cluster 2026-03-09T16:08:09.594750+0000 mon.a (mon.0) 3384 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:11.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:10 vm09 bash[22983]: cluster 2026-03-09T16:08:09.594750+0000 mon.a (mon.0) 3384 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:11.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:10 vm09 bash[22983]: audit 2026-03-09T16:08:09.850024+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:11.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:10 vm09 bash[22983]: audit 2026-03-09T16:08:09.850024+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-124"}]': finished 2026-03-09T16:08:11.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:10 vm09 bash[22983]: cluster 2026-03-09T16:08:09.860845+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T16:08:11.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:10 vm09 bash[22983]: cluster 2026-03-09T16:08:09.860845+0000 mon.a (mon.0) 3386 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-09T16:08:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:11 vm09 bash[22983]: cluster 2026-03-09T16:08:10.842813+0000 mgr.y (mgr.14520) 554 : cluster [DBG] pgmap v957: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:08:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:11 vm09 bash[22983]: cluster 2026-03-09T16:08:10.842813+0000 mgr.y (mgr.14520) 554 : cluster [DBG] pgmap v957: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:08:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:11 vm09 bash[22983]: cluster 2026-03-09T16:08:10.859413+0000 mon.a (mon.0) 3387 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T16:08:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:11 vm09 bash[22983]: cluster 2026-03-09T16:08:10.859413+0000 mon.a (mon.0) 3387 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T16:08:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:11 vm09 bash[22983]: cluster 2026-03-09T16:08:11.677116+0000 mon.a (mon.0) 3388 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:11 vm09 bash[22983]: cluster 2026-03-09T16:08:11.677116+0000 mon.a (mon.0) 3388 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:11 vm01 bash[28152]: cluster 2026-03-09T16:08:10.842813+0000 mgr.y (mgr.14520) 554 : cluster [DBG] pgmap v957: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:11 vm01 bash[28152]: cluster 2026-03-09T16:08:10.842813+0000 mgr.y (mgr.14520) 554 : cluster [DBG] pgmap v957: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:11 vm01 bash[28152]: cluster 2026-03-09T16:08:10.859413+0000 mon.a (mon.0) 3387 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:11 vm01 bash[28152]: cluster 2026-03-09T16:08:10.859413+0000 mon.a (mon.0) 3387 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:11 vm01 bash[28152]: cluster 
2026-03-09T16:08:11.677116+0000 mon.a (mon.0) 3388 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:11 vm01 bash[28152]: cluster 2026-03-09T16:08:11.677116+0000 mon.a (mon.0) 3388 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:11 vm01 bash[20728]: cluster 2026-03-09T16:08:10.842813+0000 mgr.y (mgr.14520) 554 : cluster [DBG] pgmap v957: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:11 vm01 bash[20728]: cluster 2026-03-09T16:08:10.842813+0000 mgr.y (mgr.14520) 554 : cluster [DBG] pgmap v957: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:11 vm01 bash[20728]: cluster 2026-03-09T16:08:10.859413+0000 mon.a (mon.0) 3387 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:11 vm01 bash[20728]: cluster 2026-03-09T16:08:10.859413+0000 mon.a (mon.0) 3387 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:11 vm01 bash[20728]: cluster 2026-03-09T16:08:11.677116+0000 mon.a (mon.0) 3388 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:12.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:11 vm01 bash[20728]: cluster 2026-03-09T16:08:11.677116+0000 mon.a (mon.0) 3388 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:13.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:08:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:08:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:08:13.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:13 vm09 bash[22983]: cluster 2026-03-09T16:08:11.893653+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T16:08:13.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:13 vm09 bash[22983]: cluster 2026-03-09T16:08:11.893653+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T16:08:13.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:13 vm09 bash[22983]: audit 2026-03-09T16:08:11.896871+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:13.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:13 vm09 bash[22983]: audit 2026-03-09T16:08:11.896871+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:13.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:13 vm09 bash[22983]: audit 2026-03-09T16:08:11.899038+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:13.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:13 vm09 bash[22983]: audit 2026-03-09T16:08:11.899038+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:13 vm01 bash[28152]: cluster 2026-03-09T16:08:11.893653+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T16:08:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:13 vm01 bash[28152]: cluster 2026-03-09T16:08:11.893653+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T16:08:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:13 vm01 bash[28152]: audit 2026-03-09T16:08:11.896871+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:13 vm01 bash[28152]: audit 2026-03-09T16:08:11.896871+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:13 vm01 bash[28152]: audit 2026-03-09T16:08:11.899038+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:13 vm01 bash[28152]: audit 2026-03-09T16:08:11.899038+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:13 vm01 bash[20728]: cluster 2026-03-09T16:08:11.893653+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T16:08:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:13 vm01 bash[20728]: cluster 2026-03-09T16:08:11.893653+0000 mon.a (mon.0) 3389 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-09T16:08:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:13 vm01 bash[20728]: audit 2026-03-09T16:08:11.896871+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:13 vm01 bash[20728]: audit 2026-03-09T16:08:11.896871+0000 mon.c (mon.2) 574 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:13 vm01 bash[20728]: audit 2026-03-09T16:08:11.899038+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:13 vm01 bash[20728]: audit 2026-03-09T16:08:11.899038+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: cluster 2026-03-09T16:08:12.843114+0000 mgr.y (mgr.14520) 555 : cluster [DBG] pgmap v960: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: cluster 2026-03-09T16:08:12.843114+0000 mgr.y (mgr.14520) 555 : cluster [DBG] pgmap v960: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:13.336600+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:13.336600+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: cluster 2026-03-09T16:08:13.361205+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: cluster 2026-03-09T16:08:13.361205+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:13.363850+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:13.363850+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:13.364744+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:13.364744+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:14.034347+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:14.034347+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: cluster 2026-03-09T16:08:14.036981+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: cluster 2026-03-09T16:08:14.036981+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:14.042886+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:14.042886+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:14.043358+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:14.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:14 vm09 bash[22983]: audit 2026-03-09T16:08:14.043358+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: cluster 2026-03-09T16:08:12.843114+0000 mgr.y (mgr.14520) 555 : cluster [DBG] pgmap v960: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: cluster 2026-03-09T16:08:12.843114+0000 mgr.y (mgr.14520) 555 : cluster [DBG] pgmap v960: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:13.336600+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:13.336600+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: cluster 2026-03-09T16:08:13.361205+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: cluster 2026-03-09T16:08:13.361205+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:13.363850+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:13.363850+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:13.364744+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:13.364744+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:14.034347+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:14.034347+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: cluster 2026-03-09T16:08:14.036981+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: cluster 2026-03-09T16:08:14.036981+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:14.042886+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:14.042886+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:14.043358+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:14 vm01 bash[28152]: audit 2026-03-09T16:08:14.043358+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: cluster 2026-03-09T16:08:12.843114+0000 mgr.y (mgr.14520) 555 : cluster [DBG] pgmap v960: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: cluster 2026-03-09T16:08:12.843114+0000 mgr.y (mgr.14520) 555 : cluster [DBG] pgmap v960: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:13.336600+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:13.336600+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: cluster 2026-03-09T16:08:13.361205+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: cluster 2026-03-09T16:08:13.361205+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:13.363850+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:13.363850+0000 mon.c (mon.2) 575 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:13.364744+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:13.364744+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:14.034347+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:14.034347+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: cluster 2026-03-09T16:08:14.036981+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: cluster 2026-03-09T16:08:14.036981+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:14.042886+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:14.042886+0000 mon.c (mon.2) 576 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:14.043358+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:14 vm01 bash[20728]: audit 2026-03-09T16:08:14.043358+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: audit 2026-03-09T16:08:14.437929+0000 mon.a (mon.0) 3397 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: audit 2026-03-09T16:08:14.437929+0000 mon.a (mon.0) 3397 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: cluster 2026-03-09T16:08:14.843740+0000 mgr.y (mgr.14520) 556 : cluster [DBG] pgmap v963: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:08:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: cluster 2026-03-09T16:08:14.843740+0000 mgr.y (mgr.14520) 556 : cluster [DBG] pgmap v963: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:08:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: audit 2026-03-09T16:08:15.037990+0000 mon.a (mon.0) 3398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: audit 2026-03-09T16:08:15.037990+0000 mon.a (mon.0) 3398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: cluster 2026-03-09T16:08:15.045751+0000 mon.a (mon.0) 3399 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T16:08:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: cluster 2026-03-09T16:08:15.045751+0000 mon.a (mon.0) 3399 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T16:08:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: audit 2026-03-09T16:08:15.051854+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: audit 2026-03-09T16:08:15.051854+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: audit 2026-03-09T16:08:15.052169+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:15 vm09 bash[22983]: audit 2026-03-09T16:08:15.052169+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: audit 2026-03-09T16:08:14.437929+0000 mon.a (mon.0) 3397 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: audit 2026-03-09T16:08:14.437929+0000 mon.a (mon.0) 3397 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: cluster 2026-03-09T16:08:14.843740+0000 mgr.y (mgr.14520) 556 : cluster [DBG] pgmap v963: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: cluster 2026-03-09T16:08:14.843740+0000 mgr.y (mgr.14520) 556 : cluster [DBG] pgmap v963: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: audit 2026-03-09T16:08:15.037990+0000 mon.a (mon.0) 3398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: audit 2026-03-09T16:08:15.037990+0000 mon.a (mon.0) 3398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: cluster 2026-03-09T16:08:15.045751+0000 mon.a (mon.0) 3399 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: cluster 2026-03-09T16:08:15.045751+0000 mon.a (mon.0) 3399 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: audit 2026-03-09T16:08:15.051854+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: audit 2026-03-09T16:08:15.051854+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: audit 2026-03-09T16:08:15.052169+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:15 vm01 bash[20728]: audit 2026-03-09T16:08:15.052169+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: audit 2026-03-09T16:08:14.437929+0000 mon.a (mon.0) 3397 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: audit 2026-03-09T16:08:14.437929+0000 mon.a (mon.0) 3397 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: cluster 2026-03-09T16:08:14.843740+0000 mgr.y (mgr.14520) 556 : cluster [DBG] pgmap v963: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: cluster 2026-03-09T16:08:14.843740+0000 mgr.y (mgr.14520) 556 : cluster [DBG] pgmap v963: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: audit 2026-03-09T16:08:15.037990+0000 mon.a (mon.0) 3398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: audit 2026-03-09T16:08:15.037990+0000 mon.a (mon.0) 3398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: cluster 2026-03-09T16:08:15.045751+0000 mon.a (mon.0) 3399 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: cluster 2026-03-09T16:08:15.045751+0000 mon.a (mon.0) 3399 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: audit 2026-03-09T16:08:15.051854+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: audit 2026-03-09T16:08:15.051854+0000 mon.c (mon.2) 577 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: audit 2026-03-09T16:08:15.052169+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:15 vm01 bash[28152]: audit 2026-03-09T16:08:15.052169+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]: dispatch 2026-03-09T16:08:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:16 vm09 bash[22983]: cluster 2026-03-09T16:08:16.038089+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:16 vm09 bash[22983]: cluster 2026-03-09T16:08:16.038089+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:16 vm09 bash[22983]: audit 2026-03-09T16:08:16.041694+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]': finished 2026-03-09T16:08:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:16 vm09 bash[22983]: audit 2026-03-09T16:08:16.041694+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]': finished 2026-03-09T16:08:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:16 vm09 bash[22983]: cluster 2026-03-09T16:08:16.056954+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T16:08:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:16 vm09 bash[22983]: cluster 2026-03-09T16:08:16.056954+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T16:08:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:16 vm09 bash[22983]: audit 2026-03-09T16:08:16.133219+0000 mon.c (mon.2) 578 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:16 vm09 bash[22983]: audit 2026-03-09T16:08:16.133219+0000 mon.c (mon.2) 578 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:16 vm09 bash[22983]: audit 2026-03-09T16:08:16.133524+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:16 vm09 bash[22983]: audit 2026-03-09T16:08:16.133524+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:16 vm01 bash[20728]: cluster 2026-03-09T16:08:16.038089+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:16 vm01 bash[20728]: cluster 2026-03-09T16:08:16.038089+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:16 vm01 bash[20728]: audit 2026-03-09T16:08:16.041694+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]': finished 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:16 vm01 bash[20728]: audit 2026-03-09T16:08:16.041694+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]': finished 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:16 vm01 bash[20728]: cluster 2026-03-09T16:08:16.056954+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:16 vm01 bash[20728]: cluster 2026-03-09T16:08:16.056954+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:16 vm01 bash[20728]: audit 2026-03-09T16:08:16.133219+0000 mon.c (mon.2) 578 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:16 vm01 bash[20728]: audit 2026-03-09T16:08:16.133219+0000 mon.c (mon.2) 578 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:16 vm01 bash[20728]: audit 2026-03-09T16:08:16.133524+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:16 vm01 bash[20728]: audit 2026-03-09T16:08:16.133524+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:16 vm01 bash[28152]: cluster 2026-03-09T16:08:16.038089+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:16 vm01 bash[28152]: cluster 2026-03-09T16:08:16.038089+0000 mon.a (mon.0) 3401 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:16 vm01 bash[28152]: audit 2026-03-09T16:08:16.041694+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]': finished 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:16 vm01 bash[28152]: audit 2026-03-09T16:08:16.041694+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-126", "mode": "writeback"}]': finished 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:16 vm01 bash[28152]: cluster 2026-03-09T16:08:16.056954+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:16 vm01 bash[28152]: cluster 2026-03-09T16:08:16.056954+0000 mon.a (mon.0) 3403 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:16 vm01 bash[28152]: audit 2026-03-09T16:08:16.133219+0000 mon.c (mon.2) 578 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:16 vm01 bash[28152]: audit 2026-03-09T16:08:16.133219+0000 mon.c (mon.2) 578 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:16 vm01 bash[28152]: audit 2026-03-09T16:08:16.133524+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:16.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:16 vm01 bash[28152]: audit 2026-03-09T16:08:16.133524+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:17.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:08:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:08:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: audit 2026-03-09T16:08:16.776993+0000 mgr.y (mgr.14520) 557 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: audit 2026-03-09T16:08:16.776993+0000 mgr.y (mgr.14520) 557 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: cluster 2026-03-09T16:08:16.844219+0000 mgr.y (mgr.14520) 558 : cluster [DBG] pgmap v966: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: cluster 2026-03-09T16:08:16.844219+0000 mgr.y (mgr.14520) 558 : cluster [DBG] pgmap v966: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: audit 2026-03-09T16:08:17.045489+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: audit 2026-03-09T16:08:17.045489+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: audit 2026-03-09T16:08:17.051058+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: audit 2026-03-09T16:08:17.051058+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: cluster 2026-03-09T16:08:17.052287+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T16:08:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: cluster 2026-03-09T16:08:17.052287+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T16:08:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: audit 2026-03-09T16:08:17.052654+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:18.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:18 vm09 bash[22983]: audit 2026-03-09T16:08:17.052654+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:18.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: audit 2026-03-09T16:08:16.776993+0000 mgr.y (mgr.14520) 557 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:18.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: audit 2026-03-09T16:08:16.776993+0000 mgr.y (mgr.14520) 557 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:18.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: cluster 2026-03-09T16:08:16.844219+0000 mgr.y (mgr.14520) 558 : cluster [DBG] pgmap v966: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:18.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: cluster 2026-03-09T16:08:16.844219+0000 mgr.y (mgr.14520) 558 : cluster [DBG] pgmap v966: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:18.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: audit 2026-03-09T16:08:17.045489+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:18.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: audit 2026-03-09T16:08:17.045489+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:18.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: audit 2026-03-09T16:08:17.051058+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:18.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: audit 2026-03-09T16:08:17.051058+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:18.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: cluster 2026-03-09T16:08:17.052287+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T16:08:18.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: cluster 2026-03-09T16:08:17.052287+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: audit 2026-03-09T16:08:17.052654+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:18 vm01 bash[28152]: audit 2026-03-09T16:08:17.052654+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: audit 2026-03-09T16:08:16.776993+0000 mgr.y (mgr.14520) 557 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: audit 2026-03-09T16:08:16.776993+0000 mgr.y (mgr.14520) 557 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: cluster 2026-03-09T16:08:16.844219+0000 mgr.y (mgr.14520) 558 : cluster [DBG] pgmap v966: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: cluster 2026-03-09T16:08:16.844219+0000 mgr.y (mgr.14520) 558 : cluster [DBG] pgmap v966: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: audit 2026-03-09T16:08:17.045489+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: audit 2026-03-09T16:08:17.045489+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: audit 2026-03-09T16:08:17.051058+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: audit 2026-03-09T16:08:17.051058+0000 mon.c (mon.2) 579 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: cluster 2026-03-09T16:08:17.052287+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: cluster 2026-03-09T16:08:17.052287+0000 mon.a (mon.0) 3406 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: audit 2026-03-09T16:08:17.052654+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:18.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:18 vm01 bash[20728]: audit 2026-03-09T16:08:17.052654+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]: dispatch 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringFlush (9431 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapHasChunk 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapHasChunk (6089 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollback 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollback (5035 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollbackRefcount 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollbackRefcount (25712 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictRollback 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictRollback (13147 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PropagateBaseTierError 2026-03-09T16:08:19.296 
INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PropagateBaseTierError (12286 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HelloWriteReturn 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: 00000000 79 6f 75 20 6d 69 67 68 74 20 73 65 65 20 74 68 |you might see th| 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: 00000010 69 73 |is| 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: 00000012 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HelloWriteReturn (12270 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier (6036 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP (561268 ms total) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.Dirty 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.Dirty (1368 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.FlushWriteRaces 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.FlushWriteRaces (11091 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.CallForcesPromote 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.CallForcesPromote (18590 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.HitSetNone 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.HitSetNone (2 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP (31051 ms total) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Overlay 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Overlay (7124 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Promote 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Promote (8095 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnap 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: waiting for scrub... 
2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: done waiting 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnap (24713 ms) 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace 2026-03-09T16:08:19.296 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace (10028 ms) 2026-03-09T16:08:19.297 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Whiteout 2026-03-09T16:08:19.297 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Whiteout (8191 ms) 2026-03-09T16:08:19.297 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Evict 2026-03-09T16:08:19.297 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Evict (8129 ms) 2026-03-09T16:08:19.297 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.EvictSnap 2026-03-09T16:08:19.297 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.EvictSnap (10531 ms) 2026-03-09T16:08:19.297 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlush 2026-03-09T16:08:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:19 vm09 bash[22983]: cluster 2026-03-09T16:08:18.045696+0000 mon.a (mon.0) 3408 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:19 vm09 bash[22983]: cluster 2026-03-09T16:08:18.045696+0000 mon.a (mon.0) 3408 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:19 vm09 bash[22983]: audit 2026-03-09T16:08:18.178749+0000 mon.a (mon.0) 3409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:19 vm09 bash[22983]: audit 2026-03-09T16:08:18.178749+0000 mon.a (mon.0) 3409 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:19 vm09 bash[22983]: cluster 2026-03-09T16:08:18.233088+0000 mon.a (mon.0) 3410 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T16:08:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:19 vm09 bash[22983]: cluster 2026-03-09T16:08:18.233088+0000 mon.a (mon.0) 3410 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T16:08:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:19 vm09 bash[22983]: cluster 2026-03-09T16:08:18.844790+0000 mgr.y (mgr.14520) 559 : cluster [DBG] pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:08:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:19 vm09 bash[22983]: cluster 2026-03-09T16:08:18.844790+0000 mgr.y (mgr.14520) 559 : cluster [DBG] pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:08:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:19 vm09 bash[22983]: cluster 2026-03-09T16:08:19.005745+0000 mon.a (mon.0) 3411 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:19.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:19 vm09 bash[22983]: cluster 2026-03-09T16:08:19.005745+0000 mon.a (mon.0) 3411 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:19 vm01 bash[28152]: cluster 2026-03-09T16:08:18.045696+0000 mon.a (mon.0) 3408 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:19 vm01 bash[28152]: cluster 2026-03-09T16:08:18.045696+0000 mon.a (mon.0) 3408 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:19 vm01 bash[28152]: audit 2026-03-09T16:08:18.178749+0000 mon.a (mon.0) 3409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:19 vm01 bash[28152]: audit 2026-03-09T16:08:18.178749+0000 mon.a (mon.0) 3409 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:19 vm01 bash[28152]: cluster 2026-03-09T16:08:18.233088+0000 mon.a (mon.0) 3410 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:19 vm01 bash[28152]: cluster 2026-03-09T16:08:18.233088+0000 mon.a (mon.0) 3410 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:19 vm01 bash[28152]: cluster 2026-03-09T16:08:18.844790+0000 mgr.y (mgr.14520) 559 : cluster [DBG] pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:19 vm01 bash[28152]: cluster 2026-03-09T16:08:18.844790+0000 mgr.y (mgr.14520) 559 : cluster [DBG] pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:19 vm01 bash[28152]: cluster 2026-03-09T16:08:19.005745+0000 mon.a (mon.0) 3411 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:19 vm01 bash[28152]: cluster 2026-03-09T16:08:19.005745+0000 mon.a (mon.0) 3411 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:19 vm01 bash[20728]: cluster 2026-03-09T16:08:18.045696+0000 mon.a (mon.0) 3408 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:19 vm01 bash[20728]: cluster 2026-03-09T16:08:18.045696+0000 mon.a (mon.0) 3408 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:19 vm01 bash[20728]: audit 2026-03-09T16:08:18.178749+0000 mon.a (mon.0) 3409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:19 vm01 bash[20728]: audit 2026-03-09T16:08:18.178749+0000 mon.a (mon.0) 3409 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-126"}]': finished 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:19 vm01 bash[20728]: cluster 2026-03-09T16:08:18.233088+0000 mon.a (mon.0) 3410 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:19 vm01 bash[20728]: cluster 2026-03-09T16:08:18.233088+0000 mon.a (mon.0) 3410 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:19 vm01 bash[20728]: cluster 2026-03-09T16:08:18.844790+0000 mgr.y (mgr.14520) 559 : cluster [DBG] pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:19 vm01 bash[20728]: cluster 2026-03-09T16:08:18.844790+0000 mgr.y (mgr.14520) 559 : cluster [DBG] pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:19 vm01 bash[20728]: cluster 2026-03-09T16:08:19.005745+0000 mon.a (mon.0) 3411 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:19.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:19 vm01 bash[20728]: cluster 2026-03-09T16:08:19.005745+0000 mon.a (mon.0) 3411 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:20 vm09 bash[22983]: cluster 2026-03-09T16:08:19.299182+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T16:08:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:20 vm09 bash[22983]: cluster 2026-03-09T16:08:19.299182+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T16:08:20.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:20 vm01 bash[28152]: cluster 2026-03-09T16:08:19.299182+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T16:08:20.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:20 vm01 bash[28152]: cluster 2026-03-09T16:08:19.299182+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T16:08:20.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:20 vm01 bash[20728]: cluster 2026-03-09T16:08:19.299182+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T16:08:20.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:20 vm01 bash[20728]: cluster 2026-03-09T16:08:19.299182+0000 mon.a (mon.0) 3412 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-09T16:08:21.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:21 vm09 bash[22983]: cluster 2026-03-09T16:08:20.299889+0000 mon.a (mon.0) 3413 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T16:08:21.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:21 vm09 bash[22983]: cluster 2026-03-09T16:08:20.299889+0000 mon.a (mon.0) 3413 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T16:08:21.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:21 vm09 bash[22983]: audit 2026-03-09T16:08:20.320047+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:21 vm09 bash[22983]: audit 2026-03-09T16:08:20.320047+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:21 vm09 bash[22983]: audit 2026-03-09T16:08:20.320542+0000 mon.a (mon.0) 3414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:21 vm09 bash[22983]: audit 2026-03-09T16:08:20.320542+0000 mon.a (mon.0) 3414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:21 vm09 bash[22983]: cluster 2026-03-09T16:08:20.845107+0000 mgr.y (mgr.14520) 560 : cluster [DBG] pgmap v972: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:21.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:21 vm09 bash[22983]: cluster 2026-03-09T16:08:20.845107+0000 mgr.y (mgr.14520) 560 : cluster [DBG] pgmap v972: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:21 vm01 bash[28152]: cluster 2026-03-09T16:08:20.299889+0000 mon.a (mon.0) 3413 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:21 vm01 bash[28152]: cluster 2026-03-09T16:08:20.299889+0000 mon.a (mon.0) 3413 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:21 vm01 bash[28152]: audit 2026-03-09T16:08:20.320047+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:21 vm01 bash[28152]: audit 2026-03-09T16:08:20.320047+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:21 vm01 bash[28152]: audit 2026-03-09T16:08:20.320542+0000 mon.a (mon.0) 3414 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:21 vm01 bash[28152]: audit 2026-03-09T16:08:20.320542+0000 mon.a (mon.0) 3414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:21 vm01 bash[28152]: cluster 2026-03-09T16:08:20.845107+0000 mgr.y (mgr.14520) 560 : cluster [DBG] pgmap v972: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:21 vm01 bash[28152]: cluster 2026-03-09T16:08:20.845107+0000 mgr.y (mgr.14520) 560 : cluster [DBG] pgmap v972: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:21 vm01 bash[20728]: cluster 2026-03-09T16:08:20.299889+0000 mon.a (mon.0) 3413 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:21 vm01 bash[20728]: cluster 2026-03-09T16:08:20.299889+0000 mon.a (mon.0) 3413 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:21 vm01 bash[20728]: audit 2026-03-09T16:08:20.320047+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:21 vm01 bash[20728]: audit 2026-03-09T16:08:20.320047+0000 mon.c (mon.2) 580 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:21 vm01 bash[20728]: audit 2026-03-09T16:08:20.320542+0000 mon.a (mon.0) 3414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:21 vm01 bash[20728]: audit 2026-03-09T16:08:20.320542+0000 mon.a (mon.0) 3414 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:21.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:21 vm01 bash[20728]: cluster 2026-03-09T16:08:20.845107+0000 mgr.y (mgr.14520) 560 : cluster [DBG] pgmap v972: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:21.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:21 vm01 bash[20728]: cluster 2026-03-09T16:08:20.845107+0000 mgr.y (mgr.14520) 560 : cluster [DBG] pgmap v972: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:21.302808+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:21.302808+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: cluster 2026-03-09T16:08:21.312049+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: cluster 2026-03-09T16:08:21.312049+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:21.353445+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:21.353445+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:21.353794+0000 mon.a (mon.0) 3417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:21.353794+0000 mon.a (mon.0) 3417 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:22.310012+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:22.310012+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: cluster 2026-03-09T16:08:22.315170+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: cluster 2026-03-09T16:08:22.315170+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:22.323770+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:22.323770+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:22.325350+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:22.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:22 vm09 bash[22983]: audit 2026-03-09T16:08:22.325350+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:22.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:21.302808+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:22.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:21.302808+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: cluster 2026-03-09T16:08:21.312049+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: cluster 2026-03-09T16:08:21.312049+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:21.353445+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:21.353445+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:21.353794+0000 mon.a (mon.0) 3417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:21.353794+0000 mon.a (mon.0) 3417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:22.310012+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:22.310012+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: cluster 2026-03-09T16:08:22.315170+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: cluster 2026-03-09T16:08:22.315170+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:22.323770+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:22.323770+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:22.325350+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:22 vm01 bash[28152]: audit 2026-03-09T16:08:22.325350+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:21.302808+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:21.302808+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: cluster 2026-03-09T16:08:21.312049+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: cluster 2026-03-09T16:08:21.312049+0000 mon.a (mon.0) 3416 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:21.353445+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:21.353445+0000 mon.c (mon.2) 581 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:21.353794+0000 mon.a (mon.0) 3417 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:21.353794+0000 mon.a (mon.0) 3417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:22.310012+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:22.310012+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: cluster 2026-03-09T16:08:22.315170+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: cluster 2026-03-09T16:08:22.315170+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:22.323770+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:22.323770+0000 mon.c (mon.2) 582 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:22.325350+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:22.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:22 vm01 bash[20728]: audit 2026-03-09T16:08:22.325350+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:23.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:08:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:08:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:08:23.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:23 vm09 bash[22983]: cluster 2026-03-09T16:08:22.845566+0000 mgr.y (mgr.14520) 561 : cluster [DBG] pgmap v975: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:08:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:23 vm09 bash[22983]: cluster 2026-03-09T16:08:22.845566+0000 mgr.y (mgr.14520) 561 : cluster [DBG] pgmap v975: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:08:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:23 vm09 bash[22983]: audit 2026-03-09T16:08:23.313952+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:23 vm09 bash[22983]: audit 2026-03-09T16:08:23.313952+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:23 vm09 bash[22983]: audit 2026-03-09T16:08:23.319887+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:23 vm09 bash[22983]: audit 2026-03-09T16:08:23.319887+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:23 vm09 bash[22983]: cluster 2026-03-09T16:08:23.324295+0000 mon.a (mon.0) 3422 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T16:08:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:23 vm09 bash[22983]: cluster 2026-03-09T16:08:23.324295+0000 mon.a (mon.0) 3422 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T16:08:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:23 vm09 bash[22983]: audit 2026-03-09T16:08:23.324846+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:23.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:23 vm09 bash[22983]: audit 2026-03-09T16:08:23.324846+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:23.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:23 vm01 bash[20728]: cluster 2026-03-09T16:08:22.845566+0000 mgr.y (mgr.14520) 561 : cluster [DBG] pgmap v975: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:08:23.692 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:23 vm01 bash[20728]: cluster 2026-03-09T16:08:22.845566+0000 mgr.y (mgr.14520) 561 : cluster [DBG] pgmap v975: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:23 vm01 bash[20728]: audit 2026-03-09T16:08:23.313952+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:23 vm01 bash[20728]: audit 2026-03-09T16:08:23.313952+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:23 vm01 bash[20728]: audit 2026-03-09T16:08:23.319887+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:23 vm01 bash[20728]: audit 2026-03-09T16:08:23.319887+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:23 vm01 bash[20728]: cluster 2026-03-09T16:08:23.324295+0000 mon.a (mon.0) 3422 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:23 vm01 bash[20728]: cluster 2026-03-09T16:08:23.324295+0000 mon.a (mon.0) 3422 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:23 vm01 bash[20728]: audit 2026-03-09T16:08:23.324846+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:23 vm01 bash[20728]: audit 2026-03-09T16:08:23.324846+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:23 vm01 bash[28152]: cluster 2026-03-09T16:08:22.845566+0000 mgr.y (mgr.14520) 561 : cluster [DBG] pgmap v975: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:23 vm01 bash[28152]: cluster 2026-03-09T16:08:22.845566+0000 mgr.y (mgr.14520) 561 : cluster [DBG] pgmap v975: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:23 vm01 bash[28152]: audit 2026-03-09T16:08:23.313952+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:23 vm01 bash[28152]: audit 2026-03-09T16:08:23.313952+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:23 vm01 bash[28152]: audit 2026-03-09T16:08:23.319887+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:23 vm01 bash[28152]: audit 2026-03-09T16:08:23.319887+0000 mon.c (mon.2) 583 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:23 vm01 bash[28152]: cluster 2026-03-09T16:08:23.324295+0000 mon.a (mon.0) 3422 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:23 vm01 bash[28152]: cluster 2026-03-09T16:08:23.324295+0000 mon.a (mon.0) 3422 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:23 vm01 bash[28152]: audit 2026-03-09T16:08:23.324846+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:23.693 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:23 vm01 bash[28152]: audit 2026-03-09T16:08:23.324846+0000 mon.a (mon.0) 3423 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]: dispatch 2026-03-09T16:08:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:24 vm09 bash[22983]: cluster 2026-03-09T16:08:24.314073+0000 mon.a (mon.0) 3424 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:24 vm09 bash[22983]: cluster 2026-03-09T16:08:24.314073+0000 mon.a (mon.0) 3424 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:24 vm09 bash[22983]: audit 2026-03-09T16:08:24.318435+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]': finished 2026-03-09T16:08:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:24 vm09 bash[22983]: audit 2026-03-09T16:08:24.318435+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]': finished 2026-03-09T16:08:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:24 vm09 bash[22983]: cluster 2026-03-09T16:08:24.323770+0000 mon.a (mon.0) 3426 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T16:08:24.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:24 vm09 bash[22983]: cluster 2026-03-09T16:08:24.323770+0000 mon.a (mon.0) 3426 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T16:08:24.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:24 vm01 bash[20728]: cluster 2026-03-09T16:08:24.314073+0000 mon.a (mon.0) 3424 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:24.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:24 vm01 bash[20728]: cluster 2026-03-09T16:08:24.314073+0000 mon.a (mon.0) 3424 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:24.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:24 vm01 bash[20728]: audit 2026-03-09T16:08:24.318435+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]': finished 2026-03-09T16:08:24.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:24 vm01 bash[20728]: audit 2026-03-09T16:08:24.318435+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]': finished 2026-03-09T16:08:24.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:24 vm01 bash[20728]: cluster 2026-03-09T16:08:24.323770+0000 mon.a (mon.0) 3426 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T16:08:24.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:24 vm01 bash[20728]: cluster 2026-03-09T16:08:24.323770+0000 mon.a (mon.0) 3426 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T16:08:24.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:24 vm01 bash[28152]: cluster 2026-03-09T16:08:24.314073+0000 mon.a (mon.0) 3424 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:24.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:24 vm01 bash[28152]: cluster 2026-03-09T16:08:24.314073+0000 mon.a (mon.0) 3424 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:24.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:24 vm01 bash[28152]: audit 2026-03-09T16:08:24.318435+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]': finished 2026-03-09T16:08:24.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:24 vm01 bash[28152]: audit 2026-03-09T16:08:24.318435+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-128", "mode": "writeback"}]': finished 2026-03-09T16:08:24.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:24 vm01 bash[28152]: cluster 2026-03-09T16:08:24.323770+0000 mon.a (mon.0) 3426 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T16:08:24.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:24 vm01 bash[28152]: cluster 2026-03-09T16:08:24.323770+0000 mon.a (mon.0) 3426 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-09T16:08:25.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:25 vm09 bash[22983]: cluster 2026-03-09T16:08:24.845945+0000 mgr.y (mgr.14520) 562 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:25.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:25 vm09 bash[22983]: cluster 2026-03-09T16:08:24.845945+0000 mgr.y (mgr.14520) 562 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:25.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:25 vm01 bash[28152]: cluster 2026-03-09T16:08:24.845945+0000 mgr.y (mgr.14520) 562 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:25.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:25 vm01 bash[28152]: cluster 2026-03-09T16:08:24.845945+0000 mgr.y (mgr.14520) 562 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:25.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:25 vm01 bash[20728]: cluster 2026-03-09T16:08:24.845945+0000 mgr.y (mgr.14520) 562 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T16:08:25.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:25 vm01 bash[20728]: cluster 2026-03-09T16:08:24.845945+0000 mgr.y (mgr.14520) 562 : cluster [DBG] pgmap v978: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:27.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:08:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:08:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:27 vm01 bash[28152]: audit 2026-03-09T16:08:26.787883+0000 mgr.y (mgr.14520) 563 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:27 vm01 bash[28152]: audit 2026-03-09T16:08:26.787883+0000 mgr.y (mgr.14520) 563 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:27 vm01 bash[28152]: cluster 2026-03-09T16:08:26.846288+0000 mgr.y (mgr.14520) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 923 B/s rd, 0 op/s 2026-03-09T16:08:28.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:27 vm01 bash[28152]: cluster 2026-03-09T16:08:26.846288+0000 mgr.y (mgr.14520) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 923 B/s rd, 0 op/s 2026-03-09T16:08:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:27 vm01 bash[20728]: audit 2026-03-09T16:08:26.787883+0000 mgr.y (mgr.14520) 563 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:27 vm01 bash[20728]: audit 2026-03-09T16:08:26.787883+0000 mgr.y (mgr.14520) 563 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:27 vm01 bash[20728]: cluster 2026-03-09T16:08:26.846288+0000 mgr.y (mgr.14520) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 923 B/s rd, 0 op/s 2026-03-09T16:08:28.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:27 vm01 bash[20728]: cluster 2026-03-09T16:08:26.846288+0000 mgr.y (mgr.14520) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 923 B/s rd, 0 op/s 2026-03-09T16:08:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:27 vm09 bash[22983]: audit 2026-03-09T16:08:26.787883+0000 mgr.y (mgr.14520) 563 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:27 vm09 bash[22983]: audit 2026-03-09T16:08:26.787883+0000 mgr.y (mgr.14520) 563 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:27 vm09 bash[22983]: cluster 2026-03-09T16:08:26.846288+0000 mgr.y (mgr.14520) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 923 B/s rd, 0 op/s 2026-03-09T16:08:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:27 vm09 bash[22983]: cluster 2026-03-09T16:08:26.846288+0000 mgr.y (mgr.14520) 564 : cluster [DBG] pgmap v979: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 923 B/s rd, 0 op/s 2026-03-09T16:08:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:29 vm01 bash[20728]: cluster 2026-03-09T16:08:28.846990+0000 mgr.y (mgr.14520) 565 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 940 B/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:08:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:29 vm01 bash[20728]: cluster 2026-03-09T16:08:28.846990+0000 mgr.y (mgr.14520) 565 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 940 B/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:08:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:29 vm01 bash[20728]: audit 2026-03-09T16:08:29.392807+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:29 vm01 bash[20728]: audit 2026-03-09T16:08:29.392807+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:29 vm01 bash[20728]: audit 2026-03-09T16:08:29.393226+0000 mon.a (mon.0) 3427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:29 vm01 bash[20728]: audit 2026-03-09T16:08:29.393226+0000 mon.a (mon.0) 3427 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:29 vm01 bash[20728]: audit 2026-03-09T16:08:29.449388+0000 mon.a (mon.0) 3428 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:29 vm01 bash[20728]: audit 2026-03-09T16:08:29.449388+0000 mon.a (mon.0) 3428 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:29 vm01 bash[20728]: audit 2026-03-09T16:08:29.450164+0000 mon.a (mon.0) 3429 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:30.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:29 vm01 bash[20728]: audit 2026-03-09T16:08:29.450164+0000 mon.a (mon.0) 3429 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:29 vm01 bash[28152]: cluster 2026-03-09T16:08:28.846990+0000 mgr.y (mgr.14520) 565 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 940 B/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:08:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:29 vm01 bash[28152]: cluster 2026-03-09T16:08:28.846990+0000 mgr.y (mgr.14520) 565 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 940 B/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:08:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:29 vm01 bash[28152]: audit 2026-03-09T16:08:29.392807+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:29 vm01 bash[28152]: audit 2026-03-09T16:08:29.392807+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:29 vm01 bash[28152]: audit 2026-03-09T16:08:29.393226+0000 mon.a (mon.0) 3427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:29 vm01 bash[28152]: audit 2026-03-09T16:08:29.393226+0000 mon.a (mon.0) 3427 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:29 vm01 bash[28152]: audit 2026-03-09T16:08:29.449388+0000 mon.a (mon.0) 3428 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:29 vm01 bash[28152]: audit 2026-03-09T16:08:29.449388+0000 mon.a (mon.0) 3428 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:29 vm01 bash[28152]: audit 2026-03-09T16:08:29.450164+0000 mon.a (mon.0) 3429 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:30.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:29 vm01 bash[28152]: audit 2026-03-09T16:08:29.450164+0000 mon.a (mon.0) 3429 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:29 vm09 bash[22983]: cluster 2026-03-09T16:08:28.846990+0000 mgr.y (mgr.14520) 565 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 940 B/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:08:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:29 vm09 bash[22983]: cluster 2026-03-09T16:08:28.846990+0000 mgr.y (mgr.14520) 565 : cluster [DBG] pgmap v980: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 940 B/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:08:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:29 vm09 bash[22983]: audit 2026-03-09T16:08:29.392807+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:29 vm09 bash[22983]: audit 2026-03-09T16:08:29.392807+0000 mon.c (mon.2) 584 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:29 vm09 bash[22983]: audit 2026-03-09T16:08:29.393226+0000 mon.a (mon.0) 3427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:29 vm09 bash[22983]: audit 2026-03-09T16:08:29.393226+0000 mon.a (mon.0) 3427 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:29 vm09 bash[22983]: audit 2026-03-09T16:08:29.449388+0000 mon.a (mon.0) 3428 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:29 vm09 bash[22983]: audit 2026-03-09T16:08:29.449388+0000 mon.a (mon.0) 3428 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:29 vm09 bash[22983]: audit 2026-03-09T16:08:29.450164+0000 mon.a (mon.0) 3429 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:29 vm09 bash[22983]: audit 2026-03-09T16:08:29.450164+0000 mon.a (mon.0) 3429 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:31.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:30 vm09 bash[22983]: audit 2026-03-09T16:08:29.912397+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:31.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:30 vm09 bash[22983]: audit 2026-03-09T16:08:29.912397+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:31.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:30 vm09 bash[22983]: audit 2026-03-09T16:08:29.922337+0000 mon.c (mon.2) 585 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:30 vm09 bash[22983]: audit 2026-03-09T16:08:29.922337+0000 mon.c (mon.2) 585 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:30 vm09 bash[22983]: cluster 2026-03-09T16:08:29.923272+0000 mon.a (mon.0) 3431 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T16:08:31.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:30 vm09 bash[22983]: cluster 2026-03-09T16:08:29.923272+0000 mon.a (mon.0) 3431 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T16:08:31.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:30 vm09 bash[22983]: audit 2026-03-09T16:08:29.924792+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:30 vm09 bash[22983]: audit 2026-03-09T16:08:29.924792+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:30 vm09 bash[22983]: cluster 2026-03-09T16:08:30.912464+0000 mon.a (mon.0) 3433 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:31.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:30 vm09 bash[22983]: cluster 2026-03-09T16:08:30.912464+0000 mon.a (mon.0) 3433 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:30 vm01 bash[20728]: audit 2026-03-09T16:08:29.912397+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:30 vm01 bash[20728]: audit 2026-03-09T16:08:29.912397+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:30 vm01 bash[20728]: audit 2026-03-09T16:08:29.922337+0000 mon.c (mon.2) 585 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:30 vm01 bash[20728]: audit 2026-03-09T16:08:29.922337+0000 mon.c (mon.2) 585 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:30 vm01 bash[20728]: cluster 2026-03-09T16:08:29.923272+0000 mon.a (mon.0) 3431 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:30 vm01 bash[20728]: cluster 2026-03-09T16:08:29.923272+0000 mon.a (mon.0) 3431 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:30 vm01 bash[20728]: audit 2026-03-09T16:08:29.924792+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:30 vm01 bash[20728]: audit 2026-03-09T16:08:29.924792+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:30 vm01 bash[20728]: cluster 2026-03-09T16:08:30.912464+0000 mon.a (mon.0) 3433 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:30 vm01 bash[20728]: cluster 2026-03-09T16:08:30.912464+0000 mon.a (mon.0) 3433 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:30 vm01 bash[28152]: audit 2026-03-09T16:08:29.912397+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:30 vm01 bash[28152]: audit 2026-03-09T16:08:29.912397+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:30 vm01 bash[28152]: audit 2026-03-09T16:08:29.922337+0000 mon.c (mon.2) 585 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:30 vm01 bash[28152]: audit 2026-03-09T16:08:29.922337+0000 mon.c (mon.2) 585 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:30 vm01 bash[28152]: cluster 2026-03-09T16:08:29.923272+0000 mon.a (mon.0) 3431 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T16:08:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:30 vm01 bash[28152]: cluster 2026-03-09T16:08:29.923272+0000 mon.a (mon.0) 3431 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in 2026-03-09T16:08:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:30 vm01 bash[28152]: audit 2026-03-09T16:08:29.924792+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:30 vm01 bash[28152]: audit 2026-03-09T16:08:29.924792+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]: dispatch 2026-03-09T16:08:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:30 vm01 bash[28152]: cluster 2026-03-09T16:08:30.912464+0000 mon.a (mon.0) 3433 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:31.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:30 vm01 bash[28152]: cluster 2026-03-09T16:08:30.912464+0000 mon.a (mon.0) 3433 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:32 vm09 bash[22983]: cluster 2026-03-09T16:08:30.847367+0000 mgr.y (mgr.14520) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 271 B/s wr, 1 op/s 2026-03-09T16:08:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:32 vm09 bash[22983]: cluster 2026-03-09T16:08:30.847367+0000 mgr.y (mgr.14520) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 271 B/s wr, 1 op/s 2026-03-09T16:08:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:32 vm09 bash[22983]: audit 2026-03-09T16:08:30.916709+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:32 vm09 bash[22983]: audit 2026-03-09T16:08:30.916709+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:32 vm09 bash[22983]: cluster 2026-03-09T16:08:30.923210+0000 mon.a (mon.0) 3435 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T16:08:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:32 vm09 bash[22983]: cluster 2026-03-09T16:08:30.923210+0000 mon.a (mon.0) 3435 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:32 vm01 bash[28152]: cluster 2026-03-09T16:08:30.847367+0000 mgr.y (mgr.14520) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 271 B/s wr, 1 op/s 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:32 vm01 bash[28152]: cluster 2026-03-09T16:08:30.847367+0000 mgr.y (mgr.14520) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 271 B/s wr, 1 op/s 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:32 vm01 bash[28152]: audit 2026-03-09T16:08:30.916709+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:32 vm01 bash[28152]: audit 2026-03-09T16:08:30.916709+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:32 vm01 bash[28152]: cluster 2026-03-09T16:08:30.923210+0000 mon.a (mon.0) 3435 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:32 vm01 bash[28152]: cluster 2026-03-09T16:08:30.923210+0000 mon.a (mon.0) 3435 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:32 vm01 bash[20728]: cluster 2026-03-09T16:08:30.847367+0000 mgr.y (mgr.14520) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 271 B/s wr, 1 op/s 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:32 vm01 bash[20728]: cluster 2026-03-09T16:08:30.847367+0000 mgr.y (mgr.14520) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 271 B/s wr, 1 op/s 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:32 vm01 bash[20728]: audit 2026-03-09T16:08:30.916709+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:32 vm01 bash[20728]: audit 2026-03-09T16:08:30.916709+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-128"}]': finished 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:32 vm01 bash[20728]: cluster 2026-03-09T16:08:30.923210+0000 mon.a (mon.0) 3435 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T16:08:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:32 vm01 bash[20728]: cluster 2026-03-09T16:08:30.923210+0000 mon.a (mon.0) 3435 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-09T16:08:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:33 vm01 bash[28152]: cluster 2026-03-09T16:08:31.986346+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T16:08:33.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:33 vm01 bash[28152]: cluster 2026-03-09T16:08:31.986346+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T16:08:33.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:08:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:08:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:08:33.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:33 vm01 bash[20728]: cluster 2026-03-09T16:08:31.986346+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T16:08:33.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:33 vm01 bash[20728]: cluster 2026-03-09T16:08:31.986346+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T16:08:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:33 vm09 bash[22983]: cluster 2026-03-09T16:08:31.986346+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T16:08:33.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 
09 16:08:33 vm09 bash[22983]: cluster 2026-03-09T16:08:31.986346+0000 mon.a (mon.0) 3436 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-09T16:08:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:34 vm09 bash[22983]: cluster 2026-03-09T16:08:32.847663+0000 mgr.y (mgr.14520) 567 : cluster [DBG] pgmap v985: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:08:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:34 vm09 bash[22983]: cluster 2026-03-09T16:08:32.847663+0000 mgr.y (mgr.14520) 567 : cluster [DBG] pgmap v985: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:08:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:34 vm09 bash[22983]: cluster 2026-03-09T16:08:33.023982+0000 mon.a (mon.0) 3437 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T16:08:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:34 vm09 bash[22983]: cluster 2026-03-09T16:08:33.023982+0000 mon.a (mon.0) 3437 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T16:08:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:34 vm09 bash[22983]: audit 2026-03-09T16:08:33.049922+0000 mon.c (mon.2) 586 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:34 vm09 bash[22983]: audit 2026-03-09T16:08:33.049922+0000 mon.c (mon.2) 586 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:34 vm09 bash[22983]: audit 2026-03-09T16:08:33.050193+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:34 vm09 bash[22983]: audit 2026-03-09T16:08:33.050193+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:34 vm01 bash[28152]: cluster 2026-03-09T16:08:32.847663+0000 mgr.y (mgr.14520) 567 : cluster [DBG] pgmap v985: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:34 vm01 bash[28152]: cluster 2026-03-09T16:08:32.847663+0000 mgr.y (mgr.14520) 567 : cluster [DBG] pgmap v985: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:34 vm01 bash[28152]: cluster 2026-03-09T16:08:33.023982+0000 mon.a (mon.0) 3437 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:34 vm01 bash[28152]: cluster 2026-03-09T16:08:33.023982+0000 mon.a (mon.0) 3437 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:34 vm01 bash[28152]: audit 2026-03-09T16:08:33.049922+0000 mon.c (mon.2) 586 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:34 vm01 bash[28152]: audit 2026-03-09T16:08:33.049922+0000 mon.c (mon.2) 586 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:34 vm01 bash[28152]: audit 2026-03-09T16:08:33.050193+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:34 vm01 bash[28152]: audit 2026-03-09T16:08:33.050193+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:34 vm01 bash[20728]: cluster 2026-03-09T16:08:32.847663+0000 mgr.y (mgr.14520) 567 : cluster [DBG] pgmap v985: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:34 vm01 bash[20728]: cluster 2026-03-09T16:08:32.847663+0000 mgr.y (mgr.14520) 567 : cluster [DBG] pgmap v985: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:34 vm01 bash[20728]: cluster 2026-03-09T16:08:33.023982+0000 mon.a (mon.0) 3437 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T16:08:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:34 vm01 bash[20728]: cluster 2026-03-09T16:08:33.023982+0000 mon.a (mon.0) 3437 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-09T16:08:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:34 vm01 bash[20728]: audit 2026-03-09T16:08:33.049922+0000 mon.c (mon.2) 586 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:34 vm01 bash[20728]: audit 2026-03-09T16:08:33.049922+0000 mon.c (mon.2) 586 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:34 vm01 bash[20728]: audit 2026-03-09T16:08:33.050193+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:34.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:34 vm01 bash[20728]: audit 2026-03-09T16:08:33.050193+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:34.020404+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:34.020404+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:34.036348+0000 mon.c (mon.2) 587 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:34.036348+0000 mon.c (mon.2) 587 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: cluster 2026-03-09T16:08:34.038263+0000 mon.a (mon.0) 3440 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: cluster 2026-03-09T16:08:34.038263+0000 mon.a (mon.0) 3440 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:34.047751+0000 mon.a (mon.0) 3441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:34.047751+0000 mon.a (mon.0) 3441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:35.025044+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:35.025044+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:35.028776+0000 mon.c (mon.2) 588 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:35.028776+0000 mon.c (mon.2) 588 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: cluster 2026-03-09T16:08:35.037848+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: cluster 2026-03-09T16:08:35.037848+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:35.038617+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:35 vm09 bash[22983]: audit 2026-03-09T16:08:35.038617+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:34.020404+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:34.020404+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:34.036348+0000 mon.c (mon.2) 587 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:34.036348+0000 mon.c (mon.2) 587 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: cluster 2026-03-09T16:08:34.038263+0000 mon.a (mon.0) 3440 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: cluster 2026-03-09T16:08:34.038263+0000 mon.a (mon.0) 3440 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:34.047751+0000 mon.a (mon.0) 3441 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:34.047751+0000 mon.a (mon.0) 3441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:35.025044+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:35.025044+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:35.028776+0000 mon.c (mon.2) 588 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:35.028776+0000 mon.c (mon.2) 588 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: cluster 2026-03-09T16:08:35.037848+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: cluster 2026-03-09T16:08:35.037848+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:35.038617+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:35 vm01 bash[28152]: audit 2026-03-09T16:08:35.038617+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:34.020404+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:34.020404+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:34.036348+0000 mon.c (mon.2) 587 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:34.036348+0000 mon.c (mon.2) 587 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: cluster 2026-03-09T16:08:34.038263+0000 mon.a (mon.0) 3440 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: cluster 2026-03-09T16:08:34.038263+0000 mon.a (mon.0) 3440 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:34.047751+0000 mon.a (mon.0) 3441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:34.047751+0000 mon.a (mon.0) 3441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:35.025044+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:35.025044+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:35.028776+0000 mon.c (mon.2) 588 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:35.028776+0000 mon.c (mon.2) 588 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: cluster 2026-03-09T16:08:35.037848+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: cluster 2026-03-09T16:08:35.037848+0000 mon.a (mon.0) 3443 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:35.038617+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:35.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:35 vm01 bash[20728]: audit 2026-03-09T16:08:35.038617+0000 mon.a (mon.0) 3444 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:36 vm09 bash[22983]: cluster 2026-03-09T16:08:34.848237+0000 mgr.y (mgr.14520) 568 : cluster [DBG] pgmap v988: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:36 vm09 bash[22983]: cluster 2026-03-09T16:08:34.848237+0000 mgr.y (mgr.14520) 568 : cluster [DBG] pgmap v988: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:36 vm09 bash[22983]: audit 2026-03-09T16:08:36.028592+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:36 vm09 bash[22983]: audit 2026-03-09T16:08:36.028592+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:36 vm09 bash[22983]: audit 2026-03-09T16:08:36.034224+0000 mon.c (mon.2) 589 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:36 vm09 bash[22983]: audit 2026-03-09T16:08:36.034224+0000 mon.c (mon.2) 589 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:36 vm09 bash[22983]: cluster 2026-03-09T16:08:36.043839+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T16:08:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:36 vm09 bash[22983]: cluster 2026-03-09T16:08:36.043839+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T16:08:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:36 vm09 bash[22983]: audit 2026-03-09T16:08:36.044383+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:36 vm09 bash[22983]: audit 2026-03-09T16:08:36.044383+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:36 vm01 bash[28152]: cluster 2026-03-09T16:08:34.848237+0000 mgr.y (mgr.14520) 568 : cluster [DBG] pgmap v988: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:36 vm01 bash[28152]: cluster 2026-03-09T16:08:34.848237+0000 mgr.y (mgr.14520) 568 : cluster [DBG] pgmap v988: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:36 vm01 bash[28152]: audit 2026-03-09T16:08:36.028592+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:36 vm01 bash[28152]: audit 2026-03-09T16:08:36.028592+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:36 vm01 bash[28152]: audit 2026-03-09T16:08:36.034224+0000 mon.c (mon.2) 589 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:36 vm01 bash[28152]: audit 2026-03-09T16:08:36.034224+0000 mon.c (mon.2) 589 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:36 vm01 bash[28152]: cluster 2026-03-09T16:08:36.043839+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T16:08:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:36 vm01 bash[28152]: cluster 2026-03-09T16:08:36.043839+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T16:08:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:36 vm01 bash[28152]: audit 2026-03-09T16:08:36.044383+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:36 vm01 bash[28152]: audit 2026-03-09T16:08:36.044383+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:36 vm01 bash[20728]: cluster 2026-03-09T16:08:34.848237+0000 mgr.y (mgr.14520) 568 : cluster [DBG] pgmap v988: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:36 vm01 bash[20728]: cluster 2026-03-09T16:08:34.848237+0000 mgr.y (mgr.14520) 568 : cluster [DBG] pgmap v988: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:36 vm01 bash[20728]: audit 2026-03-09T16:08:36.028592+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:36 vm01 bash[20728]: audit 2026-03-09T16:08:36.028592+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:36 vm01 bash[20728]: audit 2026-03-09T16:08:36.034224+0000 mon.c (mon.2) 589 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:36 vm01 bash[20728]: audit 2026-03-09T16:08:36.034224+0000 mon.c (mon.2) 589 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:36 vm01 bash[20728]: cluster 2026-03-09T16:08:36.043839+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T16:08:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:36 vm01 bash[20728]: cluster 2026-03-09T16:08:36.043839+0000 mon.a (mon.0) 3446 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-09T16:08:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:36 vm01 bash[20728]: audit 2026-03-09T16:08:36.044383+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:36 vm01 bash[20728]: audit 2026-03-09T16:08:36.044383+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]: dispatch 2026-03-09T16:08:37.076 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:08:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:08:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:37 vm09 bash[22983]: cluster 2026-03-09T16:08:37.028830+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:37 vm09 bash[22983]: cluster 2026-03-09T16:08:37.028830+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:37 vm09 bash[22983]: audit 2026-03-09T16:08:37.056136+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]': finished 2026-03-09T16:08:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:37 vm09 bash[22983]: audit 2026-03-09T16:08:37.056136+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]': finished 2026-03-09T16:08:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:37 vm09 bash[22983]: cluster 2026-03-09T16:08:37.061459+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T16:08:37.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:37 vm09 bash[22983]: cluster 2026-03-09T16:08:37.061459+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T16:08:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:37 vm01 bash[28152]: cluster 2026-03-09T16:08:37.028830+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:37 vm01 bash[28152]: cluster 2026-03-09T16:08:37.028830+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:37 vm01 bash[28152]: audit 2026-03-09T16:08:37.056136+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]': finished 2026-03-09T16:08:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:37 vm01 bash[28152]: audit 2026-03-09T16:08:37.056136+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]': finished 2026-03-09T16:08:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:37 vm01 bash[28152]: cluster 2026-03-09T16:08:37.061459+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T16:08:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:37 vm01 bash[28152]: cluster 2026-03-09T16:08:37.061459+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T16:08:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:37 vm01 bash[20728]: cluster 2026-03-09T16:08:37.028830+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:37 vm01 bash[20728]: cluster 2026-03-09T16:08:37.028830+0000 mon.a (mon.0) 3448 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:37 vm01 bash[20728]: audit 2026-03-09T16:08:37.056136+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]': finished 2026-03-09T16:08:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:37 vm01 bash[20728]: audit 2026-03-09T16:08:37.056136+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-130", "mode": "writeback"}]': finished 2026-03-09T16:08:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:37 vm01 bash[20728]: cluster 2026-03-09T16:08:37.061459+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T16:08:37.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:37 vm01 bash[20728]: cluster 2026-03-09T16:08:37.061459+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:38 vm01 bash[28152]: audit 2026-03-09T16:08:36.798711+0000 mgr.y (mgr.14520) 569 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:38 vm01 bash[28152]: audit 2026-03-09T16:08:36.798711+0000 mgr.y (mgr.14520) 569 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:38 vm01 bash[28152]: cluster 2026-03-09T16:08:36.848656+0000 mgr.y (mgr.14520) 570 : cluster [DBG] pgmap v991: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:38 vm01 bash[28152]: cluster 2026-03-09T16:08:36.848656+0000 mgr.y (mgr.14520) 570 : cluster [DBG] pgmap v991: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:38 vm01 bash[28152]: audit 2026-03-09T16:08:37.159632+0000 mon.c (mon.2) 590 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:38 vm01 bash[28152]: audit 2026-03-09T16:08:37.159632+0000 mon.c (mon.2) 590 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:38 vm01 bash[28152]: audit 2026-03-09T16:08:37.160408+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:38 vm01 bash[28152]: audit 2026-03-09T16:08:37.160408+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:38 vm01 bash[20728]: audit 2026-03-09T16:08:36.798711+0000 mgr.y (mgr.14520) 569 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:38 vm01 bash[20728]: audit 2026-03-09T16:08:36.798711+0000 mgr.y (mgr.14520) 569 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:38 vm01 bash[20728]: cluster 2026-03-09T16:08:36.848656+0000 mgr.y (mgr.14520) 570 : cluster [DBG] pgmap v991: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:38 vm01 bash[20728]: cluster 2026-03-09T16:08:36.848656+0000 mgr.y (mgr.14520) 570 : cluster [DBG] pgmap v991: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:38 vm01 bash[20728]: audit 2026-03-09T16:08:37.159632+0000 mon.c (mon.2) 590 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:38 vm01 bash[20728]: audit 2026-03-09T16:08:37.159632+0000 mon.c (mon.2) 590 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:38 vm01 bash[20728]: audit 2026-03-09T16:08:37.160408+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:38.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:38 vm01 bash[20728]: audit 2026-03-09T16:08:37.160408+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:38 vm09 bash[22983]: audit 2026-03-09T16:08:36.798711+0000 mgr.y (mgr.14520) 569 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:38 vm09 bash[22983]: audit 2026-03-09T16:08:36.798711+0000 mgr.y (mgr.14520) 569 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:38 vm09 bash[22983]: cluster 2026-03-09T16:08:36.848656+0000 mgr.y (mgr.14520) 570 : cluster [DBG] pgmap v991: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:38 vm09 bash[22983]: cluster 2026-03-09T16:08:36.848656+0000 mgr.y (mgr.14520) 570 : cluster [DBG] pgmap v991: 268 pgs: 18 creating+peering, 250 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:38 vm09 bash[22983]: audit 2026-03-09T16:08:37.159632+0000 mon.c (mon.2) 590 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:38 vm09 bash[22983]: audit 2026-03-09T16:08:37.159632+0000 mon.c (mon.2) 590 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:38 vm09 bash[22983]: audit 2026-03-09T16:08:37.160408+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:38.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:38 vm09 bash[22983]: audit 2026-03-09T16:08:37.160408+0000 mon.a (mon.0) 3451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: audit 2026-03-09T16:08:38.124220+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: audit 2026-03-09T16:08:38.124220+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: audit 2026-03-09T16:08:38.131130+0000 mon.c (mon.2) 591 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: audit 2026-03-09T16:08:38.131130+0000 mon.c (mon.2) 591 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: cluster 2026-03-09T16:08:38.137666+0000 mon.a (mon.0) 3453 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: cluster 2026-03-09T16:08:38.137666+0000 mon.a (mon.0) 3453 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: audit 2026-03-09T16:08:38.139338+0000 mon.a (mon.0) 3454 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: audit 2026-03-09T16:08:38.139338+0000 mon.a (mon.0) 3454 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: cluster 2026-03-09T16:08:38.849250+0000 mgr.y (mgr.14520) 571 : cluster [DBG] pgmap v994: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: cluster 2026-03-09T16:08:38.849250+0000 mgr.y (mgr.14520) 571 : cluster [DBG] pgmap v994: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: cluster 2026-03-09T16:08:39.124260+0000 mon.a (mon.0) 3455 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: cluster 2026-03-09T16:08:39.124260+0000 mon.a (mon.0) 3455 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: audit 2026-03-09T16:08:39.127620+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: audit 2026-03-09T16:08:39.127620+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: cluster 2026-03-09T16:08:39.135739+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T16:08:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:39 vm01 bash[28152]: cluster 2026-03-09T16:08:39.135739+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: audit 2026-03-09T16:08:38.124220+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: audit 2026-03-09T16:08:38.124220+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: audit 2026-03-09T16:08:38.131130+0000 mon.c (mon.2) 591 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: audit 2026-03-09T16:08:38.131130+0000 mon.c (mon.2) 591 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: cluster 2026-03-09T16:08:38.137666+0000 mon.a (mon.0) 3453 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: cluster 2026-03-09T16:08:38.137666+0000 mon.a (mon.0) 3453 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: audit 2026-03-09T16:08:38.139338+0000 mon.a (mon.0) 3454 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: audit 2026-03-09T16:08:38.139338+0000 mon.a (mon.0) 3454 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: cluster 2026-03-09T16:08:38.849250+0000 mgr.y (mgr.14520) 571 : cluster [DBG] pgmap v994: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: cluster 2026-03-09T16:08:38.849250+0000 mgr.y (mgr.14520) 571 : cluster [DBG] pgmap v994: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: cluster 2026-03-09T16:08:39.124260+0000 mon.a (mon.0) 3455 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: cluster 2026-03-09T16:08:39.124260+0000 mon.a (mon.0) 3455 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: audit 2026-03-09T16:08:39.127620+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: audit 2026-03-09T16:08:39.127620+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: cluster 2026-03-09T16:08:39.135739+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T16:08:39.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:39 vm01 bash[20728]: cluster 2026-03-09T16:08:39.135739+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T16:08:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: audit 2026-03-09T16:08:38.124220+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: audit 2026-03-09T16:08:38.124220+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: audit 2026-03-09T16:08:38.131130+0000 mon.c (mon.2) 591 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: audit 2026-03-09T16:08:38.131130+0000 mon.c (mon.2) 591 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: cluster 2026-03-09T16:08:38.137666+0000 mon.a (mon.0) 3453 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: cluster 2026-03-09T16:08:38.137666+0000 mon.a (mon.0) 3453 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: audit 2026-03-09T16:08:38.139338+0000 mon.a (mon.0) 3454 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: audit 2026-03-09T16:08:38.139338+0000 mon.a (mon.0) 3454 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]: dispatch 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: cluster 2026-03-09T16:08:38.849250+0000 mgr.y (mgr.14520) 571 : cluster [DBG] pgmap v994: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: cluster 2026-03-09T16:08:38.849250+0000 mgr.y (mgr.14520) 571 : cluster [DBG] pgmap v994: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: cluster 2026-03-09T16:08:39.124260+0000 mon.a (mon.0) 3455 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: cluster 2026-03-09T16:08:39.124260+0000 mon.a (mon.0) 3455 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: audit 2026-03-09T16:08:39.127620+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: audit 2026-03-09T16:08:39.127620+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-130"}]': finished 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: cluster 2026-03-09T16:08:39.135739+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T16:08:39.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:39 vm09 bash[22983]: cluster 2026-03-09T16:08:39.135739+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-09T16:08:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:41 vm01 bash[28152]: cluster 2026-03-09T16:08:40.164713+0000 mon.a (mon.0) 3458 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T16:08:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:41 vm01 bash[28152]: cluster 2026-03-09T16:08:40.164713+0000 mon.a (mon.0) 3458 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T16:08:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:41 vm01 bash[28152]: cluster 2026-03-09T16:08:40.849597+0000 mgr.y (mgr.14520) 572 : cluster [DBG] pgmap v997: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:41 vm01 bash[28152]: cluster 2026-03-09T16:08:40.849597+0000 mgr.y (mgr.14520) 572 : cluster [DBG] pgmap v997: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:41 vm01 bash[20728]: cluster 2026-03-09T16:08:40.164713+0000 mon.a (mon.0) 3458 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T16:08:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:41 vm01 bash[20728]: cluster 2026-03-09T16:08:40.164713+0000 mon.a (mon.0) 3458 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T16:08:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:41 vm01 bash[20728]: cluster 2026-03-09T16:08:40.849597+0000 mgr.y (mgr.14520) 572 : cluster [DBG] pgmap v997: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:41 vm01 bash[20728]: cluster 2026-03-09T16:08:40.849597+0000 mgr.y (mgr.14520) 572 : cluster [DBG] pgmap v997: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:41.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:41 vm09 bash[22983]: cluster 2026-03-09T16:08:40.164713+0000 mon.a (mon.0) 3458 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T16:08:41.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:41 vm09 bash[22983]: cluster 2026-03-09T16:08:40.164713+0000 mon.a (mon.0) 3458 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-09T16:08:41.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:41 vm09 bash[22983]: cluster 2026-03-09T16:08:40.849597+0000 mgr.y (mgr.14520) 572 : cluster [DBG] pgmap v997: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:41.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:41 vm09 bash[22983]: cluster 2026-03-09T16:08:40.849597+0000 mgr.y (mgr.14520) 572 : cluster [DBG] pgmap v997: 236 pgs: 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T16:08:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:42 vm09 bash[22983]: cluster 2026-03-09T16:08:41.187682+0000 mon.a (mon.0) 3459 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T16:08:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:42 vm09 bash[22983]: cluster 2026-03-09T16:08:41.187682+0000 mon.a (mon.0) 3459 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T16:08:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:42 vm09 bash[22983]: audit 2026-03-09T16:08:41.190053+0000 mon.c (mon.2) 592 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:42 vm09 bash[22983]: audit 2026-03-09T16:08:41.190053+0000 mon.c (mon.2) 592 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:42 vm09 bash[22983]: audit 2026-03-09T16:08:41.191220+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:42 vm09 bash[22983]: audit 2026-03-09T16:08:41.191220+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:42 vm09 bash[22983]: audit 2026-03-09T16:08:42.066215+0000 mon.a (mon.0) 3461 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:08:42.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:42 vm09 bash[22983]: audit 2026-03-09T16:08:42.066215+0000 mon.a (mon.0) 3461 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:08:42.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:42 vm01 bash[20728]: cluster 2026-03-09T16:08:41.187682+0000 mon.a (mon.0) 3459 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T16:08:42.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:42 vm01 bash[20728]: cluster 2026-03-09T16:08:41.187682+0000 mon.a (mon.0) 3459 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T16:08:42.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:42 vm01 bash[20728]: audit 2026-03-09T16:08:41.190053+0000 mon.c (mon.2) 592 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:42 vm01 bash[20728]: audit 2026-03-09T16:08:41.190053+0000 mon.c (mon.2) 592 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:42 vm01 bash[20728]: audit 2026-03-09T16:08:41.191220+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:42 vm01 bash[20728]: audit 2026-03-09T16:08:41.191220+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:42 vm01 bash[20728]: audit 2026-03-09T16:08:42.066215+0000 mon.a (mon.0) 3461 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:08:42.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:42 vm01 bash[20728]: audit 2026-03-09T16:08:42.066215+0000 mon.a (mon.0) 3461 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:08:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:42 vm01 bash[28152]: cluster 2026-03-09T16:08:41.187682+0000 mon.a (mon.0) 3459 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T16:08:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:42 vm01 bash[28152]: cluster 2026-03-09T16:08:41.187682+0000 mon.a (mon.0) 3459 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-09T16:08:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:42 vm01 bash[28152]: audit 2026-03-09T16:08:41.190053+0000 mon.c (mon.2) 592 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:42 vm01 bash[28152]: audit 2026-03-09T16:08:41.190053+0000 mon.c (mon.2) 592 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:42 vm01 bash[28152]: audit 2026-03-09T16:08:41.191220+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:42 vm01 bash[28152]: audit 2026-03-09T16:08:41.191220+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:42 vm01 bash[28152]: audit 2026-03-09T16:08:42.066215+0000 mon.a (mon.0) 3461 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:08:42.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:42 vm01 bash[28152]: audit 2026-03-09T16:08:42.066215+0000 mon.a (mon.0) 3461 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:08:43.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:08:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:08:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:08:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.196756+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.196756+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: cluster 2026-03-09T16:08:42.200380+0000 mon.a (mon.0) 3463 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: cluster 2026-03-09T16:08:42.200380+0000 mon.a (mon.0) 3463 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.263243+0000 mon.c (mon.2) 593 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.263243+0000 mon.c (mon.2) 593 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.263844+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.263844+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.435835+0000 mon.a (mon.0) 3465 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.435835+0000 mon.a (mon.0) 3465 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.444381+0000 mon.a (mon.0) 3466 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.444381+0000 mon.a (mon.0) 3466 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.780303+0000 mon.a (mon.0) 3467 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.780303+0000 mon.a (mon.0) 3467 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.781177+0000 mon.a (mon.0) 3468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.781177+0000 mon.a (mon.0) 3468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.788080+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: audit 2026-03-09T16:08:42.788080+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: cluster 2026-03-09T16:08:42.850080+0000 mgr.y (mgr.14520) 573 : cluster [DBG] pgmap v1000: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:43 vm09 bash[22983]: cluster 2026-03-09T16:08:42.850080+0000 mgr.y (mgr.14520) 573 : cluster [DBG] pgmap v1000: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.196756+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.196756+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: cluster 2026-03-09T16:08:42.200380+0000 mon.a (mon.0) 3463 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T16:08:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: cluster 2026-03-09T16:08:42.200380+0000 mon.a (mon.0) 3463 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.263243+0000 mon.c (mon.2) 593 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.263243+0000 mon.c (mon.2) 593 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.263844+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.263844+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.435835+0000 mon.a (mon.0) 3465 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.435835+0000 mon.a (mon.0) 3465 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.444381+0000 mon.a (mon.0) 3466 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.444381+0000 mon.a (mon.0) 3466 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.780303+0000 mon.a (mon.0) 3467 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.780303+0000 mon.a (mon.0) 3467 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.781177+0000 mon.a (mon.0) 3468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.781177+0000 mon.a (mon.0) 3468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.788080+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: audit 2026-03-09T16:08:42.788080+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: cluster 2026-03-09T16:08:42.850080+0000 mgr.y (mgr.14520) 573 : cluster [DBG] pgmap v1000: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:43 vm01 bash[28152]: cluster 2026-03-09T16:08:42.850080+0000 mgr.y (mgr.14520) 573 : cluster [DBG] pgmap v1000: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.196756+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.196756+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: cluster 2026-03-09T16:08:42.200380+0000 mon.a (mon.0) 3463 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: cluster 2026-03-09T16:08:42.200380+0000 mon.a (mon.0) 3463 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.263243+0000 mon.c (mon.2) 593 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.263243+0000 mon.c (mon.2) 593 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.263844+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.263844+0000 mon.a (mon.0) 3464 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.435835+0000 mon.a (mon.0) 3465 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.435835+0000 mon.a (mon.0) 3465 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.444381+0000 mon.a (mon.0) 3466 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.444381+0000 mon.a (mon.0) 3466 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.780303+0000 mon.a (mon.0) 3467 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.780303+0000 mon.a (mon.0) 3467 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.781177+0000 mon.a (mon.0) 3468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.781177+0000 mon.a (mon.0) 3468 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.788080+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: audit 2026-03-09T16:08:42.788080+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: cluster 2026-03-09T16:08:42.850080+0000 mgr.y (mgr.14520) 573 : cluster [DBG] pgmap v1000: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:43 vm01 bash[20728]: cluster 2026-03-09T16:08:42.850080+0000 mgr.y (mgr.14520) 573 : cluster [DBG] pgmap v1000: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:08:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:44 vm09 bash[22983]: audit 2026-03-09T16:08:43.239683+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:44 vm09 bash[22983]: audit 2026-03-09T16:08:43.239683+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:44 vm09 bash[22983]: cluster 2026-03-09T16:08:43.245722+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T16:08:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:44 vm09 bash[22983]: cluster 2026-03-09T16:08:43.245722+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T16:08:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:44 vm09 bash[22983]: audit 2026-03-09T16:08:43.253100+0000 mon.c (mon.2) 594 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:44 vm09 bash[22983]: audit 2026-03-09T16:08:43.253100+0000 mon.c (mon.2) 594 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:44 vm09 bash[22983]: audit 2026-03-09T16:08:43.256737+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:44 vm09 bash[22983]: audit 2026-03-09T16:08:43.256737+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:44 vm01 bash[28152]: audit 2026-03-09T16:08:43.239683+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:44 vm01 bash[28152]: audit 2026-03-09T16:08:43.239683+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:44 vm01 bash[28152]: cluster 2026-03-09T16:08:43.245722+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:44 vm01 bash[28152]: cluster 2026-03-09T16:08:43.245722+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:44 vm01 bash[28152]: audit 2026-03-09T16:08:43.253100+0000 mon.c (mon.2) 594 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:44 vm01 bash[28152]: audit 2026-03-09T16:08:43.253100+0000 mon.c (mon.2) 594 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:44 vm01 bash[28152]: audit 2026-03-09T16:08:43.256737+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:44 vm01 bash[28152]: audit 2026-03-09T16:08:43.256737+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:44 vm01 bash[20728]: audit 2026-03-09T16:08:43.239683+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:44 vm01 bash[20728]: audit 2026-03-09T16:08:43.239683+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:44 vm01 bash[20728]: cluster 2026-03-09T16:08:43.245722+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T16:08:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:44 vm01 bash[20728]: cluster 2026-03-09T16:08:43.245722+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-09T16:08:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:44 vm01 bash[20728]: audit 2026-03-09T16:08:43.253100+0000 mon.c (mon.2) 594 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:44 vm01 bash[20728]: audit 2026-03-09T16:08:43.253100+0000 mon.c (mon.2) 594 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:44 vm01 bash[20728]: audit 2026-03-09T16:08:43.256737+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:44 vm01 bash[20728]: audit 2026-03-09T16:08:43.256737+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:44.282321+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:44.282321+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:44.296419+0000 mon.c (mon.2) 595 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:44.296419+0000 mon.c (mon.2) 595 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: cluster 2026-03-09T16:08:44.298937+0000 mon.a (mon.0) 3474 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: cluster 2026-03-09T16:08:44.298937+0000 mon.a (mon.0) 3474 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:44.300578+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:44.300578+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:44.464370+0000 mon.a (mon.0) 3476 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:44.464370+0000 mon.a (mon.0) 3476 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:44.465329+0000 mon.a (mon.0) 3477 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:44.465329+0000 mon.a (mon.0) 3477 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: cluster 2026-03-09T16:08:44.850554+0000 mgr.y (mgr.14520) 574 : cluster [DBG] pgmap v1003: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: cluster 2026-03-09T16:08:44.850554+0000 mgr.y (mgr.14520) 574 : cluster [DBG] pgmap v1003: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: cluster 2026-03-09T16:08:45.282530+0000 mon.a (mon.0) 3478 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: cluster 2026-03-09T16:08:45.282530+0000 mon.a (mon.0) 3478 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:45.285639+0000 mon.a (mon.0) 3479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]': finished 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: audit 2026-03-09T16:08:45.285639+0000 mon.a (mon.0) 3479 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]': finished 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: cluster 2026-03-09T16:08:45.299664+0000 mon.a (mon.0) 3480 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T16:08:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:45 vm09 bash[22983]: cluster 2026-03-09T16:08:45.299664+0000 mon.a (mon.0) 3480 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T16:08:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:44.282321+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:44.282321+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:44.296419+0000 mon.c (mon.2) 595 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:44.296419+0000 mon.c (mon.2) 595 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: cluster 2026-03-09T16:08:44.298937+0000 mon.a (mon.0) 3474 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: cluster 2026-03-09T16:08:44.298937+0000 mon.a (mon.0) 3474 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:44.300578+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:44.300578+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:44.464370+0000 mon.a (mon.0) 3476 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:44.464370+0000 mon.a (mon.0) 3476 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:44.465329+0000 mon.a (mon.0) 3477 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:44.465329+0000 mon.a (mon.0) 3477 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: cluster 2026-03-09T16:08:44.850554+0000 mgr.y (mgr.14520) 574 : cluster [DBG] pgmap v1003: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: cluster 2026-03-09T16:08:44.850554+0000 mgr.y (mgr.14520) 574 : cluster [DBG] pgmap v1003: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: cluster 2026-03-09T16:08:45.282530+0000 mon.a (mon.0) 3478 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: cluster 2026-03-09T16:08:45.282530+0000 mon.a (mon.0) 3478 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:45.285639+0000 mon.a (mon.0) 3479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]': finished 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: audit 2026-03-09T16:08:45.285639+0000 mon.a (mon.0) 3479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]': finished 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: cluster 2026-03-09T16:08:45.299664+0000 mon.a (mon.0) 3480 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:45 vm01 bash[28152]: cluster 2026-03-09T16:08:45.299664+0000 mon.a (mon.0) 3480 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:44.282321+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:44.282321+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:44.296419+0000 mon.c (mon.2) 595 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:44.296419+0000 mon.c (mon.2) 595 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: cluster 2026-03-09T16:08:44.298937+0000 mon.a (mon.0) 3474 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: cluster 2026-03-09T16:08:44.298937+0000 mon.a (mon.0) 3474 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:44.300578+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:44.300578+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:44.464370+0000 mon.a (mon.0) 3476 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:44.464370+0000 mon.a (mon.0) 3476 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:44.465329+0000 mon.a (mon.0) 3477 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:44.465329+0000 mon.a (mon.0) 3477 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: cluster 2026-03-09T16:08:44.850554+0000 mgr.y (mgr.14520) 574 : cluster [DBG] pgmap v1003: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: cluster 2026-03-09T16:08:44.850554+0000 mgr.y (mgr.14520) 574 : cluster [DBG] pgmap v1003: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: cluster 2026-03-09T16:08:45.282530+0000 mon.a (mon.0) 3478 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: cluster 2026-03-09T16:08:45.282530+0000 mon.a (mon.0) 3478 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:45.285639+0000 mon.a (mon.0) 3479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]': finished 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: audit 2026-03-09T16:08:45.285639+0000 mon.a (mon.0) 3479 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-132", "mode": "writeback"}]': finished 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: cluster 2026-03-09T16:08:45.299664+0000 mon.a (mon.0) 3480 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T16:08:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:45 vm01 bash[20728]: cluster 2026-03-09T16:08:45.299664+0000 mon.a (mon.0) 3480 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in 2026-03-09T16:08:47.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:08:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:08:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:47 vm09 bash[22983]: cluster 2026-03-09T16:08:46.313187+0000 mon.a (mon.0) 3481 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T16:08:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:47 vm09 bash[22983]: cluster 2026-03-09T16:08:46.313187+0000 mon.a (mon.0) 3481 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T16:08:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:47 vm09 bash[22983]: audit 2026-03-09T16:08:46.807131+0000 mgr.y (mgr.14520) 575 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:47 vm09 bash[22983]: audit 2026-03-09T16:08:46.807131+0000 mgr.y (mgr.14520) 575 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:47 vm09 bash[22983]: cluster 2026-03-09T16:08:46.850985+0000 mgr.y (mgr.14520) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:47 vm09 bash[22983]: cluster 2026-03-09T16:08:46.850985+0000 mgr.y (mgr.14520) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:47 vm01 bash[28152]: cluster 2026-03-09T16:08:46.313187+0000 mon.a (mon.0) 3481 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T16:08:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:47 vm01 bash[28152]: cluster 2026-03-09T16:08:46.313187+0000 mon.a (mon.0) 3481 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T16:08:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:47 vm01 bash[28152]: audit 2026-03-09T16:08:46.807131+0000 mgr.y (mgr.14520) 575 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:47 vm01 bash[28152]: audit 2026-03-09T16:08:46.807131+0000 mgr.y (mgr.14520) 575 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:47 vm01 bash[28152]: cluster 2026-03-09T16:08:46.850985+0000 mgr.y (mgr.14520) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:47.676 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:47 vm01 bash[28152]: cluster 2026-03-09T16:08:46.850985+0000 mgr.y (mgr.14520) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:47 vm01 bash[20728]: cluster 2026-03-09T16:08:46.313187+0000 mon.a (mon.0) 3481 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T16:08:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:47 vm01 bash[20728]: cluster 2026-03-09T16:08:46.313187+0000 mon.a (mon.0) 3481 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-09T16:08:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:47 vm01 bash[20728]: audit 2026-03-09T16:08:46.807131+0000 mgr.y (mgr.14520) 575 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:47 vm01 bash[20728]: audit 2026-03-09T16:08:46.807131+0000 mgr.y (mgr.14520) 575 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:47 vm01 bash[20728]: cluster 2026-03-09T16:08:46.850985+0000 mgr.y (mgr.14520) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:47.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:47 vm01 bash[20728]: cluster 2026-03-09T16:08:46.850985+0000 mgr.y (mgr.14520) 576 : cluster [DBG] pgmap v1006: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:08:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:48 vm09 bash[22983]: cluster 2026-03-09T16:08:47.339725+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T16:08:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:48 vm09 bash[22983]: cluster 2026-03-09T16:08:47.339725+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T16:08:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:48 vm09 bash[22983]: audit 2026-03-09T16:08:47.386771+0000 mon.c (mon.2) 596 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:48 vm09 bash[22983]: audit 2026-03-09T16:08:47.386771+0000 mon.c (mon.2) 596 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:48 vm09 bash[22983]: audit 2026-03-09T16:08:47.387233+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:48 vm09 bash[22983]: audit 2026-03-09T16:08:47.387233+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:48 vm01 bash[28152]: cluster 2026-03-09T16:08:47.339725+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:48 vm01 bash[28152]: cluster 2026-03-09T16:08:47.339725+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:48 vm01 bash[28152]: audit 2026-03-09T16:08:47.386771+0000 mon.c (mon.2) 596 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:48 vm01 bash[28152]: audit 2026-03-09T16:08:47.386771+0000 mon.c (mon.2) 596 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:48 vm01 bash[28152]: audit 2026-03-09T16:08:47.387233+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:48 vm01 bash[28152]: audit 2026-03-09T16:08:47.387233+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:48 vm01 bash[20728]: cluster 2026-03-09T16:08:47.339725+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:48 vm01 bash[20728]: cluster 2026-03-09T16:08:47.339725+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:48 vm01 bash[20728]: audit 2026-03-09T16:08:47.386771+0000 mon.c (mon.2) 596 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:48 vm01 bash[20728]: audit 2026-03-09T16:08:47.386771+0000 mon.c (mon.2) 596 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:48 vm01 bash[20728]: audit 2026-03-09T16:08:47.387233+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:48.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:48 vm01 bash[20728]: audit 2026-03-09T16:08:47.387233+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:49 vm09 bash[22983]: audit 2026-03-09T16:08:48.338473+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:49 vm09 bash[22983]: audit 2026-03-09T16:08:48.338473+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:49 vm09 bash[22983]: cluster 2026-03-09T16:08:48.345739+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T16:08:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:49 vm09 bash[22983]: cluster 2026-03-09T16:08:48.345739+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T16:08:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:49 vm09 bash[22983]: audit 2026-03-09T16:08:48.363755+0000 mon.c (mon.2) 597 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:49 vm09 bash[22983]: audit 2026-03-09T16:08:48.363755+0000 mon.c (mon.2) 597 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:49 vm09 bash[22983]: audit 2026-03-09T16:08:48.364243+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:49 vm09 bash[22983]: audit 2026-03-09T16:08:48.364243+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:49 vm09 bash[22983]: cluster 2026-03-09T16:08:48.851568+0000 mgr.y (mgr.14520) 577 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:08:49.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:49 vm09 bash[22983]: cluster 2026-03-09T16:08:48.851568+0000 mgr.y (mgr.14520) 577 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:49 vm01 bash[28152]: audit 2026-03-09T16:08:48.338473+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:49 vm01 bash[28152]: audit 2026-03-09T16:08:48.338473+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:49 vm01 bash[28152]: cluster 2026-03-09T16:08:48.345739+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:49 vm01 bash[28152]: cluster 2026-03-09T16:08:48.345739+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:49 vm01 bash[28152]: audit 2026-03-09T16:08:48.363755+0000 mon.c (mon.2) 597 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:49 vm01 bash[28152]: audit 2026-03-09T16:08:48.363755+0000 mon.c (mon.2) 597 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:49 vm01 bash[28152]: audit 2026-03-09T16:08:48.364243+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:49 vm01 bash[28152]: audit 2026-03-09T16:08:48.364243+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:49 vm01 bash[28152]: cluster 2026-03-09T16:08:48.851568+0000 mgr.y (mgr.14520) 577 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:49 vm01 bash[28152]: cluster 2026-03-09T16:08:48.851568+0000 mgr.y (mgr.14520) 577 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:49 vm01 bash[20728]: audit 2026-03-09T16:08:48.338473+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:49 vm01 bash[20728]: audit 2026-03-09T16:08:48.338473+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:49 vm01 bash[20728]: cluster 2026-03-09T16:08:48.345739+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:49 vm01 bash[20728]: cluster 2026-03-09T16:08:48.345739+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:49 vm01 bash[20728]: audit 2026-03-09T16:08:48.363755+0000 mon.c (mon.2) 597 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:49 vm01 bash[20728]: audit 2026-03-09T16:08:48.363755+0000 mon.c (mon.2) 597 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:49 vm01 bash[20728]: audit 2026-03-09T16:08:48.364243+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:49 vm01 bash[20728]: audit 2026-03-09T16:08:48.364243+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:49 vm01 bash[20728]: cluster 2026-03-09T16:08:48.851568+0000 mgr.y (mgr.14520) 577 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:08:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:49 vm01 bash[20728]: cluster 2026-03-09T16:08:48.851568+0000 mgr.y (mgr.14520) 577 : cluster [DBG] pgmap v1009: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:08:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:50 vm09 bash[22983]: audit 2026-03-09T16:08:49.349612+0000 mon.a (mon.0) 3487 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:50 vm09 bash[22983]: audit 2026-03-09T16:08:49.349612+0000 mon.a (mon.0) 3487 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:50 vm09 bash[22983]: cluster 2026-03-09T16:08:49.370131+0000 mon.a (mon.0) 3488 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T16:08:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:50 vm09 bash[22983]: cluster 2026-03-09T16:08:49.370131+0000 mon.a (mon.0) 3488 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T16:08:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:50 vm01 bash[28152]: audit 2026-03-09T16:08:49.349612+0000 mon.a (mon.0) 3487 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:50 vm01 bash[28152]: audit 2026-03-09T16:08:49.349612+0000 mon.a (mon.0) 3487 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:50 vm01 bash[28152]: cluster 2026-03-09T16:08:49.370131+0000 mon.a (mon.0) 3488 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T16:08:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:50 vm01 bash[28152]: cluster 2026-03-09T16:08:49.370131+0000 mon.a (mon.0) 3488 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T16:08:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:50 vm01 bash[20728]: audit 2026-03-09T16:08:49.349612+0000 mon.a (mon.0) 3487 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:50 vm01 bash[20728]: audit 2026-03-09T16:08:49.349612+0000 mon.a (mon.0) 3487 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:50 vm01 bash[20728]: cluster 2026-03-09T16:08:49.370131+0000 mon.a (mon.0) 3488 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T16:08:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:50 vm01 bash[20728]: cluster 2026-03-09T16:08:49.370131+0000 mon.a (mon.0) 3488 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:51 vm01 bash[28152]: cluster 2026-03-09T16:08:50.357336+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:51 vm01 bash[28152]: cluster 2026-03-09T16:08:50.357336+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:51 vm01 bash[28152]: audit 2026-03-09T16:08:50.401099+0000 mon.c (mon.2) 598 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:51 vm01 bash[28152]: audit 2026-03-09T16:08:50.401099+0000 mon.c (mon.2) 598 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:51 vm01 bash[28152]: audit 2026-03-09T16:08:50.401460+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:51 vm01 bash[28152]: audit 2026-03-09T16:08:50.401460+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:51 vm01 bash[28152]: cluster 2026-03-09T16:08:50.851965+0000 mgr.y (mgr.14520) 578 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:51 vm01 bash[28152]: cluster 2026-03-09T16:08:50.851965+0000 mgr.y (mgr.14520) 578 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:51 vm01 bash[20728]: cluster 2026-03-09T16:08:50.357336+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:51 vm01 bash[20728]: cluster 2026-03-09T16:08:50.357336+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:51 vm01 bash[20728]: audit 2026-03-09T16:08:50.401099+0000 mon.c (mon.2) 598 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:51 vm01 bash[20728]: audit 2026-03-09T16:08:50.401099+0000 mon.c (mon.2) 598 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:51 vm01 bash[20728]: audit 2026-03-09T16:08:50.401460+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:51 vm01 bash[20728]: audit 2026-03-09T16:08:50.401460+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:51 vm01 bash[20728]: cluster 2026-03-09T16:08:50.851965+0000 mgr.y (mgr.14520) 578 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T16:08:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:51 vm01 bash[20728]: cluster 2026-03-09T16:08:50.851965+0000 mgr.y (mgr.14520) 578 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T16:08:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:51 vm09 bash[22983]: cluster 2026-03-09T16:08:50.357336+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T16:08:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:51 vm09 bash[22983]: cluster 2026-03-09T16:08:50.357336+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-09T16:08:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:51 vm09 bash[22983]: audit 2026-03-09T16:08:50.401099+0000 mon.c (mon.2) 598 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:51 vm09 bash[22983]: audit 2026-03-09T16:08:50.401099+0000 mon.c (mon.2) 598 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:51 vm09 bash[22983]: audit 2026-03-09T16:08:50.401460+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:51 vm09 bash[22983]: audit 2026-03-09T16:08:50.401460+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:51 vm09 bash[22983]: cluster 2026-03-09T16:08:50.851965+0000 mgr.y (mgr.14520) 578 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T16:08:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:51 vm09 bash[22983]: cluster 2026-03-09T16:08:50.851965+0000 mgr.y (mgr.14520) 578 : cluster [DBG] pgmap v1012: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:52 vm01 bash[28152]: audit 2026-03-09T16:08:51.378716+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:52 vm01 bash[28152]: audit 2026-03-09T16:08:51.378716+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:52 vm01 bash[28152]: cluster 2026-03-09T16:08:51.381948+0000 mon.a (mon.0) 3492 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:52 vm01 bash[28152]: cluster 2026-03-09T16:08:51.381948+0000 mon.a (mon.0) 3492 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:52 vm01 bash[28152]: audit 2026-03-09T16:08:51.385764+0000 mon.c (mon.2) 599 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:52 vm01 bash[28152]: audit 2026-03-09T16:08:51.385764+0000 mon.c (mon.2) 599 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:52 vm01 bash[28152]: audit 2026-03-09T16:08:51.406938+0000 mon.a (mon.0) 3493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:52 vm01 bash[28152]: audit 2026-03-09T16:08:51.406938+0000 mon.a (mon.0) 3493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:52 vm01 bash[20728]: audit 2026-03-09T16:08:51.378716+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:52 vm01 bash[20728]: audit 2026-03-09T16:08:51.378716+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:52 vm01 bash[20728]: cluster 2026-03-09T16:08:51.381948+0000 mon.a (mon.0) 3492 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:52 vm01 bash[20728]: cluster 2026-03-09T16:08:51.381948+0000 mon.a (mon.0) 3492 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:52 vm01 bash[20728]: audit 2026-03-09T16:08:51.385764+0000 mon.c (mon.2) 599 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:52 vm01 bash[20728]: audit 2026-03-09T16:08:51.385764+0000 mon.c (mon.2) 599 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:52 vm01 bash[20728]: audit 2026-03-09T16:08:51.406938+0000 mon.a (mon.0) 3493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:52.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:52 vm01 bash[20728]: audit 2026-03-09T16:08:51.406938+0000 mon.a (mon.0) 3493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:52.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:52 vm09 bash[22983]: audit 2026-03-09T16:08:51.378716+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:52.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:52 vm09 bash[22983]: audit 2026-03-09T16:08:51.378716+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:52.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:52 vm09 bash[22983]: cluster 2026-03-09T16:08:51.381948+0000 mon.a (mon.0) 3492 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T16:08:52.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:52 vm09 bash[22983]: cluster 2026-03-09T16:08:51.381948+0000 mon.a (mon.0) 3492 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in 2026-03-09T16:08:52.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:52 vm09 bash[22983]: audit 2026-03-09T16:08:51.385764+0000 mon.c (mon.2) 599 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:52 vm09 bash[22983]: audit 2026-03-09T16:08:51.385764+0000 mon.c (mon.2) 599 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:52 vm09 bash[22983]: audit 2026-03-09T16:08:51.406938+0000 mon.a (mon.0) 3493 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:52.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:52 vm09 bash[22983]: audit 2026-03-09T16:08:51.406938+0000 mon.a (mon.0) 3493 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]: dispatch 2026-03-09T16:08:53.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:08:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:08:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:08:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:53 vm09 bash[22983]: cluster 2026-03-09T16:08:52.386555+0000 mon.a (mon.0) 3494 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:53 vm09 bash[22983]: cluster 2026-03-09T16:08:52.386555+0000 mon.a (mon.0) 3494 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:53 vm09 bash[22983]: audit 2026-03-09T16:08:52.398107+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:53 vm09 bash[22983]: audit 2026-03-09T16:08:52.398107+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:53 vm09 bash[22983]: cluster 2026-03-09T16:08:52.405495+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T16:08:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:53 vm09 bash[22983]: cluster 2026-03-09T16:08:52.405495+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T16:08:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:53 vm09 bash[22983]: cluster 2026-03-09T16:08:52.852505+0000 mgr.y (mgr.14520) 579 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:08:53.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:53 vm09 bash[22983]: cluster 2026-03-09T16:08:52.852505+0000 mgr.y (mgr.14520) 579 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:53 vm01 bash[28152]: cluster 2026-03-09T16:08:52.386555+0000 mon.a (mon.0) 3494 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:53 vm01 bash[28152]: cluster 2026-03-09T16:08:52.386555+0000 mon.a (mon.0) 3494 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:53 vm01 bash[28152]: audit 2026-03-09T16:08:52.398107+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:53 vm01 bash[28152]: audit 2026-03-09T16:08:52.398107+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:53 vm01 bash[28152]: cluster 2026-03-09T16:08:52.405495+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:53 vm01 bash[28152]: cluster 2026-03-09T16:08:52.405495+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:53 vm01 bash[28152]: cluster 2026-03-09T16:08:52.852505+0000 mgr.y (mgr.14520) 579 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:53 vm01 bash[28152]: cluster 2026-03-09T16:08:52.852505+0000 mgr.y (mgr.14520) 579 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:53 vm01 bash[20728]: cluster 2026-03-09T16:08:52.386555+0000 mon.a (mon.0) 3494 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:53 vm01 bash[20728]: cluster 2026-03-09T16:08:52.386555+0000 mon.a (mon.0) 3494 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:53 vm01 bash[20728]: audit 2026-03-09T16:08:52.398107+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:53 vm01 bash[20728]: audit 2026-03-09T16:08:52.398107+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-132"}]': finished 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:53 vm01 bash[20728]: cluster 2026-03-09T16:08:52.405495+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T16:08:53.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:53 vm01 bash[20728]: cluster 2026-03-09T16:08:52.405495+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in 2026-03-09T16:08:53.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:53 vm01 bash[20728]: cluster 2026-03-09T16:08:52.852505+0000 mgr.y (mgr.14520) 579 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:08:53.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:53 vm01 bash[20728]: cluster 2026-03-09T16:08:52.852505+0000 mgr.y (mgr.14520) 579 : cluster [DBG] pgmap v1015: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:08:54.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:54 vm09 bash[22983]: cluster 2026-03-09T16:08:53.416807+0000 mon.a (mon.0) 3497 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:54.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:54 vm09 bash[22983]: cluster 2026-03-09T16:08:53.416807+0000 mon.a (mon.0) 3497 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:54.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:54 vm09 bash[22983]: cluster 2026-03-09T16:08:53.429324+0000 mon.a (mon.0) 3498 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T16:08:54.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:54 vm09 bash[22983]: cluster 2026-03-09T16:08:53.429324+0000 mon.a (mon.0) 3498 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T16:08:54.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:54 vm09 bash[22983]: cluster 2026-03-09T16:08:54.429116+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T16:08:54.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:54 vm09 bash[22983]: cluster 2026-03-09T16:08:54.429116+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:54 vm01 bash[28152]: cluster 2026-03-09T16:08:53.416807+0000 mon.a (mon.0) 3497 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:54 vm01 bash[28152]: cluster 2026-03-09T16:08:53.416807+0000 mon.a (mon.0) 3497 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:54 vm01 bash[28152]: cluster 2026-03-09T16:08:53.429324+0000 mon.a (mon.0) 3498 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:54 vm01 bash[28152]: cluster 2026-03-09T16:08:53.429324+0000 mon.a (mon.0) 3498 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:54 vm01 bash[28152]: cluster 
2026-03-09T16:08:54.429116+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:54 vm01 bash[28152]: cluster 2026-03-09T16:08:54.429116+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:54 vm01 bash[20728]: cluster 2026-03-09T16:08:53.416807+0000 mon.a (mon.0) 3497 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:54 vm01 bash[20728]: cluster 2026-03-09T16:08:53.416807+0000 mon.a (mon.0) 3497 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:54 vm01 bash[20728]: cluster 2026-03-09T16:08:53.429324+0000 mon.a (mon.0) 3498 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:54 vm01 bash[20728]: cluster 2026-03-09T16:08:53.429324+0000 mon.a (mon.0) 3498 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:54 vm01 bash[20728]: cluster 2026-03-09T16:08:54.429116+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T16:08:54.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:54 vm01 bash[20728]: cluster 2026-03-09T16:08:54.429116+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in 2026-03-09T16:08:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: audit 2026-03-09T16:08:54.442853+0000 mon.c (mon.2) 600 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: audit 2026-03-09T16:08:54.442853+0000 mon.c (mon.2) 600 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: audit 2026-03-09T16:08:54.445349+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: audit 2026-03-09T16:08:54.445349+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: cluster 2026-03-09T16:08:54.852854+0000 mgr.y (mgr.14520) 580 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: cluster 2026-03-09T16:08:54.852854+0000 mgr.y (mgr.14520) 580 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: audit 2026-03-09T16:08:55.427735+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: audit 2026-03-09T16:08:55.427735+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: audit 2026-03-09T16:08:55.442089+0000 mon.c (mon.2) 601 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: audit 2026-03-09T16:08:55.442089+0000 mon.c (mon.2) 601 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: cluster 2026-03-09T16:08:55.453277+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T16:08:55.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:55 vm09 bash[22983]: cluster 2026-03-09T16:08:55.453277+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T16:08:55.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: audit 2026-03-09T16:08:54.442853+0000 mon.c (mon.2) 600 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: audit 2026-03-09T16:08:54.442853+0000 mon.c (mon.2) 600 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: audit 2026-03-09T16:08:54.445349+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: audit 2026-03-09T16:08:54.445349+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: cluster 2026-03-09T16:08:54.852854+0000 mgr.y (mgr.14520) 580 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:55.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: cluster 2026-03-09T16:08:54.852854+0000 mgr.y (mgr.14520) 580 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:55.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: audit 2026-03-09T16:08:55.427735+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:55.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: audit 2026-03-09T16:08:55.427735+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:55.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: audit 2026-03-09T16:08:55.442089+0000 mon.c (mon.2) 601 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:55.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: audit 2026-03-09T16:08:55.442089+0000 mon.c (mon.2) 601 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: cluster 2026-03-09T16:08:55.453277+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:55 vm01 bash[28152]: cluster 2026-03-09T16:08:55.453277+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: audit 2026-03-09T16:08:54.442853+0000 mon.c (mon.2) 600 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: audit 2026-03-09T16:08:54.442853+0000 mon.c (mon.2) 600 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: audit 2026-03-09T16:08:54.445349+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: audit 2026-03-09T16:08:54.445349+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: cluster 2026-03-09T16:08:54.852854+0000 mgr.y (mgr.14520) 580 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: cluster 2026-03-09T16:08:54.852854+0000 mgr.y (mgr.14520) 580 : cluster [DBG] pgmap v1018: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: audit 2026-03-09T16:08:55.427735+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: audit 2026-03-09T16:08:55.427735+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: audit 2026-03-09T16:08:55.442089+0000 mon.c (mon.2) 601 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: audit 2026-03-09T16:08:55.442089+0000 mon.c (mon.2) 601 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: cluster 2026-03-09T16:08:55.453277+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T16:08:55.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:55 vm01 bash[20728]: cluster 2026-03-09T16:08:55.453277+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-09T16:08:56.814 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:56 vm09 bash[22983]: audit 2026-03-09T16:08:55.454162+0000 mon.a (mon.0) 3503 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:56.814 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:56 vm09 bash[22983]: audit 2026-03-09T16:08:55.454162+0000 mon.a (mon.0) 3503 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:56.814 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:56 vm09 bash[22983]: audit 2026-03-09T16:08:56.430636+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:56.814 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:56 vm09 bash[22983]: audit 2026-03-09T16:08:56.430636+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:56.814 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:56 vm09 bash[22983]: cluster 2026-03-09T16:08:56.434472+0000 mon.a (mon.0) 3505 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T16:08:56.814 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:56 vm09 bash[22983]: cluster 2026-03-09T16:08:56.434472+0000 mon.a (mon.0) 3505 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T16:08:56.814 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:56 vm09 bash[22983]: audit 2026-03-09T16:08:56.442993+0000 mon.c (mon.2) 602 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:56.814 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:56 vm09 bash[22983]: audit 2026-03-09T16:08:56.442993+0000 mon.c (mon.2) 602 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:56.814 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:56 vm09 bash[22983]: audit 2026-03-09T16:08:56.443500+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:56.814 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:56 vm09 bash[22983]: audit 2026-03-09T16:08:56.443500+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:56 vm01 bash[28152]: audit 2026-03-09T16:08:55.454162+0000 mon.a (mon.0) 3503 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:56 vm01 bash[28152]: audit 2026-03-09T16:08:55.454162+0000 mon.a (mon.0) 3503 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:56 vm01 bash[28152]: audit 2026-03-09T16:08:56.430636+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:56 vm01 bash[28152]: audit 2026-03-09T16:08:56.430636+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:56 vm01 bash[28152]: cluster 2026-03-09T16:08:56.434472+0000 mon.a (mon.0) 3505 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:56 vm01 bash[28152]: cluster 2026-03-09T16:08:56.434472+0000 mon.a (mon.0) 3505 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:56 vm01 bash[28152]: audit 2026-03-09T16:08:56.442993+0000 mon.c (mon.2) 602 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:56 vm01 bash[28152]: audit 2026-03-09T16:08:56.442993+0000 mon.c (mon.2) 602 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:56 vm01 bash[28152]: audit 2026-03-09T16:08:56.443500+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:56 vm01 bash[28152]: audit 2026-03-09T16:08:56.443500+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:56 vm01 bash[20728]: audit 2026-03-09T16:08:55.454162+0000 mon.a (mon.0) 3503 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:56 vm01 bash[20728]: audit 2026-03-09T16:08:55.454162+0000 mon.a (mon.0) 3503 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:56 vm01 bash[20728]: audit 2026-03-09T16:08:56.430636+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:56 vm01 bash[20728]: audit 2026-03-09T16:08:56.430636+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:56 vm01 bash[20728]: cluster 2026-03-09T16:08:56.434472+0000 mon.a (mon.0) 3505 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:56 vm01 bash[20728]: cluster 2026-03-09T16:08:56.434472+0000 mon.a (mon.0) 3505 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:56 vm01 bash[20728]: audit 2026-03-09T16:08:56.442993+0000 mon.c (mon.2) 602 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:56 vm01 bash[20728]: audit 2026-03-09T16:08:56.442993+0000 mon.c (mon.2) 602 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:56 vm01 bash[20728]: audit 2026-03-09T16:08:56.443500+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:56.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:56 vm01 bash[20728]: audit 2026-03-09T16:08:56.443500+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:57.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:08:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:08:57.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: audit 2026-03-09T16:08:56.817883+0000 mgr.y (mgr.14520) 581 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:57.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: audit 2026-03-09T16:08:56.817883+0000 mgr.y (mgr.14520) 581 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:57.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: cluster 2026-03-09T16:08:56.853167+0000 mgr.y (mgr.14520) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: cluster 2026-03-09T16:08:56.853167+0000 mgr.y (mgr.14520) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: audit 2026-03-09T16:08:57.454876+0000 mon.a (mon.0) 3507 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:08:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: audit 2026-03-09T16:08:57.454876+0000 mon.a (mon.0) 3507 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:08:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: cluster 2026-03-09T16:08:57.465031+0000 mon.a (mon.0) 3508 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T16:08:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: cluster 2026-03-09T16:08:57.465031+0000 mon.a (mon.0) 3508 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T16:08:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: audit 2026-03-09T16:08:57.471302+0000 mon.c (mon.2) 603 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: audit 2026-03-09T16:08:57.471302+0000 mon.c (mon.2) 603 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: audit 2026-03-09T16:08:57.471856+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:57.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:57 vm09 bash[22983]: audit 2026-03-09T16:08:57.471856+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: audit 2026-03-09T16:08:56.817883+0000 mgr.y (mgr.14520) 581 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: audit 2026-03-09T16:08:56.817883+0000 mgr.y (mgr.14520) 581 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: cluster 2026-03-09T16:08:56.853167+0000 mgr.y (mgr.14520) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: cluster 2026-03-09T16:08:56.853167+0000 mgr.y (mgr.14520) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: audit 2026-03-09T16:08:57.454876+0000 mon.a (mon.0) 3507 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:08:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: audit 2026-03-09T16:08:57.454876+0000 mon.a (mon.0) 3507 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:08:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: cluster 2026-03-09T16:08:57.465031+0000 mon.a (mon.0) 3508 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T16:08:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: cluster 2026-03-09T16:08:57.465031+0000 mon.a (mon.0) 3508 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T16:08:57.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: audit 2026-03-09T16:08:57.471302+0000 mon.c (mon.2) 603 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: audit 2026-03-09T16:08:57.471302+0000 mon.c (mon.2) 603 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: audit 2026-03-09T16:08:57.471856+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:57 vm01 bash[28152]: audit 2026-03-09T16:08:57.471856+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: audit 2026-03-09T16:08:56.817883+0000 mgr.y (mgr.14520) 581 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: audit 2026-03-09T16:08:56.817883+0000 mgr.y (mgr.14520) 581 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: cluster 2026-03-09T16:08:56.853167+0000 mgr.y (mgr.14520) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: cluster 2026-03-09T16:08:56.853167+0000 mgr.y (mgr.14520) 582 : cluster [DBG] pgmap v1021: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: audit 2026-03-09T16:08:57.454876+0000 mon.a (mon.0) 3507 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: audit 2026-03-09T16:08:57.454876+0000 mon.a (mon.0) 3507 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: cluster 2026-03-09T16:08:57.465031+0000 mon.a (mon.0) 3508 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: cluster 2026-03-09T16:08:57.465031+0000 mon.a (mon.0) 3508 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: audit 2026-03-09T16:08:57.471302+0000 mon.c (mon.2) 603 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: audit 2026-03-09T16:08:57.471302+0000 mon.c (mon.2) 603 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: audit 2026-03-09T16:08:57.471856+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:57.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:57 vm01 bash[20728]: audit 2026-03-09T16:08:57.471856+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]: dispatch 2026-03-09T16:08:58.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:58 vm09 bash[22983]: cluster 2026-03-09T16:08:58.455038+0000 mon.a (mon.0) 3510 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:58.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:58 vm09 bash[22983]: cluster 2026-03-09T16:08:58.455038+0000 mon.a (mon.0) 3510 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:58.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:58 vm09 bash[22983]: audit 2026-03-09T16:08:58.458773+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]': finished 2026-03-09T16:08:58.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:58 vm09 bash[22983]: audit 2026-03-09T16:08:58.458773+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]': finished 2026-03-09T16:08:58.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:58 vm09 bash[22983]: cluster 2026-03-09T16:08:58.468270+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T16:08:58.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:58 vm09 bash[22983]: cluster 2026-03-09T16:08:58.468270+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:58 vm01 bash[20728]: cluster 2026-03-09T16:08:58.455038+0000 mon.a (mon.0) 3510 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:58 vm01 bash[20728]: cluster 2026-03-09T16:08:58.455038+0000 mon.a (mon.0) 3510 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:58 vm01 bash[20728]: audit 2026-03-09T16:08:58.458773+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]': finished 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:58 vm01 bash[20728]: audit 2026-03-09T16:08:58.458773+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]': finished 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:58 vm01 bash[20728]: cluster 2026-03-09T16:08:58.468270+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:58 vm01 bash[20728]: cluster 2026-03-09T16:08:58.468270+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:58 vm01 bash[28152]: cluster 2026-03-09T16:08:58.455038+0000 mon.a (mon.0) 3510 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:58 vm01 bash[28152]: cluster 2026-03-09T16:08:58.455038+0000 mon.a (mon.0) 3510 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:58 vm01 bash[28152]: audit 2026-03-09T16:08:58.458773+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]': finished 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:58 vm01 bash[28152]: audit 2026-03-09T16:08:58.458773+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-134", "mode": "writeback"}]': finished 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:58 vm01 bash[28152]: cluster 2026-03-09T16:08:58.468270+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T16:08:58.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:58 vm01 bash[28152]: cluster 2026-03-09T16:08:58.468270+0000 mon.a (mon.0) 3512 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:58.521504+0000 mon.c (mon.2) 604 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:58.521504+0000 mon.c (mon.2) 604 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:58.521861+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:58.521861+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: cluster 2026-03-09T16:08:58.853829+0000 mgr.y (mgr.14520) 583 : cluster [DBG] pgmap v1024: 268 pgs: 20 unknown, 248 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: cluster 2026-03-09T16:08:58.853829+0000 mgr.y (mgr.14520) 583 : cluster [DBG] pgmap v1024: 268 pgs: 20 unknown, 248 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: cluster 2026-03-09T16:08:59.013903+0000 mon.a (mon.0) 3514 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: cluster 2026-03-09T16:08:59.013903+0000 mon.a (mon.0) 3514 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:59.056885+0000 mon.a (mon.0) 3515 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:59.056885+0000 mon.a (mon.0) 3515 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: cluster 2026-03-09T16:08:59.066615+0000 mon.a (mon.0) 3516 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: cluster 2026-03-09T16:08:59.066615+0000 mon.a (mon.0) 3516 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:59.070256+0000 mon.c (mon.2) 605 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:59.070256+0000 mon.c (mon.2) 605 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:59.070573+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:59.070573+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:59.477588+0000 mon.a (mon.0) 3518 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:59.477588+0000 mon.a (mon.0) 3518 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:59.481491+0000 mon.a (mon.0) 3519 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:08:59 vm09 bash[22983]: audit 2026-03-09T16:08:59.481491+0000 mon.a (mon.0) 3519 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:59.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:58.521504+0000 mon.c (mon.2) 604 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:58.521504+0000 mon.c (mon.2) 604 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:58.521861+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:58.521861+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: cluster 2026-03-09T16:08:58.853829+0000 mgr.y (mgr.14520) 583 : cluster [DBG] pgmap v1024: 268 pgs: 20 unknown, 248 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: cluster 2026-03-09T16:08:58.853829+0000 mgr.y (mgr.14520) 583 : cluster [DBG] pgmap v1024: 268 pgs: 20 unknown, 248 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: cluster 2026-03-09T16:08:59.013903+0000 mon.a (mon.0) 3514 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: cluster 2026-03-09T16:08:59.013903+0000 mon.a (mon.0) 3514 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:59.056885+0000 mon.a (mon.0) 3515 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:59.056885+0000 mon.a (mon.0) 3515 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: cluster 2026-03-09T16:08:59.066615+0000 mon.a (mon.0) 3516 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: cluster 2026-03-09T16:08:59.066615+0000 mon.a (mon.0) 3516 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:59.070256+0000 mon.c (mon.2) 605 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:59.070256+0000 mon.c (mon.2) 605 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:59.070573+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:59.070573+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:59.477588+0000 mon.a (mon.0) 3518 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:59.477588+0000 mon.a (mon.0) 3518 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:59.481491+0000 mon.a (mon.0) 3519 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:08:59 vm01 bash[28152]: audit 2026-03-09T16:08:59.481491+0000 mon.a (mon.0) 3519 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:58.521504+0000 mon.c (mon.2) 604 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:58.521504+0000 mon.c (mon.2) 604 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:58.521861+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:58.521861+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: cluster 2026-03-09T16:08:58.853829+0000 mgr.y (mgr.14520) 583 : cluster [DBG] pgmap v1024: 268 pgs: 20 unknown, 248 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: cluster 2026-03-09T16:08:58.853829+0000 mgr.y (mgr.14520) 583 : cluster [DBG] pgmap v1024: 268 pgs: 20 unknown, 248 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: cluster 2026-03-09T16:08:59.013903+0000 mon.a (mon.0) 3514 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: cluster 2026-03-09T16:08:59.013903+0000 mon.a (mon.0) 3514 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:59.056885+0000 mon.a (mon.0) 3515 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:59.056885+0000 mon.a (mon.0) 3515 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: cluster 2026-03-09T16:08:59.066615+0000 mon.a (mon.0) 3516 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: cluster 2026-03-09T16:08:59.066615+0000 mon.a (mon.0) 3516 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:59.070256+0000 mon.c (mon.2) 605 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:59.070256+0000 mon.c (mon.2) 605 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:59.070573+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:59.070573+0000 mon.a (mon.0) 3517 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:59.477588+0000 mon.a (mon.0) 3518 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:59.477588+0000 mon.a (mon.0) 3518 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:59.481491+0000 mon.a (mon.0) 3519 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:08:59.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:08:59 vm01 bash[20728]: audit 2026-03-09T16:08:59.481491+0000 mon.a (mon.0) 3519 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:00.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:00 vm09 bash[22983]: cluster 2026-03-09T16:09:00.057051+0000 mon.a (mon.0) 3520 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:00.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:00 vm09 bash[22983]: cluster 2026-03-09T16:09:00.057051+0000 mon.a (mon.0) 3520 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:00 vm09 bash[22983]: audit 2026-03-09T16:09:00.059937+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:09:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:00 vm09 bash[22983]: audit 2026-03-09T16:09:00.059937+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:09:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:00 vm09 bash[22983]: cluster 2026-03-09T16:09:00.063116+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T16:09:00.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:00 vm09 bash[22983]: cluster 2026-03-09T16:09:00.063116+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T16:09:00.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:00 vm01 bash[28152]: cluster 2026-03-09T16:09:00.057051+0000 mon.a (mon.0) 3520 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:00.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:00 vm01 bash[28152]: cluster 2026-03-09T16:09:00.057051+0000 mon.a (mon.0) 3520 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:00.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:00 vm01 bash[28152]: audit 2026-03-09T16:09:00.059937+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:09:00.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:00 vm01 bash[28152]: audit 2026-03-09T16:09:00.059937+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:09:00.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:00 vm01 bash[28152]: cluster 2026-03-09T16:09:00.063116+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T16:09:00.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:00 vm01 bash[28152]: cluster 2026-03-09T16:09:00.063116+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T16:09:00.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:00 vm01 bash[20728]: cluster 2026-03-09T16:09:00.057051+0000 mon.a (mon.0) 3520 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:00.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:00 vm01 bash[20728]: cluster 2026-03-09T16:09:00.057051+0000 mon.a (mon.0) 3520 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:00.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:00 vm01 bash[20728]: audit 2026-03-09T16:09:00.059937+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:09:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:00 vm01 bash[20728]: audit 2026-03-09T16:09:00.059937+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-134"}]': finished 2026-03-09T16:09:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:00 vm01 bash[20728]: cluster 2026-03-09T16:09:00.063116+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T16:09:00.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:00 vm01 bash[20728]: cluster 2026-03-09T16:09:00.063116+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-09T16:09:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:02 vm09 bash[22983]: cluster 2026-03-09T16:09:00.854156+0000 mgr.y (mgr.14520) 584 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T16:09:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:02 vm09 bash[22983]: cluster 2026-03-09T16:09:00.854156+0000 mgr.y (mgr.14520) 584 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T16:09:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:02 vm09 bash[22983]: cluster 2026-03-09T16:09:01.073460+0000 mon.a (mon.0) 3523 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T16:09:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:02 vm09 bash[22983]: cluster 2026-03-09T16:09:01.073460+0000 mon.a (mon.0) 3523 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T16:09:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:02 vm01 bash[28152]: cluster 2026-03-09T16:09:00.854156+0000 mgr.y (mgr.14520) 584 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T16:09:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:02 vm01 bash[28152]: cluster 2026-03-09T16:09:00.854156+0000 mgr.y (mgr.14520) 584 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T16:09:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:02 vm01 bash[28152]: cluster 2026-03-09T16:09:01.073460+0000 mon.a (mon.0) 3523 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T16:09:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:02 vm01 bash[28152]: cluster 2026-03-09T16:09:01.073460+0000 mon.a (mon.0) 3523 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T16:09:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:02 vm01 bash[20728]: cluster 2026-03-09T16:09:00.854156+0000 mgr.y (mgr.14520) 584 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T16:09:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:02 vm01 bash[20728]: cluster 2026-03-09T16:09:00.854156+0000 mgr.y (mgr.14520) 584 : cluster [DBG] pgmap v1027: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-09T16:09:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:02 vm01 bash[20728]: cluster 2026-03-09T16:09:01.073460+0000 mon.a (mon.0) 3523 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T16:09:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:02 vm01 bash[20728]: cluster 2026-03-09T16:09:01.073460+0000 mon.a 
(mon.0) 3523 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:03 vm01 bash[28152]: cluster 2026-03-09T16:09:02.097784+0000 mon.a (mon.0) 3524 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:03 vm01 bash[28152]: cluster 2026-03-09T16:09:02.097784+0000 mon.a (mon.0) 3524 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:03 vm01 bash[28152]: audit 2026-03-09T16:09:02.104009+0000 mon.c (mon.2) 606 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:03 vm01 bash[28152]: audit 2026-03-09T16:09:02.104009+0000 mon.c (mon.2) 606 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:03 vm01 bash[28152]: audit 2026-03-09T16:09:02.104270+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:03 vm01 bash[28152]: audit 2026-03-09T16:09:02.104270+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:09:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:09:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:03 vm01 bash[20728]: cluster 2026-03-09T16:09:02.097784+0000 mon.a (mon.0) 3524 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:03 vm01 bash[20728]: cluster 2026-03-09T16:09:02.097784+0000 mon.a (mon.0) 3524 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:03 vm01 bash[20728]: audit 2026-03-09T16:09:02.104009+0000 mon.c (mon.2) 606 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:03 vm01 bash[20728]: audit 2026-03-09T16:09:02.104009+0000 mon.c (mon.2) 606 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:03 vm01 bash[20728]: audit 2026-03-09T16:09:02.104270+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:03.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:03 vm01 bash[20728]: audit 2026-03-09T16:09:02.104270+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:03.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:03 vm09 bash[22983]: cluster 2026-03-09T16:09:02.097784+0000 mon.a (mon.0) 3524 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T16:09:03.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:03 vm09 bash[22983]: cluster 2026-03-09T16:09:02.097784+0000 mon.a (mon.0) 3524 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-09T16:09:03.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:03 vm09 bash[22983]: audit 2026-03-09T16:09:02.104009+0000 mon.c (mon.2) 606 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:03.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:03 vm09 bash[22983]: audit 2026-03-09T16:09:02.104009+0000 mon.c (mon.2) 606 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:03.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:03 vm09 bash[22983]: audit 2026-03-09T16:09:02.104270+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:03.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:03 vm09 bash[22983]: audit 2026-03-09T16:09:02.104270+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: cluster 2026-03-09T16:09:02.854456+0000 mgr.y (mgr.14520) 585 : cluster [DBG] pgmap v1030: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: cluster 2026-03-09T16:09:02.854456+0000 mgr.y (mgr.14520) 585 : cluster [DBG] pgmap v1030: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: audit 2026-03-09T16:09:03.122560+0000 mon.a (mon.0) 3526 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: audit 2026-03-09T16:09:03.122560+0000 mon.a (mon.0) 3526 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: cluster 2026-03-09T16:09:03.137734+0000 mon.a (mon.0) 3527 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T16:09:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: cluster 2026-03-09T16:09:03.137734+0000 mon.a (mon.0) 3527 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T16:09:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: audit 2026-03-09T16:09:03.158698+0000 mon.c (mon.2) 607 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: audit 2026-03-09T16:09:03.158698+0000 mon.c (mon.2) 607 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: audit 2026-03-09T16:09:03.158988+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: audit 2026-03-09T16:09:03.158988+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: cluster 2026-03-09T16:09:04.054924+0000 mon.a (mon.0) 3529 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:04.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:04 vm09 bash[22983]: cluster 2026-03-09T16:09:04.054924+0000 mon.a (mon.0) 3529 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: cluster 2026-03-09T16:09:02.854456+0000 mgr.y (mgr.14520) 585 : cluster [DBG] pgmap v1030: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: cluster 2026-03-09T16:09:02.854456+0000 mgr.y (mgr.14520) 585 : cluster [DBG] pgmap v1030: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: audit 2026-03-09T16:09:03.122560+0000 mon.a (mon.0) 3526 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: audit 2026-03-09T16:09:03.122560+0000 mon.a (mon.0) 3526 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: cluster 2026-03-09T16:09:03.137734+0000 mon.a (mon.0) 3527 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: cluster 2026-03-09T16:09:03.137734+0000 mon.a (mon.0) 3527 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: audit 2026-03-09T16:09:03.158698+0000 mon.c (mon.2) 607 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: audit 2026-03-09T16:09:03.158698+0000 mon.c (mon.2) 607 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: audit 2026-03-09T16:09:03.158988+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: audit 2026-03-09T16:09:03.158988+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: cluster 2026-03-09T16:09:04.054924+0000 mon.a (mon.0) 3529 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:04 vm01 bash[28152]: cluster 2026-03-09T16:09:04.054924+0000 mon.a (mon.0) 3529 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: cluster 2026-03-09T16:09:02.854456+0000 mgr.y (mgr.14520) 585 : cluster [DBG] pgmap v1030: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: cluster 2026-03-09T16:09:02.854456+0000 mgr.y (mgr.14520) 585 : cluster [DBG] pgmap v1030: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: audit 2026-03-09T16:09:03.122560+0000 mon.a (mon.0) 3526 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: audit 2026-03-09T16:09:03.122560+0000 mon.a (mon.0) 3526 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: cluster 2026-03-09T16:09:03.137734+0000 mon.a (mon.0) 3527 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: cluster 2026-03-09T16:09:03.137734+0000 mon.a (mon.0) 3527 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: audit 2026-03-09T16:09:03.158698+0000 mon.c (mon.2) 607 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: audit 2026-03-09T16:09:03.158698+0000 mon.c (mon.2) 607 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: audit 2026-03-09T16:09:03.158988+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: audit 2026-03-09T16:09:03.158988+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: cluster 2026-03-09T16:09:04.054924+0000 mon.a (mon.0) 3529 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:04.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:04 vm01 bash[20728]: cluster 2026-03-09T16:09:04.054924+0000 mon.a (mon.0) 3529 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:05 vm01 bash[28152]: audit 2026-03-09T16:09:04.133405+0000 mon.a (mon.0) 3530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:05 vm01 bash[28152]: audit 2026-03-09T16:09:04.133405+0000 mon.a (mon.0) 3530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:05 vm01 bash[28152]: cluster 2026-03-09T16:09:04.136254+0000 mon.a (mon.0) 3531 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:05 vm01 bash[28152]: cluster 2026-03-09T16:09:04.136254+0000 mon.a (mon.0) 3531 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:05 vm01 bash[28152]: audit 2026-03-09T16:09:04.139274+0000 mon.c (mon.2) 608 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:05 vm01 bash[28152]: audit 2026-03-09T16:09:04.139274+0000 mon.c (mon.2) 608 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:05 vm01 bash[28152]: audit 2026-03-09T16:09:04.139978+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:05 vm01 bash[28152]: audit 2026-03-09T16:09:04.139978+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:05 vm01 bash[20728]: audit 2026-03-09T16:09:04.133405+0000 mon.a (mon.0) 3530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:05 vm01 bash[20728]: audit 2026-03-09T16:09:04.133405+0000 mon.a (mon.0) 3530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:05 vm01 bash[20728]: cluster 2026-03-09T16:09:04.136254+0000 mon.a (mon.0) 3531 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:05 vm01 bash[20728]: cluster 2026-03-09T16:09:04.136254+0000 mon.a (mon.0) 3531 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:05 vm01 bash[20728]: audit 2026-03-09T16:09:04.139274+0000 mon.c (mon.2) 608 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:05 vm01 bash[20728]: audit 2026-03-09T16:09:04.139274+0000 mon.c (mon.2) 608 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:05 vm01 bash[20728]: audit 2026-03-09T16:09:04.139978+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:05.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:05 vm01 bash[20728]: audit 2026-03-09T16:09:04.139978+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:05 vm09 bash[22983]: audit 2026-03-09T16:09:04.133405+0000 mon.a (mon.0) 3530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:05 vm09 bash[22983]: audit 2026-03-09T16:09:04.133405+0000 mon.a (mon.0) 3530 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:05 vm09 bash[22983]: cluster 2026-03-09T16:09:04.136254+0000 mon.a (mon.0) 3531 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T16:09:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:05 vm09 bash[22983]: cluster 2026-03-09T16:09:04.136254+0000 mon.a (mon.0) 3531 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in 2026-03-09T16:09:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:05 vm09 bash[22983]: audit 2026-03-09T16:09:04.139274+0000 mon.c (mon.2) 608 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:05 vm09 bash[22983]: audit 2026-03-09T16:09:04.139274+0000 mon.c (mon.2) 608 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:05 vm09 bash[22983]: audit 2026-03-09T16:09:04.139978+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:05 vm09 bash[22983]: audit 2026-03-09T16:09:04.139978+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:06 vm01 bash[28152]: cluster 2026-03-09T16:09:04.855017+0000 mgr.y (mgr.14520) 586 : cluster [DBG] pgmap v1033: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:06 vm01 bash[28152]: cluster 2026-03-09T16:09:04.855017+0000 mgr.y (mgr.14520) 586 : cluster [DBG] pgmap v1033: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:06 vm01 bash[28152]: audit 2026-03-09T16:09:05.137938+0000 mon.a (mon.0) 3533 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:06 vm01 bash[28152]: audit 2026-03-09T16:09:05.137938+0000 mon.a (mon.0) 3533 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:06 vm01 bash[28152]: cluster 2026-03-09T16:09:05.141301+0000 mon.a (mon.0) 3534 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T16:09:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:06 vm01 bash[28152]: cluster 2026-03-09T16:09:05.141301+0000 mon.a (mon.0) 3534 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T16:09:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:06 vm01 bash[28152]: audit 2026-03-09T16:09:05.145332+0000 mon.c (mon.2) 609 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:06 vm01 bash[28152]: audit 2026-03-09T16:09:05.145332+0000 mon.c (mon.2) 609 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:06 vm01 bash[28152]: audit 2026-03-09T16:09:05.158658+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:06 vm01 bash[28152]: audit 2026-03-09T16:09:05.158658+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:06.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:06 vm01 bash[20728]: cluster 2026-03-09T16:09:04.855017+0000 mgr.y (mgr.14520) 586 : cluster [DBG] pgmap v1033: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:06.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:06 vm01 bash[20728]: cluster 2026-03-09T16:09:04.855017+0000 mgr.y (mgr.14520) 586 : cluster [DBG] pgmap v1033: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:06.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:06 vm01 bash[20728]: audit 2026-03-09T16:09:05.137938+0000 mon.a (mon.0) 3533 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:06.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:06 vm01 bash[20728]: audit 2026-03-09T16:09:05.137938+0000 mon.a (mon.0) 3533 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:06.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:06 vm01 bash[20728]: cluster 2026-03-09T16:09:05.141301+0000 mon.a (mon.0) 3534 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T16:09:06.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:06 vm01 bash[20728]: cluster 2026-03-09T16:09:05.141301+0000 mon.a (mon.0) 3534 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T16:09:06.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:06 vm01 bash[20728]: audit 2026-03-09T16:09:05.145332+0000 mon.c (mon.2) 609 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:06.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:06 vm01 bash[20728]: audit 2026-03-09T16:09:05.145332+0000 mon.c (mon.2) 609 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:06.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:06 vm01 bash[20728]: audit 2026-03-09T16:09:05.158658+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:06.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:06 vm01 bash[20728]: audit 2026-03-09T16:09:05.158658+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:06 vm09 bash[22983]: cluster 2026-03-09T16:09:04.855017+0000 mgr.y (mgr.14520) 586 : cluster [DBG] pgmap v1033: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:06 vm09 bash[22983]: cluster 2026-03-09T16:09:04.855017+0000 mgr.y (mgr.14520) 586 : cluster [DBG] pgmap v1033: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:06 vm09 bash[22983]: audit 2026-03-09T16:09:05.137938+0000 mon.a (mon.0) 3533 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:06 vm09 bash[22983]: audit 2026-03-09T16:09:05.137938+0000 mon.a (mon.0) 3533 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:06 vm09 bash[22983]: cluster 2026-03-09T16:09:05.141301+0000 mon.a (mon.0) 3534 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T16:09:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:06 vm09 bash[22983]: cluster 2026-03-09T16:09:05.141301+0000 mon.a (mon.0) 3534 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in 2026-03-09T16:09:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:06 vm09 bash[22983]: audit 2026-03-09T16:09:05.145332+0000 mon.c (mon.2) 609 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:06 vm09 bash[22983]: audit 2026-03-09T16:09:05.145332+0000 mon.c (mon.2) 609 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:06 vm09 bash[22983]: audit 2026-03-09T16:09:05.158658+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:06 vm09 bash[22983]: audit 2026-03-09T16:09:05.158658+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]: dispatch 2026-03-09T16:09:07.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:09:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: cluster 2026-03-09T16:09:06.153747+0000 mon.a (mon.0) 3536 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: cluster 2026-03-09T16:09:06.153747+0000 mon.a (mon.0) 3536 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: audit 2026-03-09T16:09:06.164717+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]': finished 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: audit 2026-03-09T16:09:06.164717+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]': finished 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: cluster 2026-03-09T16:09:06.173342+0000 mon.a (mon.0) 3538 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: cluster 2026-03-09T16:09:06.173342+0000 mon.a (mon.0) 3538 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: audit 2026-03-09T16:09:06.248278+0000 mon.c (mon.2) 610 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: audit 2026-03-09T16:09:06.248278+0000 mon.c (mon.2) 610 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: audit 2026-03-09T16:09:06.248691+0000 mon.a (mon.0) 3539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: audit 2026-03-09T16:09:06.248691+0000 mon.a (mon.0) 3539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: audit 2026-03-09T16:09:06.819885+0000 mgr.y (mgr.14520) 587 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: audit 2026-03-09T16:09:06.819885+0000 mgr.y (mgr.14520) 587 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: cluster 2026-03-09T16:09:06.855339+0000 mgr.y (mgr.14520) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:07 vm01 bash[28152]: cluster 2026-03-09T16:09:06.855339+0000 mgr.y (mgr.14520) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: cluster 2026-03-09T16:09:06.153747+0000 mon.a (mon.0) 3536 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:07.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: cluster 2026-03-09T16:09:06.153747+0000 mon.a (mon.0) 3536 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 
bash[20728]: audit 2026-03-09T16:09:06.164717+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]': finished 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: audit 2026-03-09T16:09:06.164717+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]': finished 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: cluster 2026-03-09T16:09:06.173342+0000 mon.a (mon.0) 3538 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: cluster 2026-03-09T16:09:06.173342+0000 mon.a (mon.0) 3538 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: audit 2026-03-09T16:09:06.248278+0000 mon.c (mon.2) 610 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: audit 2026-03-09T16:09:06.248278+0000 mon.c (mon.2) 610 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: audit 2026-03-09T16:09:06.248691+0000 mon.a (mon.0) 3539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: audit 2026-03-09T16:09:06.248691+0000 mon.a (mon.0) 3539 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: audit 2026-03-09T16:09:06.819885+0000 mgr.y (mgr.14520) 587 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: audit 2026-03-09T16:09:06.819885+0000 mgr.y (mgr.14520) 587 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: cluster 2026-03-09T16:09:06.855339+0000 mgr.y (mgr.14520) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:07.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:07 vm01 bash[20728]: cluster 2026-03-09T16:09:06.855339+0000 mgr.y (mgr.14520) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: cluster 2026-03-09T16:09:06.153747+0000 mon.a (mon.0) 3536 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: cluster 2026-03-09T16:09:06.153747+0000 mon.a (mon.0) 3536 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: audit 2026-03-09T16:09:06.164717+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]': finished 2026-03-09T16:09:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: audit 2026-03-09T16:09:06.164717+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-136", "mode": "writeback"}]': finished 2026-03-09T16:09:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: cluster 2026-03-09T16:09:06.173342+0000 mon.a (mon.0) 3538 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T16:09:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: cluster 2026-03-09T16:09:06.173342+0000 mon.a (mon.0) 3538 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in 2026-03-09T16:09:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: audit 2026-03-09T16:09:06.248278+0000 mon.c (mon.2) 610 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: audit 2026-03-09T16:09:06.248278+0000 mon.c (mon.2) 610 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: audit 2026-03-09T16:09:06.248691+0000 mon.a (mon.0) 3539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: audit 2026-03-09T16:09:06.248691+0000 mon.a (mon.0) 3539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: audit 2026-03-09T16:09:06.819885+0000 mgr.y (mgr.14520) 587 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: audit 2026-03-09T16:09:06.819885+0000 mgr.y (mgr.14520) 587 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: cluster 2026-03-09T16:09:06.855339+0000 mgr.y (mgr.14520) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:07.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:07 vm09 bash[22983]: cluster 2026-03-09T16:09:06.855339+0000 mgr.y (mgr.14520) 588 : cluster [DBG] pgmap v1036: 268 pgs: 268 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:08 vm01 bash[28152]: audit 2026-03-09T16:09:07.172604+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:08 vm01 bash[28152]: audit 2026-03-09T16:09:07.172604+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:08 vm01 bash[28152]: cluster 2026-03-09T16:09:07.176261+0000 mon.a (mon.0) 3541 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:08 vm01 bash[28152]: cluster 2026-03-09T16:09:07.176261+0000 mon.a (mon.0) 3541 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:08 vm01 bash[28152]: audit 2026-03-09T16:09:07.180569+0000 mon.c (mon.2) 611 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:08 vm01 bash[28152]: audit 2026-03-09T16:09:07.180569+0000 mon.c (mon.2) 611 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:08 vm01 bash[28152]: audit 2026-03-09T16:09:07.181426+0000 mon.a (mon.0) 3542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:08 vm01 bash[28152]: audit 2026-03-09T16:09:07.181426+0000 mon.a (mon.0) 3542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:08 vm01 bash[20728]: audit 2026-03-09T16:09:07.172604+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:08 vm01 bash[20728]: audit 2026-03-09T16:09:07.172604+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:08 vm01 bash[20728]: cluster 2026-03-09T16:09:07.176261+0000 mon.a (mon.0) 3541 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:08 vm01 bash[20728]: cluster 2026-03-09T16:09:07.176261+0000 mon.a (mon.0) 3541 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:08 vm01 bash[20728]: audit 2026-03-09T16:09:07.180569+0000 mon.c (mon.2) 611 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:08 vm01 bash[20728]: audit 2026-03-09T16:09:07.180569+0000 mon.c (mon.2) 611 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:08 vm01 bash[20728]: audit 2026-03-09T16:09:07.181426+0000 mon.a (mon.0) 3542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:08 vm01 bash[20728]: audit 2026-03-09T16:09:07.181426+0000 mon.a (mon.0) 3542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:08 vm09 bash[22983]: audit 2026-03-09T16:09:07.172604+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:08 vm09 bash[22983]: audit 2026-03-09T16:09:07.172604+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:08 vm09 bash[22983]: cluster 2026-03-09T16:09:07.176261+0000 mon.a (mon.0) 3541 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T16:09:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:08 vm09 bash[22983]: cluster 2026-03-09T16:09:07.176261+0000 mon.a (mon.0) 3541 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in 2026-03-09T16:09:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:08 vm09 bash[22983]: audit 2026-03-09T16:09:07.180569+0000 mon.c (mon.2) 611 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:08 vm09 bash[22983]: audit 2026-03-09T16:09:07.180569+0000 mon.c (mon.2) 611 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:08 vm09 bash[22983]: audit 2026-03-09T16:09:07.181426+0000 mon.a (mon.0) 3542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:08 vm09 bash[22983]: audit 2026-03-09T16:09:07.181426+0000 mon.a (mon.0) 3542 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]: dispatch 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:09 vm01 bash[28152]: cluster 2026-03-09T16:09:08.173759+0000 mon.a (mon.0) 3543 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:09 vm01 bash[28152]: cluster 2026-03-09T16:09:08.173759+0000 mon.a (mon.0) 3543 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:09 vm01 bash[28152]: audit 2026-03-09T16:09:08.193568+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:09 vm01 bash[28152]: audit 2026-03-09T16:09:08.193568+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:09 vm01 bash[28152]: cluster 2026-03-09T16:09:08.207268+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:09 vm01 bash[28152]: cluster 2026-03-09T16:09:08.207268+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:09 vm01 bash[28152]: cluster 2026-03-09T16:09:08.855923+0000 mgr.y (mgr.14520) 589 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 977 KiB/s wr, 0 op/s 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:09 vm01 bash[28152]: cluster 2026-03-09T16:09:08.855923+0000 mgr.y (mgr.14520) 589 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 977 KiB/s wr, 0 op/s 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:09 vm01 bash[28152]: cluster 2026-03-09T16:09:09.055649+0000 mon.a (mon.0) 3546 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:09 vm01 bash[28152]: cluster 2026-03-09T16:09:09.055649+0000 mon.a (mon.0) 3546 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:09 vm01 bash[20728]: cluster 2026-03-09T16:09:08.173759+0000 mon.a (mon.0) 3543 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:09 vm01 bash[20728]: cluster 2026-03-09T16:09:08.173759+0000 mon.a (mon.0) 3543 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:09 vm01 bash[20728]: audit 2026-03-09T16:09:08.193568+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:09 vm01 bash[20728]: audit 2026-03-09T16:09:08.193568+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:09.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:09 vm01 bash[20728]: cluster 2026-03-09T16:09:08.207268+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T16:09:09.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:09 vm01 bash[20728]: cluster 2026-03-09T16:09:08.207268+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T16:09:09.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:09 vm01 bash[20728]: cluster 2026-03-09T16:09:08.855923+0000 mgr.y (mgr.14520) 589 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 977 KiB/s wr, 0 op/s 2026-03-09T16:09:09.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:09 vm01 bash[20728]: cluster 2026-03-09T16:09:08.855923+0000 mgr.y (mgr.14520) 589 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 977 KiB/s wr, 0 op/s 2026-03-09T16:09:09.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:09 vm01 bash[20728]: cluster 2026-03-09T16:09:09.055649+0000 mon.a (mon.0) 3546 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:09.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:09 vm01 bash[20728]: cluster 2026-03-09T16:09:09.055649+0000 mon.a (mon.0) 3546 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:09 vm09 bash[22983]: cluster 2026-03-09T16:09:08.173759+0000 mon.a (mon.0) 3543 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:09 vm09 bash[22983]: cluster 2026-03-09T16:09:08.173759+0000 mon.a (mon.0) 3543 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:09 vm09 bash[22983]: audit 2026-03-09T16:09:08.193568+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:09 vm09 bash[22983]: audit 2026-03-09T16:09:08.193568+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-136"}]': finished 2026-03-09T16:09:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:09 vm09 bash[22983]: cluster 2026-03-09T16:09:08.207268+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T16:09:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:09 vm09 bash[22983]: cluster 2026-03-09T16:09:08.207268+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-09T16:09:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:09 vm09 bash[22983]: cluster 2026-03-09T16:09:08.855923+0000 mgr.y (mgr.14520) 589 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 977 KiB/s wr, 0 op/s 2026-03-09T16:09:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:09 vm09 bash[22983]: cluster 2026-03-09T16:09:08.855923+0000 mgr.y (mgr.14520) 589 : cluster [DBG] pgmap v1039: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 977 KiB/s wr, 0 op/s 2026-03-09T16:09:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:09 vm09 bash[22983]: cluster 2026-03-09T16:09:09.055649+0000 mon.a (mon.0) 3546 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:09.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:09 vm09 bash[22983]: cluster 2026-03-09T16:09:09.055649+0000 mon.a (mon.0) 3546 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:09:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:10 vm09 bash[22983]: cluster 2026-03-09T16:09:09.218355+0000 mon.a (mon.0) 3547 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T16:09:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:10 vm09 bash[22983]: cluster 2026-03-09T16:09:09.218355+0000 mon.a (mon.0) 3547 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T16:09:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:10 vm01 bash[28152]: cluster 2026-03-09T16:09:09.218355+0000 mon.a (mon.0) 3547 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T16:09:10.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:10 vm01 bash[28152]: cluster 2026-03-09T16:09:09.218355+0000 mon.a (mon.0) 3547 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T16:09:10.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:10 vm01 bash[20728]: cluster 2026-03-09T16:09:09.218355+0000 mon.a (mon.0) 3547 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T16:09:10.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:10 vm01 bash[20728]: cluster 2026-03-09T16:09:09.218355+0000 mon.a (mon.0) 3547 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-09T16:09:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:11 vm09 bash[22983]: cluster 2026-03-09T16:09:10.329784+0000 mon.a (mon.0) 3548 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T16:09:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:11 vm09 bash[22983]: cluster 2026-03-09T16:09:10.329784+0000 mon.a (mon.0) 3548 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T16:09:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:11 vm09 bash[22983]: audit 2026-03-09T16:09:10.331497+0000 mon.c (mon.2) 612 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:11 vm09 bash[22983]: audit 2026-03-09T16:09:10.331497+0000 mon.c (mon.2) 612 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:11 vm09 bash[22983]: audit 2026-03-09T16:09:10.333685+0000 mon.a (mon.0) 3549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:11 vm09 bash[22983]: audit 2026-03-09T16:09:10.333685+0000 mon.a (mon.0) 3549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:11 vm09 bash[22983]: cluster 2026-03-09T16:09:10.856449+0000 mgr.y (mgr.14520) 590 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:09:11.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:11 vm09 bash[22983]: cluster 2026-03-09T16:09:10.856449+0000 mgr.y (mgr.14520) 590 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:09:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:11 vm01 bash[28152]: cluster 2026-03-09T16:09:10.329784+0000 mon.a (mon.0) 3548 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T16:09:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:11 vm01 bash[28152]: cluster 2026-03-09T16:09:10.329784+0000 mon.a (mon.0) 3548 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T16:09:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:11 vm01 bash[28152]: audit 2026-03-09T16:09:10.331497+0000 mon.c (mon.2) 612 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:11 vm01 bash[28152]: audit 2026-03-09T16:09:10.331497+0000 mon.c (mon.2) 612 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:11 vm01 bash[28152]: audit 2026-03-09T16:09:10.333685+0000 mon.a (mon.0) 3549 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:11 vm01 bash[28152]: audit 2026-03-09T16:09:10.333685+0000 mon.a (mon.0) 3549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:11 vm01 bash[28152]: cluster 2026-03-09T16:09:10.856449+0000 mgr.y (mgr.14520) 590 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:09:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:11 vm01 bash[28152]: cluster 2026-03-09T16:09:10.856449+0000 mgr.y (mgr.14520) 590 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:09:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:11 vm01 bash[20728]: cluster 2026-03-09T16:09:10.329784+0000 mon.a (mon.0) 3548 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T16:09:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:11 vm01 bash[20728]: cluster 2026-03-09T16:09:10.329784+0000 mon.a (mon.0) 3548 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-09T16:09:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:11 vm01 bash[20728]: audit 2026-03-09T16:09:10.331497+0000 mon.c (mon.2) 612 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:11 vm01 bash[20728]: audit 2026-03-09T16:09:10.331497+0000 mon.c (mon.2) 612 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:11 vm01 bash[20728]: audit 2026-03-09T16:09:10.333685+0000 mon.a (mon.0) 3549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:11 vm01 bash[20728]: audit 2026-03-09T16:09:10.333685+0000 mon.a (mon.0) 3549 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:11 vm01 bash[20728]: cluster 2026-03-09T16:09:10.856449+0000 mgr.y (mgr.14520) 590 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:09:11.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:11 vm01 bash[20728]: cluster 2026-03-09T16:09:10.856449+0000 mgr.y (mgr.14520) 590 : cluster [DBG] pgmap v1042: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T16:09:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:12 vm09 bash[22983]: audit 2026-03-09T16:09:11.336512+0000 mon.a (mon.0) 3550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:12 vm09 bash[22983]: audit 2026-03-09T16:09:11.336512+0000 mon.a (mon.0) 3550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:12 vm09 bash[22983]: cluster 2026-03-09T16:09:11.341415+0000 mon.a (mon.0) 3551 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T16:09:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:12 vm09 bash[22983]: cluster 2026-03-09T16:09:11.341415+0000 mon.a (mon.0) 3551 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T16:09:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:12 vm09 bash[22983]: audit 2026-03-09T16:09:11.406718+0000 mon.c (mon.2) 613 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:12 vm09 bash[22983]: audit 2026-03-09T16:09:11.406718+0000 mon.c (mon.2) 613 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:12 vm09 bash[22983]: audit 2026-03-09T16:09:11.407010+0000 mon.a (mon.0) 3552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:12.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:12 vm09 bash[22983]: audit 2026-03-09T16:09:11.407010+0000 mon.a (mon.0) 3552 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:12 vm01 bash[28152]: audit 2026-03-09T16:09:11.336512+0000 mon.a (mon.0) 3550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:12 vm01 bash[28152]: audit 2026-03-09T16:09:11.336512+0000 mon.a (mon.0) 3550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:12 vm01 bash[28152]: cluster 2026-03-09T16:09:11.341415+0000 mon.a (mon.0) 3551 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:12 vm01 bash[28152]: cluster 2026-03-09T16:09:11.341415+0000 mon.a (mon.0) 3551 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:12 vm01 bash[28152]: audit 2026-03-09T16:09:11.406718+0000 mon.c (mon.2) 613 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:12 vm01 bash[28152]: audit 2026-03-09T16:09:11.406718+0000 mon.c (mon.2) 613 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:12 vm01 bash[28152]: audit 2026-03-09T16:09:11.407010+0000 mon.a (mon.0) 3552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:12 vm01 bash[28152]: audit 2026-03-09T16:09:11.407010+0000 mon.a (mon.0) 3552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:12 vm01 bash[20728]: audit 2026-03-09T16:09:11.336512+0000 mon.a (mon.0) 3550 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:12 vm01 bash[20728]: audit 2026-03-09T16:09:11.336512+0000 mon.a (mon.0) 3550 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:12 vm01 bash[20728]: cluster 2026-03-09T16:09:11.341415+0000 mon.a (mon.0) 3551 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:12 vm01 bash[20728]: cluster 2026-03-09T16:09:11.341415+0000 mon.a (mon.0) 3551 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:12 vm01 bash[20728]: audit 2026-03-09T16:09:11.406718+0000 mon.c (mon.2) 613 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:12 vm01 bash[20728]: audit 2026-03-09T16:09:11.406718+0000 mon.c (mon.2) 613 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:12 vm01 bash[20728]: audit 2026-03-09T16:09:11.407010+0000 mon.a (mon.0) 3552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:12.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:12 vm01 bash[20728]: audit 2026-03-09T16:09:11.407010+0000 mon.a (mon.0) 3552 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:13.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:09:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:09:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:13 vm01 bash[28152]: audit 2026-03-09T16:09:12.376392+0000 mon.a (mon.0) 3553 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:13 vm01 bash[28152]: audit 2026-03-09T16:09:12.376392+0000 mon.a (mon.0) 3553 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:13 vm01 bash[28152]: cluster 2026-03-09T16:09:12.378639+0000 mon.a (mon.0) 3554 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:13 vm01 bash[28152]: cluster 2026-03-09T16:09:12.378639+0000 mon.a (mon.0) 3554 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:13 vm01 bash[28152]: audit 2026-03-09T16:09:12.381523+0000 mon.c (mon.2) 614 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:13 vm01 bash[28152]: audit 2026-03-09T16:09:12.381523+0000 mon.c (mon.2) 614 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:13 vm01 bash[28152]: audit 2026-03-09T16:09:12.381812+0000 mon.a (mon.0) 3555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:13 vm01 bash[28152]: audit 2026-03-09T16:09:12.381812+0000 mon.a (mon.0) 3555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:13 vm01 bash[28152]: cluster 2026-03-09T16:09:12.857429+0000 mgr.y (mgr.14520) 591 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:13 vm01 bash[28152]: cluster 2026-03-09T16:09:12.857429+0000 mgr.y (mgr.14520) 591 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:13 vm01 bash[20728]: audit 2026-03-09T16:09:12.376392+0000 mon.a (mon.0) 3553 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:13 vm01 bash[20728]: audit 2026-03-09T16:09:12.376392+0000 mon.a (mon.0) 3553 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:13 vm01 bash[20728]: cluster 2026-03-09T16:09:12.378639+0000 mon.a (mon.0) 3554 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:13 vm01 bash[20728]: cluster 2026-03-09T16:09:12.378639+0000 mon.a (mon.0) 3554 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:13 vm01 bash[20728]: audit 2026-03-09T16:09:12.381523+0000 mon.c (mon.2) 614 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:13 vm01 bash[20728]: audit 2026-03-09T16:09:12.381523+0000 mon.c (mon.2) 614 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:13 vm01 bash[20728]: audit 2026-03-09T16:09:12.381812+0000 mon.a (mon.0) 3555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:13 vm01 bash[20728]: audit 2026-03-09T16:09:12.381812+0000 mon.a (mon.0) 3555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:13 vm01 bash[20728]: cluster 2026-03-09T16:09:12.857429+0000 mgr.y (mgr.14520) 591 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:13.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:13 vm01 bash[20728]: cluster 2026-03-09T16:09:12.857429+0000 mgr.y (mgr.14520) 591 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:13.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:13 vm09 bash[22983]: audit 2026-03-09T16:09:12.376392+0000 mon.a (mon.0) 3553 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:13 vm09 bash[22983]: audit 2026-03-09T16:09:12.376392+0000 mon.a (mon.0) 3553 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:13 vm09 bash[22983]: cluster 2026-03-09T16:09:12.378639+0000 mon.a (mon.0) 3554 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T16:09:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:13 vm09 bash[22983]: cluster 2026-03-09T16:09:12.378639+0000 mon.a (mon.0) 3554 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-09T16:09:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:13 vm09 bash[22983]: audit 2026-03-09T16:09:12.381523+0000 mon.c (mon.2) 614 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:13 vm09 bash[22983]: audit 2026-03-09T16:09:12.381523+0000 mon.c (mon.2) 614 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:13 vm09 bash[22983]: audit 2026-03-09T16:09:12.381812+0000 mon.a (mon.0) 3555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:13 vm09 bash[22983]: audit 2026-03-09T16:09:12.381812+0000 mon.a (mon.0) 3555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:13 vm09 bash[22983]: cluster 2026-03-09T16:09:12.857429+0000 mgr.y (mgr.14520) 591 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:13.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:13 vm09 bash[22983]: cluster 2026-03-09T16:09:12.857429+0000 mgr.y (mgr.14520) 591 : cluster [DBG] pgmap v1045: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:14 vm01 bash[28152]: audit 2026-03-09T16:09:13.388501+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:14 vm01 bash[28152]: audit 2026-03-09T16:09:13.388501+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:14 vm01 bash[28152]: audit 2026-03-09T16:09:13.401460+0000 mon.c (mon.2) 615 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:14 vm01 bash[28152]: audit 2026-03-09T16:09:13.401460+0000 mon.c (mon.2) 615 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:14 vm01 bash[28152]: cluster 2026-03-09T16:09:13.407957+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:14 vm01 bash[28152]: cluster 2026-03-09T16:09:13.407957+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:14 vm01 bash[28152]: audit 2026-03-09T16:09:13.410727+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:14 vm01 bash[28152]: audit 2026-03-09T16:09:13.410727+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:14 vm01 bash[20728]: audit 2026-03-09T16:09:13.388501+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:14 vm01 bash[20728]: audit 2026-03-09T16:09:13.388501+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:14 vm01 bash[20728]: audit 2026-03-09T16:09:13.401460+0000 mon.c (mon.2) 615 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:14 vm01 bash[20728]: audit 2026-03-09T16:09:13.401460+0000 mon.c (mon.2) 615 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:14 vm01 bash[20728]: cluster 2026-03-09T16:09:13.407957+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:14 vm01 bash[20728]: cluster 2026-03-09T16:09:13.407957+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T16:09:14.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:14 vm01 bash[20728]: audit 2026-03-09T16:09:13.410727+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:14.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:14 vm01 bash[20728]: audit 2026-03-09T16:09:13.410727+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:14.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:14 vm09 bash[22983]: audit 2026-03-09T16:09:13.388501+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:14 vm09 bash[22983]: audit 2026-03-09T16:09:13.388501+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:14 vm09 bash[22983]: audit 2026-03-09T16:09:13.401460+0000 mon.c (mon.2) 615 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:14 vm09 bash[22983]: audit 2026-03-09T16:09:13.401460+0000 mon.c (mon.2) 615 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:14 vm09 bash[22983]: cluster 2026-03-09T16:09:13.407957+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T16:09:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:14 vm09 bash[22983]: cluster 2026-03-09T16:09:13.407957+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-09T16:09:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:14 vm09 bash[22983]: audit 2026-03-09T16:09:13.410727+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:14.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:14 vm09 bash[22983]: audit 2026-03-09T16:09:13.410727+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: audit 2026-03-09T16:09:14.392527+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: audit 2026-03-09T16:09:14.392527+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: cluster 2026-03-09T16:09:14.397256+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T16:09:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: cluster 2026-03-09T16:09:14.397256+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T16:09:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: audit 2026-03-09T16:09:14.402169+0000 mon.c (mon.2) 616 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: audit 2026-03-09T16:09:14.402169+0000 mon.c (mon.2) 616 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: audit 2026-03-09T16:09:14.424138+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: audit 2026-03-09T16:09:14.424138+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: audit 2026-03-09T16:09:14.496254+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: audit 2026-03-09T16:09:14.496254+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: audit 2026-03-09T16:09:14.498291+0000 mon.a (mon.0) 3563 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: audit 2026-03-09T16:09:14.498291+0000 mon.a (mon.0) 3563 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: cluster 2026-03-09T16:09:14.857879+0000 mgr.y (mgr.14520) 592 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:15 vm01 bash[28152]: cluster 2026-03-09T16:09:14.857879+0000 mgr.y (mgr.14520) 592 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: audit 2026-03-09T16:09:14.392527+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: audit 2026-03-09T16:09:14.392527+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: cluster 2026-03-09T16:09:14.397256+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: cluster 2026-03-09T16:09:14.397256+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: audit 2026-03-09T16:09:14.402169+0000 mon.c (mon.2) 616 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: audit 2026-03-09T16:09:14.402169+0000 mon.c (mon.2) 616 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: audit 2026-03-09T16:09:14.424138+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: audit 2026-03-09T16:09:14.424138+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: audit 2026-03-09T16:09:14.496254+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: audit 2026-03-09T16:09:14.496254+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: audit 2026-03-09T16:09:14.498291+0000 mon.a (mon.0) 3563 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: audit 2026-03-09T16:09:14.498291+0000 mon.a (mon.0) 3563 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: cluster 2026-03-09T16:09:14.857879+0000 mgr.y (mgr.14520) 592 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:15.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:15 vm01 bash[20728]: cluster 2026-03-09T16:09:14.857879+0000 mgr.y (mgr.14520) 592 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:15.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: audit 2026-03-09T16:09:14.392527+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: audit 2026-03-09T16:09:14.392527+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: cluster 2026-03-09T16:09:14.397256+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: cluster 2026-03-09T16:09:14.397256+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: audit 2026-03-09T16:09:14.402169+0000 mon.c (mon.2) 616 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: audit 2026-03-09T16:09:14.402169+0000 mon.c (mon.2) 616 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: audit 2026-03-09T16:09:14.424138+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: audit 2026-03-09T16:09:14.424138+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: audit 2026-03-09T16:09:14.496254+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: audit 2026-03-09T16:09:14.496254+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: audit 2026-03-09T16:09:14.498291+0000 mon.a (mon.0) 3563 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: audit 2026-03-09T16:09:14.498291+0000 mon.a (mon.0) 3563 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: cluster 2026-03-09T16:09:14.857879+0000 mgr.y (mgr.14520) 592 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:15.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:15 vm09 bash[22983]: cluster 2026-03-09T16:09:14.857879+0000 mgr.y (mgr.14520) 592 : cluster [DBG] pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:16.824 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:16 vm09 bash[22983]: audit 2026-03-09T16:09:15.433440+0000 mon.a (mon.0) 3564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:09:16.824 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:16 vm09 bash[22983]: audit 2026-03-09T16:09:15.433440+0000 mon.a (mon.0) 3564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:09:16.824 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:16 vm09 bash[22983]: cluster 2026-03-09T16:09:15.435960+0000 mon.a (mon.0) 3565 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T16:09:16.824 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:16 vm09 bash[22983]: cluster 2026-03-09T16:09:15.435960+0000 mon.a (mon.0) 3565 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T16:09:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:16 vm01 bash[28152]: audit 2026-03-09T16:09:15.433440+0000 mon.a (mon.0) 3564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:09:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:16 vm01 bash[28152]: audit 2026-03-09T16:09:15.433440+0000 mon.a (mon.0) 3564 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:09:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:16 vm01 bash[28152]: cluster 2026-03-09T16:09:15.435960+0000 mon.a (mon.0) 3565 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T16:09:16.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:16 vm01 bash[28152]: cluster 2026-03-09T16:09:15.435960+0000 mon.a (mon.0) 3565 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T16:09:16.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:16 vm01 bash[20728]: audit 2026-03-09T16:09:15.433440+0000 mon.a (mon.0) 3564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:09:16.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:16 vm01 bash[20728]: audit 2026-03-09T16:09:15.433440+0000 mon.a (mon.0) 3564 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T16:09:16.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:16 vm01 bash[20728]: cluster 2026-03-09T16:09:15.435960+0000 mon.a (mon.0) 3565 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T16:09:16.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:16 vm01 bash[20728]: cluster 2026-03-09T16:09:15.435960+0000 mon.a (mon.0) 3565 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-09T16:09:17.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:09:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: audit 2026-03-09T16:09:16.455125+0000 mon.c (mon.2) 617 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: audit 2026-03-09T16:09:16.455125+0000 mon.c (mon.2) 617 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: audit 2026-03-09T16:09:16.455424+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: audit 2026-03-09T16:09:16.455424+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: audit 2026-03-09T16:09:16.455837+0000 mon.c (mon.2) 618 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: audit 2026-03-09T16:09:16.455837+0000 mon.c (mon.2) 618 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: audit 2026-03-09T16:09:16.456001+0000 mon.a (mon.0) 3567 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: audit 2026-03-09T16:09:16.456001+0000 mon.a (mon.0) 3567 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: audit 2026-03-09T16:09:16.828083+0000 mgr.y (mgr.14520) 593 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: audit 2026-03-09T16:09:16.828083+0000 mgr.y (mgr.14520) 593 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: cluster 2026-03-09T16:09:16.858255+0000 mgr.y (mgr.14520) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:17.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:17 vm09 bash[22983]: cluster 2026-03-09T16:09:16.858255+0000 mgr.y (mgr.14520) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: audit 2026-03-09T16:09:16.455125+0000 mon.c (mon.2) 617 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: audit 2026-03-09T16:09:16.455125+0000 mon.c (mon.2) 617 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: audit 2026-03-09T16:09:16.455424+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: audit 2026-03-09T16:09:16.455424+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: audit 2026-03-09T16:09:16.455837+0000 mon.c (mon.2) 618 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: audit 2026-03-09T16:09:16.455837+0000 mon.c (mon.2) 618 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: audit 2026-03-09T16:09:16.456001+0000 mon.a (mon.0) 3567 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: audit 2026-03-09T16:09:16.456001+0000 mon.a (mon.0) 3567 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: audit 2026-03-09T16:09:16.828083+0000 mgr.y (mgr.14520) 593 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: audit 2026-03-09T16:09:16.828083+0000 mgr.y (mgr.14520) 593 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: cluster 2026-03-09T16:09:16.858255+0000 mgr.y (mgr.14520) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:17 vm01 bash[28152]: cluster 2026-03-09T16:09:16.858255+0000 mgr.y (mgr.14520) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: audit 2026-03-09T16:09:16.455125+0000 mon.c (mon.2) 617 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: audit 2026-03-09T16:09:16.455125+0000 mon.c (mon.2) 617 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: audit 2026-03-09T16:09:16.455424+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: audit 2026-03-09T16:09:16.455424+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: audit 2026-03-09T16:09:16.455837+0000 mon.c (mon.2) 618 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: audit 2026-03-09T16:09:16.455837+0000 mon.c (mon.2) 618 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: audit 2026-03-09T16:09:16.456001+0000 mon.a (mon.0) 3567 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: audit 2026-03-09T16:09:16.456001+0000 mon.a (mon.0) 3567 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: audit 2026-03-09T16:09:16.828083+0000 mgr.y (mgr.14520) 593 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: audit 2026-03-09T16:09:16.828083+0000 mgr.y (mgr.14520) 593 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: cluster 2026-03-09T16:09:16.858255+0000 mgr.y (mgr.14520) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:17.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:17 vm01 bash[20728]: cluster 2026-03-09T16:09:16.858255+0000 mgr.y (mgr.14520) 594 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:18.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:18 vm09 bash[22983]: audit 2026-03-09T16:09:17.439380+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]': finished 2026-03-09T16:09:18.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:18 vm09 bash[22983]: audit 2026-03-09T16:09:17.439380+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]': finished 2026-03-09T16:09:18.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:18 vm09 bash[22983]: cluster 2026-03-09T16:09:17.443031+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T16:09:18.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:18 vm09 bash[22983]: cluster 2026-03-09T16:09:17.443031+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T16:09:18.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:18 vm01 bash[28152]: audit 2026-03-09T16:09:17.439380+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]': finished 2026-03-09T16:09:18.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:18 vm01 bash[28152]: audit 2026-03-09T16:09:17.439380+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]': finished 2026-03-09T16:09:18.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:18 vm01 bash[28152]: cluster 2026-03-09T16:09:17.443031+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T16:09:18.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:18 vm01 bash[28152]: cluster 2026-03-09T16:09:17.443031+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T16:09:18.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:18 vm01 bash[20728]: audit 2026-03-09T16:09:17.439380+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]': finished 2026-03-09T16:09:18.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:18 vm01 bash[20728]: audit 2026-03-09T16:09:17.439380+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-138"}]': finished 2026-03-09T16:09:18.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:18 vm01 bash[20728]: cluster 2026-03-09T16:09:17.443031+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T16:09:18.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:18 vm01 bash[20728]: cluster 2026-03-09T16:09:17.443031+0000 mon.a (mon.0) 3569 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-09T16:09:19.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:19 vm09 bash[22983]: cluster 2026-03-09T16:09:18.459982+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T16:09:19.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:19 vm09 bash[22983]: cluster 2026-03-09T16:09:18.459982+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T16:09:19.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:19 vm09 bash[22983]: cluster 2026-03-09T16:09:18.858776+0000 mgr.y (mgr.14520) 595 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:19.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:19 vm09 bash[22983]: cluster 2026-03-09T16:09:18.858776+0000 mgr.y (mgr.14520) 595 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:19.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:19 vm01 bash[28152]: cluster 2026-03-09T16:09:18.459982+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T16:09:19.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:19 vm01 bash[28152]: cluster 2026-03-09T16:09:18.459982+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T16:09:19.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:19 vm01 bash[28152]: cluster 2026-03-09T16:09:18.858776+0000 mgr.y (mgr.14520) 595 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:19.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:19 vm01 bash[28152]: cluster 2026-03-09T16:09:18.858776+0000 mgr.y (mgr.14520) 595 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:19 vm01 bash[20728]: cluster 2026-03-09T16:09:18.459982+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T16:09:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:19 vm01 bash[20728]: cluster 2026-03-09T16:09:18.459982+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-09T16:09:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:19 vm01 bash[20728]: cluster 2026-03-09T16:09:18.858776+0000 mgr.y (mgr.14520) 595 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:19.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:19 vm01 bash[20728]: cluster 2026-03-09T16:09:18.858776+0000 mgr.y (mgr.14520) 595 : cluster [DBG] pgmap v1053: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:20.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:20 vm09 bash[22983]: cluster 2026-03-09T16:09:19.485977+0000 mon.a (mon.0) 
3571 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T16:09:20.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:20 vm09 bash[22983]: cluster 2026-03-09T16:09:19.485977+0000 mon.a (mon.0) 3571 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T16:09:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:20 vm09 bash[22983]: audit 2026-03-09T16:09:19.517236+0000 mon.c (mon.2) 619 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:20 vm09 bash[22983]: audit 2026-03-09T16:09:19.517236+0000 mon.c (mon.2) 619 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:20 vm09 bash[22983]: audit 2026-03-09T16:09:19.517607+0000 mon.a (mon.0) 3572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:20.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:20 vm09 bash[22983]: audit 2026-03-09T16:09:19.517607+0000 mon.a (mon.0) 3572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:20 vm01 bash[28152]: cluster 2026-03-09T16:09:19.485977+0000 mon.a (mon.0) 3571 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:20 vm01 bash[28152]: cluster 2026-03-09T16:09:19.485977+0000 mon.a (mon.0) 3571 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:20 vm01 bash[28152]: audit 2026-03-09T16:09:19.517236+0000 mon.c (mon.2) 619 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:20 vm01 bash[28152]: audit 2026-03-09T16:09:19.517236+0000 mon.c (mon.2) 619 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:20 vm01 bash[28152]: audit 2026-03-09T16:09:19.517607+0000 mon.a (mon.0) 3572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:20 vm01 bash[28152]: audit 2026-03-09T16:09:19.517607+0000 mon.a (mon.0) 3572 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:20 vm01 bash[20728]: cluster 2026-03-09T16:09:19.485977+0000 mon.a (mon.0) 3571 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:20 vm01 bash[20728]: cluster 2026-03-09T16:09:19.485977+0000 mon.a (mon.0) 3571 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:20 vm01 bash[20728]: audit 2026-03-09T16:09:19.517236+0000 mon.c (mon.2) 619 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:20 vm01 bash[20728]: audit 2026-03-09T16:09:19.517236+0000 mon.c (mon.2) 619 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:20 vm01 bash[20728]: audit 2026-03-09T16:09:19.517607+0000 mon.a (mon.0) 3572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:20.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:20 vm01 bash[20728]: audit 2026-03-09T16:09:19.517607+0000 mon.a (mon.0) 3572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:20.485977+0000 mon.a (mon.0) 3573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:20.485977+0000 mon.a (mon.0) 3573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: cluster 2026-03-09T16:09:20.508553+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: cluster 2026-03-09T16:09:20.508553+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:20.551894+0000 mon.c (mon.2) 620 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:20.551894+0000 mon.c (mon.2) 620 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:20.552299+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:20.552299+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: cluster 2026-03-09T16:09:20.859103+0000 mgr.y (mgr.14520) 596 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: cluster 2026-03-09T16:09:20.859103+0000 mgr.y (mgr.14520) 596 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:21.490266+0000 mon.a (mon.0) 3576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:21.490266+0000 mon.a (mon.0) 3576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:21.496810+0000 mon.c (mon.2) 621 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:21.496810+0000 mon.c (mon.2) 621 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: cluster 2026-03-09T16:09:21.498897+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: cluster 2026-03-09T16:09:21.498897+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:21.499761+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:21.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:21 vm09 bash[22983]: audit 2026-03-09T16:09:21.499761+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:20.485977+0000 mon.a (mon.0) 3573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:20.485977+0000 mon.a (mon.0) 3573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: cluster 2026-03-09T16:09:20.508553+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: cluster 2026-03-09T16:09:20.508553+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:20.551894+0000 mon.c (mon.2) 620 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:20.551894+0000 mon.c (mon.2) 620 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:20.552299+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:20.552299+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: cluster 2026-03-09T16:09:20.859103+0000 mgr.y (mgr.14520) 596 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: cluster 2026-03-09T16:09:20.859103+0000 mgr.y (mgr.14520) 596 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:21.490266+0000 mon.a (mon.0) 3576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:21.490266+0000 mon.a (mon.0) 3576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:21.496810+0000 mon.c (mon.2) 621 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:21.496810+0000 mon.c (mon.2) 621 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: cluster 2026-03-09T16:09:21.498897+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: cluster 2026-03-09T16:09:21.498897+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:21.499761+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:21.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:21 vm01 bash[28152]: audit 2026-03-09T16:09:21.499761+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:20.485977+0000 mon.a (mon.0) 3573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:20.485977+0000 mon.a (mon.0) 3573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: cluster 2026-03-09T16:09:20.508553+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: cluster 2026-03-09T16:09:20.508553+0000 mon.a (mon.0) 3574 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:20.551894+0000 mon.c (mon.2) 620 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:20.551894+0000 mon.c (mon.2) 620 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:20.552299+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:20.552299+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: cluster 2026-03-09T16:09:20.859103+0000 mgr.y (mgr.14520) 596 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: cluster 2026-03-09T16:09:20.859103+0000 mgr.y (mgr.14520) 596 : cluster [DBG] pgmap v1056: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:21.490266+0000 mon.a (mon.0) 3576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:21.490266+0000 mon.a (mon.0) 3576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:21.496810+0000 mon.c (mon.2) 621 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:21.496810+0000 mon.c (mon.2) 621 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: cluster 2026-03-09T16:09:21.498897+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: cluster 2026-03-09T16:09:21.498897+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:21.499761+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:21.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:21 vm01 bash[20728]: audit 2026-03-09T16:09:21.499761+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T16:09:23.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:09:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:09:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:09:24.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:23 vm09 bash[22983]: audit 2026-03-09T16:09:22.493586+0000 mon.a (mon.0) 3579 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:09:24.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:23 vm09 bash[22983]: audit 2026-03-09T16:09:22.493586+0000 mon.a (mon.0) 3579 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:09:24.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:23 vm09 bash[22983]: audit 2026-03-09T16:09:22.504371+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:23 vm09 bash[22983]: audit 2026-03-09T16:09:22.504371+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:23 vm09 bash[22983]: cluster 2026-03-09T16:09:22.507151+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T16:09:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:23 vm09 bash[22983]: cluster 2026-03-09T16:09:22.507151+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T16:09:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:23 vm09 bash[22983]: audit 2026-03-09T16:09:22.507991+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:23 vm09 bash[22983]: audit 2026-03-09T16:09:22.507991+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:23 vm09 bash[22983]: cluster 2026-03-09T16:09:22.859881+0000 mgr.y (mgr.14520) 597 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:24.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:23 vm09 bash[22983]: cluster 2026-03-09T16:09:22.859881+0000 mgr.y (mgr.14520) 597 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:23 vm01 bash[28152]: audit 2026-03-09T16:09:22.493586+0000 mon.a (mon.0) 3579 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:09:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:23 vm01 bash[28152]: audit 2026-03-09T16:09:22.493586+0000 mon.a (mon.0) 3579 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:09:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:23 vm01 bash[28152]: audit 2026-03-09T16:09:22.504371+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:23 vm01 bash[28152]: audit 2026-03-09T16:09:22.504371+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:23 vm01 bash[28152]: cluster 2026-03-09T16:09:22.507151+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T16:09:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:23 vm01 bash[28152]: cluster 2026-03-09T16:09:22.507151+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T16:09:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:23 vm01 bash[28152]: audit 2026-03-09T16:09:22.507991+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:23 vm01 bash[28152]: audit 2026-03-09T16:09:22.507991+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:23 vm01 bash[28152]: cluster 2026-03-09T16:09:22.859881+0000 mgr.y (mgr.14520) 597 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:24.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:23 vm01 bash[28152]: cluster 2026-03-09T16:09:22.859881+0000 mgr.y (mgr.14520) 597 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:23 vm01 bash[20728]: audit 2026-03-09T16:09:22.493586+0000 mon.a (mon.0) 3579 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:09:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:23 vm01 bash[20728]: audit 2026-03-09T16:09:22.493586+0000 mon.a (mon.0) 3579 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T16:09:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:23 vm01 bash[20728]: audit 2026-03-09T16:09:22.504371+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:23 vm01 bash[20728]: audit 2026-03-09T16:09:22.504371+0000 mon.c (mon.2) 622 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:23 vm01 bash[20728]: cluster 2026-03-09T16:09:22.507151+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T16:09:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:23 vm01 bash[20728]: cluster 2026-03-09T16:09:22.507151+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-09T16:09:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:23 vm01 bash[20728]: audit 2026-03-09T16:09:22.507991+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:23 vm01 bash[20728]: audit 2026-03-09T16:09:22.507991+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T16:09:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:23 vm01 bash[20728]: cluster 2026-03-09T16:09:22.859881+0000 mgr.y (mgr.14520) 597 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:24.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:23 vm01 bash[20728]: cluster 2026-03-09T16:09:22.859881+0000 mgr.y (mgr.14520) 597 : cluster [DBG] pgmap v1059: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:25.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:24 vm09 bash[22983]: audit 2026-03-09T16:09:23.730112+0000 mon.a (mon.0) 3582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:09:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:24 vm09 bash[22983]: audit 2026-03-09T16:09:23.730112+0000 mon.a (mon.0) 3582 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:09:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:24 vm09 bash[22983]: cluster 2026-03-09T16:09:23.738858+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T16:09:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:24 vm09 bash[22983]: cluster 2026-03-09T16:09:23.738858+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T16:09:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:24 vm09 bash[22983]: audit 2026-03-09T16:09:23.749514+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:24 vm09 bash[22983]: audit 2026-03-09T16:09:23.749514+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:24 vm09 bash[22983]: audit 2026-03-09T16:09:23.751322+0000 mon.a (mon.0) 3584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:25.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:24 vm09 bash[22983]: audit 2026-03-09T16:09:23.751322+0000 mon.a (mon.0) 3584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:24 vm01 bash[28152]: audit 2026-03-09T16:09:23.730112+0000 mon.a (mon.0) 3582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:24 vm01 bash[28152]: audit 2026-03-09T16:09:23.730112+0000 mon.a (mon.0) 3582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:24 vm01 bash[28152]: cluster 2026-03-09T16:09:23.738858+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:24 vm01 bash[28152]: cluster 2026-03-09T16:09:23.738858+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:24 vm01 bash[28152]: audit 2026-03-09T16:09:23.749514+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:24 vm01 bash[28152]: audit 2026-03-09T16:09:23.749514+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:24 vm01 bash[28152]: audit 2026-03-09T16:09:23.751322+0000 mon.a (mon.0) 3584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:24 vm01 bash[28152]: audit 2026-03-09T16:09:23.751322+0000 mon.a (mon.0) 3584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:24 vm01 bash[20728]: audit 2026-03-09T16:09:23.730112+0000 mon.a (mon.0) 3582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:24 vm01 bash[20728]: audit 2026-03-09T16:09:23.730112+0000 mon.a (mon.0) 3582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:24 vm01 bash[20728]: cluster 2026-03-09T16:09:23.738858+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:24 vm01 bash[20728]: cluster 2026-03-09T16:09:23.738858+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:24 vm01 bash[20728]: audit 2026-03-09T16:09:23.749514+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:24 vm01 bash[20728]: audit 2026-03-09T16:09:23.749514+0000 mon.c (mon.2) 623 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:24 vm01 bash[20728]: audit 2026-03-09T16:09:23.751322+0000 mon.a (mon.0) 3584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:25.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:24 vm01 bash[20728]: audit 2026-03-09T16:09:23.751322+0000 mon.a (mon.0) 3584 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:26 vm09 bash[22983]: audit 2026-03-09T16:09:24.808057+0000 mon.a (mon.0) 3585 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:26 vm09 bash[22983]: audit 2026-03-09T16:09:24.808057+0000 mon.a (mon.0) 3585 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:26 vm09 bash[22983]: cluster 2026-03-09T16:09:24.811949+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T16:09:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:26 vm09 bash[22983]: cluster 2026-03-09T16:09:24.811949+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T16:09:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:26 vm09 bash[22983]: audit 2026-03-09T16:09:24.822935+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:26 vm09 bash[22983]: audit 2026-03-09T16:09:24.822935+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:26 vm09 bash[22983]: audit 2026-03-09T16:09:24.824468+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:26 vm09 bash[22983]: audit 2026-03-09T16:09:24.824468+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:26 vm09 bash[22983]: cluster 2026-03-09T16:09:24.860380+0000 mgr.y (mgr.14520) 598 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:26 vm09 bash[22983]: cluster 2026-03-09T16:09:24.860380+0000 mgr.y (mgr.14520) 598 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:26 vm01 bash[28152]: audit 2026-03-09T16:09:24.808057+0000 mon.a (mon.0) 3585 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:26 vm01 bash[28152]: audit 2026-03-09T16:09:24.808057+0000 mon.a (mon.0) 3585 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:26 vm01 bash[28152]: cluster 2026-03-09T16:09:24.811949+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T16:09:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:26 vm01 bash[28152]: cluster 2026-03-09T16:09:24.811949+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T16:09:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:26 vm01 bash[28152]: audit 2026-03-09T16:09:24.822935+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:26 vm01 bash[28152]: audit 2026-03-09T16:09:24.822935+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:26 vm01 bash[28152]: audit 2026-03-09T16:09:24.824468+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:26 vm01 bash[28152]: audit 2026-03-09T16:09:24.824468+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:26 vm01 bash[28152]: cluster 2026-03-09T16:09:24.860380+0000 mgr.y (mgr.14520) 598 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:26.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:26 vm01 bash[28152]: cluster 2026-03-09T16:09:24.860380+0000 mgr.y (mgr.14520) 598 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:26 vm01 bash[20728]: audit 2026-03-09T16:09:24.808057+0000 mon.a (mon.0) 3585 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:26 vm01 bash[20728]: audit 2026-03-09T16:09:24.808057+0000 mon.a (mon.0) 3585 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:26 vm01 bash[20728]: cluster 2026-03-09T16:09:24.811949+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T16:09:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:26 vm01 bash[20728]: cluster 2026-03-09T16:09:24.811949+0000 mon.a (mon.0) 3586 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-09T16:09:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:26 vm01 bash[20728]: audit 2026-03-09T16:09:24.822935+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:26 vm01 bash[20728]: audit 2026-03-09T16:09:24.822935+0000 mon.c (mon.2) 624 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:26 vm01 bash[20728]: audit 2026-03-09T16:09:24.824468+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:26 vm01 bash[20728]: audit 2026-03-09T16:09:24.824468+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T16:09:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:26 vm01 bash[20728]: cluster 2026-03-09T16:09:24.860380+0000 mgr.y (mgr.14520) 598 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:26.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:26 vm01 bash[20728]: cluster 2026-03-09T16:09:24.860380+0000 mgr.y (mgr.14520) 598 : cluster [DBG] pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:27.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:09:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:09:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:27 vm01 bash[28152]: audit 2026-03-09T16:09:25.990382+0000 mon.a (mon.0) 3588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:09:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:27 vm01 bash[28152]: audit 2026-03-09T16:09:25.990382+0000 mon.a (mon.0) 3588 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:09:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:27 vm01 bash[28152]: cluster 2026-03-09T16:09:26.037371+0000 mon.a (mon.0) 3589 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T16:09:27.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:27 vm01 bash[28152]: cluster 2026-03-09T16:09:26.037371+0000 mon.a (mon.0) 3589 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T16:09:27.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:27 vm01 bash[20728]: audit 2026-03-09T16:09:25.990382+0000 mon.a (mon.0) 3588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:09:27.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:27 vm01 bash[20728]: audit 2026-03-09T16:09:25.990382+0000 mon.a (mon.0) 3588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:09:27.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:27 vm01 bash[20728]: cluster 2026-03-09T16:09:26.037371+0000 mon.a (mon.0) 3589 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T16:09:27.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:27 vm01 bash[20728]: cluster 2026-03-09T16:09:26.037371+0000 mon.a (mon.0) 3589 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T16:09:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:27 vm09 bash[22983]: audit 2026-03-09T16:09:25.990382+0000 mon.a (mon.0) 3588 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:09:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:27 vm09 bash[22983]: audit 2026-03-09T16:09:25.990382+0000 mon.a (mon.0) 3588 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T16:09:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:27 vm09 bash[22983]: cluster 2026-03-09T16:09:26.037371+0000 mon.a (mon.0) 3589 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T16:09:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:27 vm09 bash[22983]: cluster 2026-03-09T16:09:26.037371+0000 mon.a (mon.0) 3589 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-09T16:09:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:28 vm01 bash[28152]: audit 2026-03-09T16:09:26.838909+0000 mgr.y (mgr.14520) 599 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:28 vm01 bash[28152]: audit 2026-03-09T16:09:26.838909+0000 mgr.y (mgr.14520) 599 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:28 vm01 bash[28152]: cluster 2026-03-09T16:09:26.860698+0000 mgr.y (mgr.14520) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:28 vm01 bash[28152]: cluster 2026-03-09T16:09:26.860698+0000 mgr.y (mgr.14520) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:28 vm01 bash[20728]: audit 2026-03-09T16:09:26.838909+0000 mgr.y (mgr.14520) 599 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:28 vm01 bash[20728]: audit 2026-03-09T16:09:26.838909+0000 mgr.y (mgr.14520) 599 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:28 vm01 bash[20728]: cluster 2026-03-09T16:09:26.860698+0000 mgr.y (mgr.14520) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:28 vm01 bash[20728]: cluster 2026-03-09T16:09:26.860698+0000 mgr.y (mgr.14520) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:28 vm09 bash[22983]: audit 2026-03-09T16:09:26.838909+0000 mgr.y (mgr.14520) 599 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:28 vm09 bash[22983]: audit 2026-03-09T16:09:26.838909+0000 mgr.y (mgr.14520) 599 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:28 vm09 bash[22983]: cluster 
2026-03-09T16:09:26.860698+0000 mgr.y (mgr.14520) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:28 vm09 bash[22983]: cluster 2026-03-09T16:09:26.860698+0000 mgr.y (mgr.14520) 600 : cluster [DBG] pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:09:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:29 vm09 bash[22983]: cluster 2026-03-09T16:09:28.861233+0000 mgr.y (mgr.14520) 601 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T16:09:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:29 vm09 bash[22983]: cluster 2026-03-09T16:09:28.861233+0000 mgr.y (mgr.14520) 601 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T16:09:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:29 vm01 bash[28152]: cluster 2026-03-09T16:09:28.861233+0000 mgr.y (mgr.14520) 601 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T16:09:29.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:29 vm01 bash[28152]: cluster 2026-03-09T16:09:28.861233+0000 mgr.y (mgr.14520) 601 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T16:09:29.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:29 vm01 bash[20728]: cluster 2026-03-09T16:09:28.861233+0000 mgr.y (mgr.14520) 601 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T16:09:29.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:29 vm01 bash[20728]: cluster 2026-03-09T16:09:28.861233+0000 mgr.y (mgr.14520) 601 : cluster [DBG] pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-09T16:09:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:30 vm09 bash[22983]: audit 2026-03-09T16:09:29.506658+0000 mon.a (mon.0) 3590 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:30 vm09 bash[22983]: audit 2026-03-09T16:09:29.506658+0000 mon.a (mon.0) 3590 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:30.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:30 vm01 bash[28152]: audit 2026-03-09T16:09:29.506658+0000 mon.a (mon.0) 3590 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:30.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:30 vm01 bash[28152]: audit 2026-03-09T16:09:29.506658+0000 mon.a (mon.0) 3590 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:30.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:30 vm01 bash[20728]: 
audit 2026-03-09T16:09:29.506658+0000 mon.a (mon.0) 3590 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:30.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:30 vm01 bash[20728]: audit 2026-03-09T16:09:29.506658+0000 mon.a (mon.0) 3590 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:31 vm09 bash[22983]: cluster 2026-03-09T16:09:30.861953+0000 mgr.y (mgr.14520) 602 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 3.4 KiB/s wr, 1 op/s 2026-03-09T16:09:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:31 vm09 bash[22983]: cluster 2026-03-09T16:09:30.861953+0000 mgr.y (mgr.14520) 602 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 3.4 KiB/s wr, 1 op/s 2026-03-09T16:09:31.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:31 vm01 bash[28152]: cluster 2026-03-09T16:09:30.861953+0000 mgr.y (mgr.14520) 602 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 3.4 KiB/s wr, 1 op/s 2026-03-09T16:09:31.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:31 vm01 bash[28152]: cluster 2026-03-09T16:09:30.861953+0000 mgr.y (mgr.14520) 602 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 3.4 KiB/s wr, 1 op/s 2026-03-09T16:09:31.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:31 vm01 bash[20728]: cluster 2026-03-09T16:09:30.861953+0000 mgr.y (mgr.14520) 602 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 3.4 KiB/s wr, 1 op/s 2026-03-09T16:09:31.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:31 vm01 bash[20728]: cluster 2026-03-09T16:09:30.861953+0000 mgr.y (mgr.14520) 602 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 3.4 KiB/s wr, 1 op/s 2026-03-09T16:09:33.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:09:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:09:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:09:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:34 vm09 bash[22983]: cluster 2026-03-09T16:09:32.862312+0000 mgr.y (mgr.14520) 603 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 635 B/s rd, 3.0 KiB/s wr, 0 op/s 2026-03-09T16:09:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:34 vm09 bash[22983]: cluster 2026-03-09T16:09:32.862312+0000 mgr.y (mgr.14520) 603 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 635 B/s rd, 3.0 KiB/s wr, 0 op/s 2026-03-09T16:09:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:34 vm01 bash[28152]: cluster 2026-03-09T16:09:32.862312+0000 mgr.y (mgr.14520) 603 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 635 B/s rd, 3.0 KiB/s wr, 0 op/s 2026-03-09T16:09:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:34 vm01 bash[28152]: cluster 2026-03-09T16:09:32.862312+0000 
mgr.y (mgr.14520) 603 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 635 B/s rd, 3.0 KiB/s wr, 0 op/s 2026-03-09T16:09:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:34 vm01 bash[20728]: cluster 2026-03-09T16:09:32.862312+0000 mgr.y (mgr.14520) 603 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 635 B/s rd, 3.0 KiB/s wr, 0 op/s 2026-03-09T16:09:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:34 vm01 bash[20728]: cluster 2026-03-09T16:09:32.862312+0000 mgr.y (mgr.14520) 603 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 635 B/s rd, 3.0 KiB/s wr, 0 op/s 2026-03-09T16:09:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:36 vm09 bash[22983]: cluster 2026-03-09T16:09:34.863432+0000 mgr.y (mgr.14520) 604 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T16:09:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:36 vm09 bash[22983]: cluster 2026-03-09T16:09:34.863432+0000 mgr.y (mgr.14520) 604 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T16:09:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:36 vm01 bash[28152]: cluster 2026-03-09T16:09:34.863432+0000 mgr.y (mgr.14520) 604 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T16:09:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:36 vm01 bash[28152]: cluster 2026-03-09T16:09:34.863432+0000 mgr.y (mgr.14520) 604 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T16:09:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:36 vm01 bash[20728]: cluster 2026-03-09T16:09:34.863432+0000 mgr.y (mgr.14520) 604 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T16:09:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:36 vm01 bash[20728]: cluster 2026-03-09T16:09:34.863432+0000 mgr.y (mgr.14520) 604 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 6.6 KiB/s wr, 1 op/s 2026-03-09T16:09:37.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:09:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:09:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:38 vm09 bash[22983]: audit 2026-03-09T16:09:36.844665+0000 mgr.y (mgr.14520) 605 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:38 vm09 bash[22983]: audit 2026-03-09T16:09:36.844665+0000 mgr.y (mgr.14520) 605 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:38 vm09 bash[22983]: cluster 2026-03-09T16:09:36.863788+0000 mgr.y (mgr.14520) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 
160 GiB avail; 945 B/s rd, 6.1 KiB/s wr, 1 op/s 2026-03-09T16:09:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:38 vm09 bash[22983]: cluster 2026-03-09T16:09:36.863788+0000 mgr.y (mgr.14520) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 945 B/s rd, 6.1 KiB/s wr, 1 op/s 2026-03-09T16:09:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:38 vm01 bash[28152]: audit 2026-03-09T16:09:36.844665+0000 mgr.y (mgr.14520) 605 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:38 vm01 bash[28152]: audit 2026-03-09T16:09:36.844665+0000 mgr.y (mgr.14520) 605 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:38 vm01 bash[28152]: cluster 2026-03-09T16:09:36.863788+0000 mgr.y (mgr.14520) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 945 B/s rd, 6.1 KiB/s wr, 1 op/s 2026-03-09T16:09:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:38 vm01 bash[28152]: cluster 2026-03-09T16:09:36.863788+0000 mgr.y (mgr.14520) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 945 B/s rd, 6.1 KiB/s wr, 1 op/s 2026-03-09T16:09:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:38 vm01 bash[20728]: audit 2026-03-09T16:09:36.844665+0000 mgr.y (mgr.14520) 605 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:38 vm01 bash[20728]: audit 2026-03-09T16:09:36.844665+0000 mgr.y (mgr.14520) 605 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:38 vm01 bash[20728]: cluster 2026-03-09T16:09:36.863788+0000 mgr.y (mgr.14520) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 945 B/s rd, 6.1 KiB/s wr, 1 op/s 2026-03-09T16:09:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:38 vm01 bash[20728]: cluster 2026-03-09T16:09:36.863788+0000 mgr.y (mgr.14520) 606 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 945 B/s rd, 6.1 KiB/s wr, 1 op/s 2026-03-09T16:09:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:39 vm09 bash[22983]: audit 2026-03-09T16:09:38.200826+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:39 vm09 bash[22983]: audit 2026-03-09T16:09:38.200826+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:39 vm09 bash[22983]: audit 2026-03-09T16:09:38.201253+0000 mon.a (mon.0) 3591 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:39 vm09 bash[22983]: audit 2026-03-09T16:09:38.201253+0000 mon.a (mon.0) 3591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:39 vm09 bash[22983]: audit 2026-03-09T16:09:38.201985+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:39 vm09 bash[22983]: audit 2026-03-09T16:09:38.201985+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:39 vm09 bash[22983]: audit 2026-03-09T16:09:38.202244+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:39 vm09 bash[22983]: audit 2026-03-09T16:09:38.202244+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:39 vm01 bash[28152]: audit 2026-03-09T16:09:38.200826+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:39 vm01 bash[28152]: audit 2026-03-09T16:09:38.200826+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:39 vm01 bash[28152]: audit 2026-03-09T16:09:38.201253+0000 mon.a (mon.0) 3591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:39 vm01 bash[28152]: audit 2026-03-09T16:09:38.201253+0000 mon.a (mon.0) 3591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:39 vm01 bash[28152]: audit 2026-03-09T16:09:38.201985+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:39 vm01 bash[28152]: audit 2026-03-09T16:09:38.201985+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:39 vm01 bash[28152]: audit 2026-03-09T16:09:38.202244+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:39 vm01 bash[28152]: audit 2026-03-09T16:09:38.202244+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:39 vm01 bash[20728]: audit 2026-03-09T16:09:38.200826+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:39 vm01 bash[20728]: audit 2026-03-09T16:09:38.200826+0000 mon.c (mon.2) 625 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:39 vm01 bash[20728]: audit 2026-03-09T16:09:38.201253+0000 mon.a (mon.0) 3591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:39 vm01 bash[20728]: audit 2026-03-09T16:09:38.201253+0000 mon.a (mon.0) 3591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:39 vm01 bash[20728]: audit 2026-03-09T16:09:38.201985+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:39 vm01 bash[20728]: audit 2026-03-09T16:09:38.201985+0000 mon.c (mon.2) 626 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:39 vm01 bash[20728]: audit 2026-03-09T16:09:38.202244+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:39 vm01 bash[20728]: audit 2026-03-09T16:09:38.202244+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]: dispatch 2026-03-09T16:09:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:40 vm09 bash[22983]: cluster 2026-03-09T16:09:38.864541+0000 mgr.y (mgr.14520) 607 : cluster [DBG] pgmap v1070: 268 pgs: 268 active+clean; 4.4 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 9.0 KiB/s wr, 2 op/s 2026-03-09T16:09:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:40 vm09 bash[22983]: cluster 2026-03-09T16:09:38.864541+0000 mgr.y (mgr.14520) 607 : cluster [DBG] pgmap v1070: 268 pgs: 268 active+clean; 4.4 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 9.0 KiB/s wr, 2 op/s 2026-03-09T16:09:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:40 vm09 bash[22983]: audit 2026-03-09T16:09:39.115433+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]': finished 2026-03-09T16:09:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:40 vm09 bash[22983]: audit 2026-03-09T16:09:39.115433+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]': finished 2026-03-09T16:09:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:40 vm09 bash[22983]: cluster 2026-03-09T16:09:39.132523+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T16:09:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:40 vm09 bash[22983]: cluster 2026-03-09T16:09:39.132523+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:40 vm01 bash[28152]: cluster 2026-03-09T16:09:38.864541+0000 mgr.y (mgr.14520) 607 : cluster [DBG] pgmap v1070: 268 pgs: 268 active+clean; 4.4 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 9.0 KiB/s wr, 2 op/s 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:40 vm01 bash[28152]: cluster 2026-03-09T16:09:38.864541+0000 mgr.y (mgr.14520) 607 : cluster [DBG] pgmap v1070: 268 pgs: 268 active+clean; 4.4 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 9.0 KiB/s wr, 2 op/s 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:40 vm01 bash[28152]: audit 2026-03-09T16:09:39.115433+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]': finished 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:40 vm01 bash[28152]: audit 2026-03-09T16:09:39.115433+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]': finished 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:40 vm01 bash[28152]: cluster 2026-03-09T16:09:39.132523+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:40 vm01 bash[28152]: cluster 2026-03-09T16:09:39.132523+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:40 vm01 bash[20728]: cluster 2026-03-09T16:09:38.864541+0000 mgr.y (mgr.14520) 607 : cluster [DBG] pgmap v1070: 268 pgs: 268 active+clean; 4.4 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 9.0 KiB/s wr, 2 op/s 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:40 vm01 bash[20728]: cluster 2026-03-09T16:09:38.864541+0000 mgr.y (mgr.14520) 607 : cluster [DBG] pgmap v1070: 268 pgs: 268 active+clean; 4.4 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 9.0 KiB/s wr, 2 op/s 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:40 vm01 bash[20728]: audit 2026-03-09T16:09:39.115433+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]': finished 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:40 vm01 bash[20728]: audit 2026-03-09T16:09:39.115433+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-140"}]': finished 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:40 vm01 bash[20728]: cluster 2026-03-09T16:09:39.132523+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T16:09:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:40 vm01 bash[20728]: cluster 2026-03-09T16:09:39.132523+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in 2026-03-09T16:09:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:41 vm01 bash[28152]: cluster 2026-03-09T16:09:40.138392+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T16:09:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:41 vm01 bash[28152]: cluster 2026-03-09T16:09:40.138392+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T16:09:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:41 vm01 bash[20728]: cluster 2026-03-09T16:09:40.138392+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T16:09:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:41 vm01 bash[20728]: cluster 2026-03-09T16:09:40.138392+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T16:09:41.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:41 vm09 bash[22983]: cluster 2026-03-09T16:09:40.138392+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T16:09:41.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:41 vm09 bash[22983]: cluster 2026-03-09T16:09:40.138392+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in 2026-03-09T16:09:42.426 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:42 vm01 bash[28152]: cluster 2026-03-09T16:09:40.864941+0000 mgr.y (mgr.14520) 608 : cluster [DBG] pgmap v1073: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:42 vm01 bash[28152]: cluster 2026-03-09T16:09:40.864941+0000 mgr.y (mgr.14520) 608 : cluster [DBG] pgmap v1073: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:42 vm01 bash[28152]: cluster 2026-03-09T16:09:41.177313+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T16:09:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:42 vm01 bash[28152]: cluster 2026-03-09T16:09:41.177313+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T16:09:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:42 vm01 bash[28152]: audit 2026-03-09T16:09:41.196340+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:42 vm01 bash[28152]: audit 2026-03-09T16:09:41.196340+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:42 vm01 bash[28152]: audit 2026-03-09T16:09:41.196665+0000 mon.a (mon.0) 3597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:42 vm01 bash[28152]: audit 2026-03-09T16:09:41.196665+0000 mon.a (mon.0) 3597 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:42 vm01 bash[20728]: cluster 2026-03-09T16:09:40.864941+0000 mgr.y (mgr.14520) 608 : cluster [DBG] pgmap v1073: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:42 vm01 bash[20728]: cluster 2026-03-09T16:09:40.864941+0000 mgr.y (mgr.14520) 608 : cluster [DBG] pgmap v1073: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:42 vm01 bash[20728]: cluster 2026-03-09T16:09:41.177313+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T16:09:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:42 vm01 bash[20728]: cluster 2026-03-09T16:09:41.177313+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T16:09:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:42 vm01 bash[20728]: audit 2026-03-09T16:09:41.196340+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:42 vm01 bash[20728]: audit 2026-03-09T16:09:41.196340+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:42 vm01 bash[20728]: audit 2026-03-09T16:09:41.196665+0000 mon.a (mon.0) 3597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:42.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:42 vm01 bash[20728]: audit 2026-03-09T16:09:41.196665+0000 mon.a (mon.0) 3597 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:42 vm09 bash[22983]: cluster 2026-03-09T16:09:40.864941+0000 mgr.y (mgr.14520) 608 : cluster [DBG] pgmap v1073: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:42 vm09 bash[22983]: cluster 2026-03-09T16:09:40.864941+0000 mgr.y (mgr.14520) 608 : cluster [DBG] pgmap v1073: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:42 vm09 bash[22983]: cluster 2026-03-09T16:09:41.177313+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T16:09:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:42 vm09 bash[22983]: cluster 2026-03-09T16:09:41.177313+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in 2026-03-09T16:09:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:42 vm09 bash[22983]: audit 2026-03-09T16:09:41.196340+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:42 vm09 bash[22983]: audit 2026-03-09T16:09:41.196340+0000 mon.c (mon.2) 627 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:42 vm09 bash[22983]: audit 2026-03-09T16:09:41.196665+0000 mon.a (mon.0) 3597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:42 vm09 bash[22983]: audit 2026-03-09T16:09:41.196665+0000 mon.a (mon.0) 3597 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:43.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:09:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:09:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:09:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:42.174691+0000 mon.a (mon.0) 3598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:42.174691+0000 mon.a (mon.0) 3598 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: cluster 2026-03-09T16:09:42.184444+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T16:09:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: cluster 2026-03-09T16:09:42.184444+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T16:09:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:42.268246+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:42.268246+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:42.268474+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:42.268474+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:42.835005+0000 mon.a (mon.0) 3601 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:09:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:42.835005+0000 mon.a (mon.0) 3601 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:09:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: cluster 2026-03-09T16:09:42.865342+0000 mgr.y (mgr.14520) 609 : cluster [DBG] pgmap v1076: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: cluster 2026-03-09T16:09:42.865342+0000 mgr.y (mgr.14520) 609 : cluster [DBG] pgmap v1076: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:43.165421+0000 mon.a (mon.0) 3602 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:43.165421+0000 mon.a (mon.0) 3602 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:43.174142+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:43.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:43 vm09 bash[22983]: audit 2026-03-09T16:09:43.174142+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:42.174691+0000 mon.a (mon.0) 3598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:42.174691+0000 mon.a (mon.0) 3598 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: cluster 2026-03-09T16:09:42.184444+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: cluster 2026-03-09T16:09:42.184444+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:42.268246+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:42.268246+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:42.268474+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:42.268474+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:42.835005+0000 mon.a (mon.0) 3601 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:42.835005+0000 mon.a (mon.0) 3601 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: cluster 2026-03-09T16:09:42.865342+0000 mgr.y (mgr.14520) 609 : cluster [DBG] pgmap v1076: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: cluster 2026-03-09T16:09:42.865342+0000 mgr.y (mgr.14520) 609 : cluster [DBG] pgmap v1076: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:43.165421+0000 mon.a (mon.0) 3602 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:43.165421+0000 mon.a (mon.0) 3602 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:43.174142+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:43 vm01 bash[28152]: audit 2026-03-09T16:09:43.174142+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:42.174691+0000 mon.a (mon.0) 3598 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:42.174691+0000 mon.a (mon.0) 3598 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: cluster 2026-03-09T16:09:42.184444+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: cluster 2026-03-09T16:09:42.184444+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-09T16:09:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:42.268246+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:42.268246+0000 mon.c (mon.2) 628 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:42.268474+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:42.268474+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:42.835005+0000 mon.a (mon.0) 3601 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:09:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:42.835005+0000 mon.a (mon.0) 3601 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:09:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: cluster 2026-03-09T16:09:42.865342+0000 mgr.y (mgr.14520) 609 : cluster [DBG] pgmap v1076: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: cluster 2026-03-09T16:09:42.865342+0000 mgr.y (mgr.14520) 609 : cluster [DBG] pgmap v1076: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:43.165421+0000 mon.a (mon.0) 3602 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:43.165421+0000 mon.a (mon.0) 3602 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:43.174142+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:43.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:43 vm01 bash[20728]: audit 2026-03-09T16:09:43.174142+0000 mon.a (mon.0) 3603 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.188137+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.188137+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.201300+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.201300+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: cluster 2026-03-09T16:09:43.202769+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T16:09:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: cluster 2026-03-09T16:09:43.202769+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T16:09:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.217559+0000 mon.a (mon.0) 3606 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.217559+0000 mon.a (mon.0) 3606 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.535217+0000 mon.a (mon.0) 3607 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:09:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.535217+0000 mon.a (mon.0) 3607 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:09:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.535750+0000 mon.a (mon.0) 3608 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:09:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.535750+0000 mon.a (mon.0) 3608 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:09:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.548408+0000 mon.a (mon.0) 3609 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:44 vm09 bash[22983]: audit 2026-03-09T16:09:43.548408+0000 mon.a (mon.0) 3609 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.188137+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.188137+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.201300+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.201300+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: cluster 2026-03-09T16:09:43.202769+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: cluster 2026-03-09T16:09:43.202769+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.217559+0000 mon.a (mon.0) 3606 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.217559+0000 mon.a (mon.0) 3606 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.535217+0000 mon.a (mon.0) 3607 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.535217+0000 mon.a (mon.0) 3607 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.535750+0000 mon.a (mon.0) 3608 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.535750+0000 mon.a (mon.0) 3608 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.548408+0000 mon.a (mon.0) 3609 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:44 vm01 bash[28152]: audit 2026-03-09T16:09:43.548408+0000 mon.a (mon.0) 3609 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.188137+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.188137+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.201300+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.201300+0000 mon.c (mon.2) 629 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: cluster 2026-03-09T16:09:43.202769+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: cluster 2026-03-09T16:09:43.202769+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.217559+0000 mon.a (mon.0) 3606 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.217559+0000 mon.a (mon.0) 3606 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.535217+0000 mon.a (mon.0) 3607 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.535217+0000 mon.a (mon.0) 3607 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.535750+0000 mon.a (mon.0) 3608 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.535750+0000 mon.a (mon.0) 3608 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.548408+0000 mon.a (mon.0) 3609 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:44 vm01 bash[20728]: audit 2026-03-09T16:09:43.548408+0000 mon.a (mon.0) 3609 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:09:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: audit 2026-03-09T16:09:44.211352+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: audit 2026-03-09T16:09:44.211352+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: audit 2026-03-09T16:09:44.219601+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: audit 2026-03-09T16:09:44.219601+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: cluster 2026-03-09T16:09:44.221362+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T16:09:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: cluster 2026-03-09T16:09:44.221362+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T16:09:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: audit 2026-03-09T16:09:44.222609+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: audit 2026-03-09T16:09:44.222609+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: audit 2026-03-09T16:09:44.513674+0000 mon.a (mon.0) 3613 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: audit 2026-03-09T16:09:44.513674+0000 mon.a (mon.0) 3613 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: cluster 2026-03-09T16:09:44.865719+0000 mgr.y (mgr.14520) 610 : cluster [DBG] pgmap v1079: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:45 vm09 bash[22983]: cluster 2026-03-09T16:09:44.865719+0000 mgr.y (mgr.14520) 610 : cluster [DBG] pgmap v1079: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: audit 2026-03-09T16:09:44.211352+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: audit 2026-03-09T16:09:44.211352+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: audit 2026-03-09T16:09:44.219601+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: audit 2026-03-09T16:09:44.219601+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: cluster 2026-03-09T16:09:44.221362+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: cluster 2026-03-09T16:09:44.221362+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: audit 2026-03-09T16:09:44.222609+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: audit 2026-03-09T16:09:44.222609+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: audit 2026-03-09T16:09:44.513674+0000 mon.a (mon.0) 3613 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: audit 2026-03-09T16:09:44.513674+0000 mon.a (mon.0) 3613 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: cluster 2026-03-09T16:09:44.865719+0000 mgr.y (mgr.14520) 610 : cluster [DBG] pgmap v1079: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:45 vm01 bash[28152]: cluster 2026-03-09T16:09:44.865719+0000 mgr.y (mgr.14520) 610 : cluster [DBG] pgmap v1079: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: audit 2026-03-09T16:09:44.211352+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: audit 2026-03-09T16:09:44.211352+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: audit 2026-03-09T16:09:44.219601+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: audit 2026-03-09T16:09:44.219601+0000 mon.c (mon.2) 630 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: cluster 2026-03-09T16:09:44.221362+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: cluster 2026-03-09T16:09:44.221362+0000 mon.a (mon.0) 3611 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: audit 2026-03-09T16:09:44.222609+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: audit 2026-03-09T16:09:44.222609+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]: dispatch 2026-03-09T16:09:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: audit 2026-03-09T16:09:44.513674+0000 mon.a (mon.0) 3613 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: audit 2026-03-09T16:09:44.513674+0000 mon.a (mon.0) 3613 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:09:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: cluster 2026-03-09T16:09:44.865719+0000 mgr.y (mgr.14520) 610 : cluster [DBG] pgmap v1079: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:45.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:45 vm01 bash[20728]: cluster 2026-03-09T16:09:44.865719+0000 mgr.y (mgr.14520) 610 : cluster [DBG] pgmap v1079: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:46 vm09 bash[22983]: cluster 2026-03-09T16:09:45.212465+0000 mon.a (mon.0) 3614 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:46 vm09 bash[22983]: cluster 2026-03-09T16:09:45.212465+0000 mon.a (mon.0) 3614 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:46 vm09 bash[22983]: audit 2026-03-09T16:09:45.219864+0000 mon.a (mon.0) 3615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]': finished 2026-03-09T16:09:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:46 vm09 bash[22983]: audit 2026-03-09T16:09:45.219864+0000 mon.a (mon.0) 3615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]': finished 2026-03-09T16:09:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:46 vm09 bash[22983]: cluster 2026-03-09T16:09:45.224537+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T16:09:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:46 vm09 bash[22983]: cluster 2026-03-09T16:09:45.224537+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T16:09:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:46 vm09 bash[22983]: audit 2026-03-09T16:09:45.245186+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:46 vm09 bash[22983]: audit 2026-03-09T16:09:45.245186+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:46 vm09 bash[22983]: audit 2026-03-09T16:09:45.262667+0000 mon.a (mon.0) 3617 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:46 vm09 bash[22983]: audit 2026-03-09T16:09:45.262667+0000 mon.a (mon.0) 3617 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:46 vm01 bash[28152]: cluster 2026-03-09T16:09:45.212465+0000 mon.a (mon.0) 3614 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:46 vm01 bash[28152]: cluster 2026-03-09T16:09:45.212465+0000 mon.a (mon.0) 3614 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:46 vm01 bash[28152]: audit 2026-03-09T16:09:45.219864+0000 mon.a (mon.0) 3615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]': finished 2026-03-09T16:09:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:46 vm01 bash[28152]: audit 2026-03-09T16:09:45.219864+0000 mon.a (mon.0) 3615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]': finished 2026-03-09T16:09:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:46 vm01 bash[28152]: cluster 2026-03-09T16:09:45.224537+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T16:09:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:46 vm01 bash[28152]: cluster 2026-03-09T16:09:45.224537+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T16:09:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:46 vm01 bash[28152]: audit 2026-03-09T16:09:45.245186+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:46 vm01 bash[28152]: audit 2026-03-09T16:09:45.245186+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:46 vm01 bash[28152]: audit 2026-03-09T16:09:45.262667+0000 mon.a (mon.0) 3617 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:46 vm01 bash[28152]: audit 2026-03-09T16:09:45.262667+0000 mon.a (mon.0) 3617 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:46 vm01 bash[20728]: cluster 2026-03-09T16:09:45.212465+0000 mon.a (mon.0) 3614 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:46 vm01 bash[20728]: cluster 2026-03-09T16:09:45.212465+0000 mon.a (mon.0) 3614 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:46 vm01 bash[20728]: audit 2026-03-09T16:09:45.219864+0000 mon.a (mon.0) 3615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]': finished 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:46 vm01 bash[20728]: audit 2026-03-09T16:09:45.219864+0000 mon.a (mon.0) 3615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-142", "mode": "writeback"}]': finished 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:46 vm01 bash[20728]: cluster 2026-03-09T16:09:45.224537+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:46 vm01 bash[20728]: cluster 2026-03-09T16:09:45.224537+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:46 vm01 bash[20728]: audit 2026-03-09T16:09:45.245186+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:46 vm01 bash[20728]: audit 2026-03-09T16:09:45.245186+0000 mon.c (mon.2) 631 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:46 vm01 bash[20728]: audit 2026-03-09T16:09:45.262667+0000 mon.a (mon.0) 3617 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:46.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:46 vm01 bash[20728]: audit 2026-03-09T16:09:45.262667+0000 mon.a (mon.0) 3617 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:09:47.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:09:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:09:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: audit 2026-03-09T16:09:46.248781+0000 mon.a (mon.0) 3618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: audit 2026-03-09T16:09:46.248781+0000 mon.a (mon.0) 3618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: cluster 2026-03-09T16:09:46.253622+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T16:09:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: cluster 2026-03-09T16:09:46.253622+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T16:09:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: audit 2026-03-09T16:09:46.262762+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: audit 2026-03-09T16:09:46.262762+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: audit 2026-03-09T16:09:46.263033+0000 mon.a (mon.0) 3620 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: audit 2026-03-09T16:09:46.263033+0000 mon.a (mon.0) 3620 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: audit 2026-03-09T16:09:46.855506+0000 mgr.y (mgr.14520) 611 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: audit 2026-03-09T16:09:46.855506+0000 mgr.y (mgr.14520) 611 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: cluster 2026-03-09T16:09:46.866194+0000 mgr.y (mgr.14520) 612 : cluster [DBG] pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:47 vm09 bash[22983]: cluster 2026-03-09T16:09:46.866194+0000 mgr.y (mgr.14520) 612 : cluster [DBG] pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: audit 2026-03-09T16:09:46.248781+0000 mon.a (mon.0) 3618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: audit 2026-03-09T16:09:46.248781+0000 mon.a (mon.0) 3618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: cluster 2026-03-09T16:09:46.253622+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: cluster 2026-03-09T16:09:46.253622+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: audit 2026-03-09T16:09:46.262762+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: audit 2026-03-09T16:09:46.262762+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: audit 2026-03-09T16:09:46.263033+0000 mon.a (mon.0) 3620 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: audit 2026-03-09T16:09:46.263033+0000 mon.a (mon.0) 3620 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: audit 2026-03-09T16:09:46.855506+0000 mgr.y (mgr.14520) 611 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: audit 2026-03-09T16:09:46.855506+0000 mgr.y (mgr.14520) 611 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: cluster 2026-03-09T16:09:46.866194+0000 mgr.y (mgr.14520) 612 : cluster [DBG] pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:47 vm01 bash[28152]: cluster 2026-03-09T16:09:46.866194+0000 mgr.y (mgr.14520) 612 : cluster [DBG] pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: audit 2026-03-09T16:09:46.248781+0000 mon.a (mon.0) 3618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: audit 2026-03-09T16:09:46.248781+0000 mon.a (mon.0) 3618 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: cluster 2026-03-09T16:09:46.253622+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: cluster 2026-03-09T16:09:46.253622+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: audit 2026-03-09T16:09:46.262762+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: audit 2026-03-09T16:09:46.262762+0000 mon.c (mon.2) 632 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: audit 2026-03-09T16:09:46.263033+0000 mon.a (mon.0) 3620 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: audit 2026-03-09T16:09:46.263033+0000 mon.a (mon.0) 3620 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: audit 2026-03-09T16:09:46.855506+0000 mgr.y (mgr.14520) 611 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: audit 2026-03-09T16:09:46.855506+0000 mgr.y (mgr.14520) 611 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: cluster 2026-03-09T16:09:46.866194+0000 mgr.y (mgr.14520) 612 : cluster [DBG] pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:47 vm01 bash[20728]: cluster 2026-03-09T16:09:46.866194+0000 mgr.y (mgr.14520) 612 : cluster [DBG] pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T16:09:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:48 vm09 bash[22983]: audit 2026-03-09T16:09:47.259006+0000 mon.a (mon.0) 3621 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:48 vm09 bash[22983]: audit 2026-03-09T16:09:47.259006+0000 mon.a (mon.0) 3621 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:48 vm09 bash[22983]: cluster 2026-03-09T16:09:47.264195+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T16:09:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:48 vm09 bash[22983]: cluster 2026-03-09T16:09:47.264195+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T16:09:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:48 vm09 bash[22983]: audit 2026-03-09T16:09:47.270566+0000 mon.c (mon.2) 633 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:48 vm09 bash[22983]: audit 2026-03-09T16:09:47.270566+0000 mon.c (mon.2) 633 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:48 vm09 bash[22983]: audit 2026-03-09T16:09:47.272172+0000 mon.a (mon.0) 3623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:48.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:48 vm09 bash[22983]: audit 2026-03-09T16:09:47.272172+0000 mon.a (mon.0) 3623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:48 vm01 bash[28152]: audit 2026-03-09T16:09:47.259006+0000 mon.a (mon.0) 3621 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:48 vm01 bash[28152]: audit 2026-03-09T16:09:47.259006+0000 mon.a (mon.0) 3621 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:48 vm01 bash[28152]: cluster 2026-03-09T16:09:47.264195+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T16:09:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:48 vm01 bash[28152]: cluster 2026-03-09T16:09:47.264195+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T16:09:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:48 vm01 bash[28152]: audit 2026-03-09T16:09:47.270566+0000 mon.c (mon.2) 633 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:48 vm01 bash[28152]: audit 2026-03-09T16:09:47.270566+0000 mon.c (mon.2) 633 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:48 vm01 bash[28152]: audit 2026-03-09T16:09:47.272172+0000 mon.a (mon.0) 3623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:48.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:48 vm01 bash[28152]: audit 2026-03-09T16:09:47.272172+0000 mon.a (mon.0) 3623 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:48.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:48 vm01 bash[20728]: audit 2026-03-09T16:09:47.259006+0000 mon.a (mon.0) 3621 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:48.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:48 vm01 bash[20728]: audit 2026-03-09T16:09:47.259006+0000 mon.a (mon.0) 3621 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:09:48.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:48 vm01 bash[20728]: cluster 2026-03-09T16:09:47.264195+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T16:09:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:48 vm01 bash[20728]: cluster 2026-03-09T16:09:47.264195+0000 mon.a (mon.0) 3622 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-09T16:09:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:48 vm01 bash[20728]: audit 2026-03-09T16:09:47.270566+0000 mon.c (mon.2) 633 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:48 vm01 bash[20728]: audit 2026-03-09T16:09:47.270566+0000 mon.c (mon.2) 633 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:48 vm01 bash[20728]: audit 2026-03-09T16:09:47.272172+0000 mon.a (mon.0) 3623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:48.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:48 vm01 bash[20728]: audit 2026-03-09T16:09:47.272172+0000 mon.a (mon.0) 3623 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:09:49.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: cluster 2026-03-09T16:09:48.260806+0000 mon.a (mon.0) 3624 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: cluster 2026-03-09T16:09:48.260806+0000 mon.a (mon.0) 3624 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: audit 2026-03-09T16:09:48.275744+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: audit 2026-03-09T16:09:48.275744+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: audit 2026-03-09T16:09:48.288229+0000 mon.c (mon.2) 634 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: audit 2026-03-09T16:09:48.288229+0000 mon.c (mon.2) 634 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: cluster 2026-03-09T16:09:48.289617+0000 mon.a (mon.0) 3626 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T16:09:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: cluster 2026-03-09T16:09:48.289617+0000 mon.a (mon.0) 3626 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T16:09:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: audit 2026-03-09T16:09:48.290298+0000 mon.a (mon.0) 3627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: audit 2026-03-09T16:09:48.290298+0000 mon.a (mon.0) 3627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: cluster 2026-03-09T16:09:48.866861+0000 mgr.y (mgr.14520) 613 : cluster [DBG] pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:49.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:49 vm09 bash[22983]: cluster 2026-03-09T16:09:48.866861+0000 mgr.y (mgr.14520) 613 : cluster [DBG] pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: cluster 2026-03-09T16:09:48.260806+0000 mon.a (mon.0) 3624 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: cluster 2026-03-09T16:09:48.260806+0000 mon.a (mon.0) 3624 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: audit 2026-03-09T16:09:48.275744+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: audit 2026-03-09T16:09:48.275744+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: audit 2026-03-09T16:09:48.288229+0000 mon.c (mon.2) 634 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: audit 2026-03-09T16:09:48.288229+0000 mon.c (mon.2) 634 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: cluster 2026-03-09T16:09:48.289617+0000 mon.a (mon.0) 3626 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: cluster 2026-03-09T16:09:48.289617+0000 mon.a (mon.0) 3626 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: audit 2026-03-09T16:09:48.290298+0000 mon.a (mon.0) 3627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: audit 2026-03-09T16:09:48.290298+0000 mon.a (mon.0) 3627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: cluster 2026-03-09T16:09:48.866861+0000 mgr.y (mgr.14520) 613 : cluster [DBG] pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:49 vm01 bash[28152]: cluster 2026-03-09T16:09:48.866861+0000 mgr.y (mgr.14520) 613 : cluster [DBG] pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: cluster 2026-03-09T16:09:48.260806+0000 mon.a (mon.0) 3624 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:49.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: cluster 2026-03-09T16:09:48.260806+0000 mon.a (mon.0) 3624 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:09:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: audit 2026-03-09T16:09:48.275744+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: audit 2026-03-09T16:09:48.275744+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:09:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: audit 2026-03-09T16:09:48.288229+0000 mon.c (mon.2) 634 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: audit 2026-03-09T16:09:48.288229+0000 mon.c (mon.2) 634 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: cluster 2026-03-09T16:09:48.289617+0000 mon.a (mon.0) 3626 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T16:09:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: cluster 2026-03-09T16:09:48.289617+0000 mon.a (mon.0) 3626 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-09T16:09:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: audit 2026-03-09T16:09:48.290298+0000 mon.a (mon.0) 3627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: audit 2026-03-09T16:09:48.290298+0000 mon.a (mon.0) 3627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:09:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: cluster 2026-03-09T16:09:48.866861+0000 mgr.y (mgr.14520) 613 : cluster [DBG] pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:49.927 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:49 vm01 bash[20728]: cluster 2026-03-09T16:09:48.866861+0000 mgr.y (mgr.14520) 613 : cluster [DBG] pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:09:51.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:50 vm09 bash[22983]: audit 2026-03-09T16:09:49.676352+0000 mon.a (mon.0) 3628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:09:51.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:50 vm09 bash[22983]: audit 2026-03-09T16:09:49.676352+0000 mon.a (mon.0) 3628 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:09:51.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:50 vm09 bash[22983]: cluster 2026-03-09T16:09:49.680957+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T16:09:51.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:50 vm09 bash[22983]: cluster 2026-03-09T16:09:49.680957+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T16:09:51.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:50 vm09 bash[22983]: audit 2026-03-09T16:09:49.695659+0000 mon.c (mon.2) 635 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:51.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:50 vm09 bash[22983]: audit 2026-03-09T16:09:49.695659+0000 mon.c (mon.2) 635 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:51.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:50 vm09 bash[22983]: audit 2026-03-09T16:09:49.696222+0000 mon.a (mon.0) 3630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:51.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:50 vm09 bash[22983]: audit 2026-03-09T16:09:49.696222+0000 mon.a (mon.0) 3630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:50 vm01 bash[28152]: audit 2026-03-09T16:09:49.676352+0000 mon.a (mon.0) 3628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:50 vm01 bash[28152]: audit 2026-03-09T16:09:49.676352+0000 mon.a (mon.0) 3628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:50 vm01 bash[28152]: cluster 2026-03-09T16:09:49.680957+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:50 vm01 bash[28152]: cluster 2026-03-09T16:09:49.680957+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:50 vm01 bash[28152]: audit 2026-03-09T16:09:49.695659+0000 mon.c (mon.2) 635 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:50 vm01 bash[28152]: audit 2026-03-09T16:09:49.695659+0000 mon.c (mon.2) 635 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:50 vm01 bash[28152]: audit 2026-03-09T16:09:49.696222+0000 mon.a (mon.0) 3630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:50 vm01 bash[28152]: audit 2026-03-09T16:09:49.696222+0000 mon.a (mon.0) 3630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:50 vm01 bash[20728]: audit 2026-03-09T16:09:49.676352+0000 mon.a (mon.0) 3628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:50 vm01 bash[20728]: audit 2026-03-09T16:09:49.676352+0000 mon.a (mon.0) 3628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:50 vm01 bash[20728]: cluster 2026-03-09T16:09:49.680957+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:50 vm01 bash[20728]: cluster 2026-03-09T16:09:49.680957+0000 mon.a (mon.0) 3629 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:50 vm01 bash[20728]: audit 2026-03-09T16:09:49.695659+0000 mon.c (mon.2) 635 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:50 vm01 bash[20728]: audit 2026-03-09T16:09:49.695659+0000 mon.c (mon.2) 635 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:50 vm01 bash[20728]: audit 2026-03-09T16:09:49.696222+0000 mon.a (mon.0) 3630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:51.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:50 vm01 bash[20728]: audit 2026-03-09T16:09:49.696222+0000 mon.a (mon.0) 3630 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: audit 2026-03-09T16:09:50.707678+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: audit 2026-03-09T16:09:50.707678+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: audit 2026-03-09T16:09:50.712832+0000 mon.c (mon.2) 636 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: audit 2026-03-09T16:09:50.712832+0000 mon.c (mon.2) 636 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: cluster 2026-03-09T16:09:50.713344+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: cluster 2026-03-09T16:09:50.713344+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: audit 2026-03-09T16:09:50.715348+0000 mon.a (mon.0) 3633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: audit 2026-03-09T16:09:50.715348+0000 mon.a (mon.0) 3633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: cluster 2026-03-09T16:09:50.867259+0000 mgr.y (mgr.14520) 614 : cluster [DBG] pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: cluster 2026-03-09T16:09:50.867259+0000 mgr.y (mgr.14520) 614 : cluster [DBG] pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: audit 2026-03-09T16:09:51.711224+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: audit 2026-03-09T16:09:51.711224+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: cluster 2026-03-09T16:09:51.716244+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T16:09:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:51 vm09 bash[22983]: cluster 2026-03-09T16:09:51.716244+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: audit 2026-03-09T16:09:50.707678+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: audit 2026-03-09T16:09:50.707678+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: audit 2026-03-09T16:09:50.712832+0000 mon.c (mon.2) 636 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: audit 2026-03-09T16:09:50.712832+0000 mon.c (mon.2) 636 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: cluster 2026-03-09T16:09:50.713344+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: cluster 2026-03-09T16:09:50.713344+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: audit 2026-03-09T16:09:50.715348+0000 mon.a (mon.0) 3633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: audit 2026-03-09T16:09:50.715348+0000 mon.a (mon.0) 3633 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: cluster 2026-03-09T16:09:50.867259+0000 mgr.y (mgr.14520) 614 : cluster [DBG] pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: cluster 2026-03-09T16:09:50.867259+0000 mgr.y (mgr.14520) 614 : cluster [DBG] pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: audit 2026-03-09T16:09:51.711224+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: audit 2026-03-09T16:09:51.711224+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: cluster 2026-03-09T16:09:51.716244+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:51 vm01 bash[28152]: cluster 2026-03-09T16:09:51.716244+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: audit 2026-03-09T16:09:50.707678+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: audit 2026-03-09T16:09:50.707678+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: audit 2026-03-09T16:09:50.712832+0000 mon.c (mon.2) 636 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: audit 2026-03-09T16:09:50.712832+0000 mon.c (mon.2) 636 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: cluster 2026-03-09T16:09:50.713344+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T16:09:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: cluster 2026-03-09T16:09:50.713344+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-09T16:09:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: audit 2026-03-09T16:09:50.715348+0000 mon.a (mon.0) 3633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: audit 2026-03-09T16:09:50.715348+0000 mon.a (mon.0) 3633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T16:09:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: cluster 2026-03-09T16:09:50.867259+0000 mgr.y (mgr.14520) 614 : cluster [DBG] pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: cluster 2026-03-09T16:09:50.867259+0000 mgr.y (mgr.14520) 614 : cluster [DBG] pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: audit 2026-03-09T16:09:51.711224+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:09:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: audit 2026-03-09T16:09:51.711224+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T16:09:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: cluster 2026-03-09T16:09:51.716244+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T16:09:52.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:51 vm01 bash[20728]: cluster 2026-03-09T16:09:51.716244+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-09T16:09:53.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:52 vm09 bash[22983]: audit 2026-03-09T16:09:51.788725+0000 mon.c (mon.2) 637 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:53.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:52 vm09 bash[22983]: audit 2026-03-09T16:09:51.788725+0000 mon.c (mon.2) 637 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:53.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:52 vm09 bash[22983]: audit 2026-03-09T16:09:51.789171+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:53.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:52 vm09 bash[22983]: audit 2026-03-09T16:09:51.789171+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:52 vm01 bash[28152]: audit 2026-03-09T16:09:51.788725+0000 mon.c (mon.2) 637 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:52 vm01 bash[28152]: audit 2026-03-09T16:09:51.788725+0000 mon.c (mon.2) 637 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:52 vm01 bash[28152]: audit 2026-03-09T16:09:51.789171+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:53.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:52 vm01 bash[28152]: audit 2026-03-09T16:09:51.789171+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:53.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:09:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:09:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:09:53.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:52 vm01 bash[20728]: audit 2026-03-09T16:09:51.788725+0000 mon.c (mon.2) 637 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:53.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:52 vm01 bash[20728]: audit 2026-03-09T16:09:51.788725+0000 mon.c (mon.2) 637 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:53.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:52 vm01 bash[20728]: audit 2026-03-09T16:09:51.789171+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:53.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:52 vm01 bash[20728]: audit 2026-03-09T16:09:51.789171+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:54.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:53 vm09 bash[22983]: audit 2026-03-09T16:09:52.755406+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:54.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:53 vm09 bash[22983]: audit 2026-03-09T16:09:52.755406+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:54.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:53 vm09 bash[22983]: cluster 2026-03-09T16:09:52.758660+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T16:09:54.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:53 vm09 bash[22983]: cluster 2026-03-09T16:09:52.758660+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T16:09:54.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:53 vm09 bash[22983]: audit 2026-03-09T16:09:52.762551+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:53 vm09 bash[22983]: audit 2026-03-09T16:09:52.762551+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:53 vm09 bash[22983]: audit 2026-03-09T16:09:52.775848+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:53 vm09 bash[22983]: audit 2026-03-09T16:09:52.775848+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:53 vm09 bash[22983]: cluster 2026-03-09T16:09:52.867654+0000 mgr.y (mgr.14520) 615 : cluster [DBG] pgmap v1091: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:54.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:53 vm09 bash[22983]: cluster 2026-03-09T16:09:52.867654+0000 mgr.y (mgr.14520) 615 : cluster [DBG] pgmap v1091: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:53 vm01 bash[28152]: audit 2026-03-09T16:09:52.755406+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:53 vm01 bash[28152]: audit 2026-03-09T16:09:52.755406+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:53 vm01 bash[28152]: cluster 2026-03-09T16:09:52.758660+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:53 vm01 bash[28152]: cluster 2026-03-09T16:09:52.758660+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:53 vm01 bash[28152]: audit 2026-03-09T16:09:52.762551+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:53 vm01 bash[28152]: audit 2026-03-09T16:09:52.762551+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:53 vm01 bash[28152]: audit 2026-03-09T16:09:52.775848+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:53 vm01 bash[28152]: audit 2026-03-09T16:09:52.775848+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:53 vm01 bash[28152]: cluster 2026-03-09T16:09:52.867654+0000 mgr.y (mgr.14520) 615 : cluster [DBG] pgmap v1091: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:53 vm01 bash[28152]: cluster 2026-03-09T16:09:52.867654+0000 mgr.y (mgr.14520) 615 : cluster [DBG] pgmap v1091: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:53 vm01 bash[20728]: audit 2026-03-09T16:09:52.755406+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:53 vm01 bash[20728]: audit 2026-03-09T16:09:52.755406+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:53 vm01 bash[20728]: cluster 2026-03-09T16:09:52.758660+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:53 vm01 bash[20728]: cluster 2026-03-09T16:09:52.758660+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:53 vm01 bash[20728]: audit 2026-03-09T16:09:52.762551+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:53 vm01 bash[20728]: audit 2026-03-09T16:09:52.762551+0000 mon.c (mon.2) 638 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:53 vm01 bash[20728]: audit 2026-03-09T16:09:52.775848+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:53 vm01 bash[20728]: audit 2026-03-09T16:09:52.775848+0000 mon.a (mon.0) 3639 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:53 vm01 bash[20728]: cluster 2026-03-09T16:09:52.867654+0000 mgr.y (mgr.14520) 615 : cluster [DBG] pgmap v1091: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:54.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:53 vm01 bash[20728]: cluster 2026-03-09T16:09:52.867654+0000 mgr.y (mgr.14520) 615 : cluster [DBG] pgmap v1091: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:09:55.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: audit 2026-03-09T16:09:53.785505+0000 mon.a (mon.0) 3640 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: audit 2026-03-09T16:09:53.785505+0000 mon.a (mon.0) 3640 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: cluster 2026-03-09T16:09:53.796286+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T16:09:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: cluster 2026-03-09T16:09:53.796286+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T16:09:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: audit 2026-03-09T16:09:53.821339+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: audit 2026-03-09T16:09:53.821339+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: audit 2026-03-09T16:09:53.821820+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: audit 2026-03-09T16:09:53.821820+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: audit 2026-03-09T16:09:53.822488+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: audit 2026-03-09T16:09:53.822488+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: audit 2026-03-09T16:09:53.822707+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:55.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:54 vm09 bash[22983]: audit 2026-03-09T16:09:53.822707+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: audit 2026-03-09T16:09:53.785505+0000 mon.a (mon.0) 3640 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: audit 2026-03-09T16:09:53.785505+0000 mon.a (mon.0) 3640 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: cluster 2026-03-09T16:09:53.796286+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: cluster 2026-03-09T16:09:53.796286+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: audit 2026-03-09T16:09:53.821339+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: audit 2026-03-09T16:09:53.821339+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: audit 2026-03-09T16:09:53.821820+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: audit 2026-03-09T16:09:53.821820+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: audit 2026-03-09T16:09:53.822488+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: audit 2026-03-09T16:09:53.822488+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: audit 2026-03-09T16:09:53.822707+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:54 vm01 bash[28152]: audit 2026-03-09T16:09:53.822707+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: audit 2026-03-09T16:09:53.785505+0000 mon.a (mon.0) 3640 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: audit 2026-03-09T16:09:53.785505+0000 mon.a (mon.0) 3640 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]': finished 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: cluster 2026-03-09T16:09:53.796286+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: cluster 2026-03-09T16:09:53.796286+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: audit 2026-03-09T16:09:53.821339+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: audit 2026-03-09T16:09:53.821339+0000 mon.c (mon.2) 639 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: audit 2026-03-09T16:09:53.821820+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: audit 2026-03-09T16:09:53.821820+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:09:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: audit 2026-03-09T16:09:53.822488+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: audit 2026-03-09T16:09:53.822488+0000 mon.c (mon.2) 640 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: audit 2026-03-09T16:09:53.822707+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:55.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:54 vm01 bash[20728]: audit 2026-03-09T16:09:53.822707+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-142"}]: dispatch 2026-03-09T16:09:56.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:55 vm09 bash[22983]: cluster 2026-03-09T16:09:54.799083+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T16:09:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:55 vm09 bash[22983]: cluster 2026-03-09T16:09:54.799083+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T16:09:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:55 vm09 bash[22983]: cluster 2026-03-09T16:09:54.867970+0000 mgr.y (mgr.14520) 616 : cluster [DBG] pgmap v1094: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:56.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:55 vm09 bash[22983]: cluster 2026-03-09T16:09:54.867970+0000 mgr.y (mgr.14520) 616 : cluster [DBG] pgmap v1094: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:56.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:55 vm01 bash[28152]: cluster 2026-03-09T16:09:54.799083+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T16:09:56.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:55 vm01 bash[28152]: cluster 2026-03-09T16:09:54.799083+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T16:09:56.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:55 vm01 bash[28152]: cluster 2026-03-09T16:09:54.867970+0000 mgr.y (mgr.14520) 616 : cluster [DBG] pgmap v1094: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:56.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:55 vm01 bash[28152]: cluster 2026-03-09T16:09:54.867970+0000 mgr.y (mgr.14520) 616 : cluster [DBG] pgmap v1094: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:56.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:55 vm01 bash[20728]: cluster 2026-03-09T16:09:54.799083+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T16:09:56.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:55 vm01 bash[20728]: cluster 2026-03-09T16:09:54.799083+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-09T16:09:56.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:55 vm01 bash[20728]: cluster 2026-03-09T16:09:54.867970+0000 mgr.y (mgr.14520) 616 : cluster [DBG] pgmap v1094: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:56.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:55 vm01 bash[20728]: cluster 2026-03-09T16:09:54.867970+0000 mgr.y (mgr.14520) 616 : cluster [DBG] pgmap v1094: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:57.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:09:56 vm09 
bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:09:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:56 vm09 bash[22983]: cluster 2026-03-09T16:09:55.831471+0000 mon.a (mon.0) 3645 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T16:09:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:56 vm09 bash[22983]: cluster 2026-03-09T16:09:55.831471+0000 mon.a (mon.0) 3645 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T16:09:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:56 vm09 bash[22983]: audit 2026-03-09T16:09:55.834829+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:56 vm09 bash[22983]: audit 2026-03-09T16:09:55.834829+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:56 vm09 bash[22983]: audit 2026-03-09T16:09:55.835885+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:56 vm09 bash[22983]: audit 2026-03-09T16:09:55.835885+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:56 vm01 bash[28152]: cluster 2026-03-09T16:09:55.831471+0000 mon.a (mon.0) 3645 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:56 vm01 bash[28152]: cluster 2026-03-09T16:09:55.831471+0000 mon.a (mon.0) 3645 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:56 vm01 bash[28152]: audit 2026-03-09T16:09:55.834829+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:56 vm01 bash[28152]: audit 2026-03-09T16:09:55.834829+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:56 vm01 bash[28152]: audit 2026-03-09T16:09:55.835885+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:56 vm01 bash[28152]: audit 2026-03-09T16:09:55.835885+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:56 vm01 bash[20728]: cluster 2026-03-09T16:09:55.831471+0000 mon.a (mon.0) 3645 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:56 vm01 bash[20728]: cluster 2026-03-09T16:09:55.831471+0000 mon.a (mon.0) 3645 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:56 vm01 bash[20728]: audit 2026-03-09T16:09:55.834829+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:56 vm01 bash[20728]: audit 2026-03-09T16:09:55.834829+0000 mon.c (mon.2) 641 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:56 vm01 bash[20728]: audit 2026-03-09T16:09:55.835885+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:57.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:56 vm01 bash[20728]: audit 2026-03-09T16:09:55.835885+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:09:58.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: audit 2026-03-09T16:09:56.810033+0000 mon.a (mon.0) 3647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: audit 2026-03-09T16:09:56.810033+0000 mon.a (mon.0) 3647 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: cluster 2026-03-09T16:09:56.819866+0000 mon.a (mon.0) 3648 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T16:09:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: cluster 2026-03-09T16:09:56.819866+0000 mon.a (mon.0) 3648 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T16:09:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: audit 2026-03-09T16:09:56.855234+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: audit 2026-03-09T16:09:56.855234+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: audit 2026-03-09T16:09:56.855641+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: audit 2026-03-09T16:09:56.855641+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: audit 2026-03-09T16:09:56.863086+0000 mgr.y (mgr.14520) 617 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: audit 2026-03-09T16:09:56.863086+0000 mgr.y (mgr.14520) 617 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: cluster 2026-03-09T16:09:56.868325+0000 mgr.y (mgr.14520) 618 : cluster [DBG] pgmap v1097: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:58.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:57 vm09 bash[22983]: cluster 2026-03-09T16:09:56.868325+0000 mgr.y (mgr.14520) 618 : cluster [DBG] pgmap v1097: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: audit 2026-03-09T16:09:56.810033+0000 mon.a (mon.0) 3647 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: audit 2026-03-09T16:09:56.810033+0000 mon.a (mon.0) 3647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: cluster 2026-03-09T16:09:56.819866+0000 mon.a (mon.0) 3648 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: cluster 2026-03-09T16:09:56.819866+0000 mon.a (mon.0) 3648 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: audit 2026-03-09T16:09:56.855234+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: audit 2026-03-09T16:09:56.855234+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: audit 2026-03-09T16:09:56.855641+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: audit 2026-03-09T16:09:56.855641+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: audit 2026-03-09T16:09:56.863086+0000 mgr.y (mgr.14520) 617 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: audit 2026-03-09T16:09:56.863086+0000 mgr.y (mgr.14520) 617 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: cluster 2026-03-09T16:09:56.868325+0000 mgr.y (mgr.14520) 618 : cluster [DBG] pgmap v1097: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:57 vm01 bash[28152]: cluster 2026-03-09T16:09:56.868325+0000 mgr.y (mgr.14520) 618 : cluster [DBG] pgmap v1097: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: audit 2026-03-09T16:09:56.810033+0000 mon.a (mon.0) 3647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:58.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: audit 2026-03-09T16:09:56.810033+0000 mon.a (mon.0) 3647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:09:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: cluster 2026-03-09T16:09:56.819866+0000 mon.a (mon.0) 3648 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T16:09:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: cluster 2026-03-09T16:09:56.819866+0000 mon.a (mon.0) 3648 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-09T16:09:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: audit 2026-03-09T16:09:56.855234+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: audit 2026-03-09T16:09:56.855234+0000 mon.c (mon.2) 642 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: audit 2026-03-09T16:09:56.855641+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: audit 2026-03-09T16:09:56.855641+0000 mon.a (mon.0) 3649 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:09:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: audit 2026-03-09T16:09:56.863086+0000 mgr.y (mgr.14520) 617 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: audit 2026-03-09T16:09:56.863086+0000 mgr.y (mgr.14520) 617 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:09:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: cluster 2026-03-09T16:09:56.868325+0000 mgr.y (mgr.14520) 618 : cluster [DBG] pgmap v1097: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:58.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:57 vm01 bash[20728]: cluster 2026-03-09T16:09:56.868325+0000 mgr.y (mgr.14520) 618 : cluster [DBG] pgmap v1097: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: audit 2026-03-09T16:09:57.840858+0000 mon.a (mon.0) 3650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: audit 2026-03-09T16:09:57.840858+0000 mon.a (mon.0) 3650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: cluster 2026-03-09T16:09:57.843723+0000 mon.a (mon.0) 3651 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: cluster 2026-03-09T16:09:57.843723+0000 mon.a (mon.0) 3651 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: audit 2026-03-09T16:09:57.848441+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: audit 2026-03-09T16:09:57.848441+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: audit 2026-03-09T16:09:57.849122+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: audit 2026-03-09T16:09:57.849122+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: audit 2026-03-09T16:09:58.845106+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: audit 2026-03-09T16:09:58.845106+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: audit 2026-03-09T16:09:58.852940+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: audit 2026-03-09T16:09:58.852940+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: cluster 2026-03-09T16:09:58.858707+0000 mon.a (mon.0) 3654 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T16:09:59.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:58 vm09 bash[22983]: cluster 2026-03-09T16:09:58.858707+0000 mon.a (mon.0) 3654 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: audit 2026-03-09T16:09:57.840858+0000 mon.a (mon.0) 3650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: audit 2026-03-09T16:09:57.840858+0000 mon.a (mon.0) 3650 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: cluster 2026-03-09T16:09:57.843723+0000 mon.a (mon.0) 3651 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: cluster 2026-03-09T16:09:57.843723+0000 mon.a (mon.0) 3651 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: audit 2026-03-09T16:09:57.848441+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: audit 2026-03-09T16:09:57.848441+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: audit 2026-03-09T16:09:57.849122+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: audit 2026-03-09T16:09:57.849122+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: audit 2026-03-09T16:09:58.845106+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: audit 2026-03-09T16:09:58.845106+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: audit 2026-03-09T16:09:58.852940+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: audit 2026-03-09T16:09:58.852940+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: cluster 2026-03-09T16:09:58.858707+0000 mon.a (mon.0) 3654 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T16:09:59.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:58 vm01 bash[28152]: cluster 2026-03-09T16:09:58.858707+0000 mon.a (mon.0) 3654 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: audit 2026-03-09T16:09:57.840858+0000 mon.a (mon.0) 3650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: audit 2026-03-09T16:09:57.840858+0000 mon.a (mon.0) 3650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: cluster 2026-03-09T16:09:57.843723+0000 mon.a (mon.0) 3651 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: cluster 2026-03-09T16:09:57.843723+0000 mon.a (mon.0) 3651 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: audit 2026-03-09T16:09:57.848441+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: audit 2026-03-09T16:09:57.848441+0000 mon.c (mon.2) 643 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: audit 2026-03-09T16:09:57.849122+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: audit 2026-03-09T16:09:57.849122+0000 mon.a (mon.0) 3652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: audit 2026-03-09T16:09:58.845106+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: audit 2026-03-09T16:09:58.845106+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: audit 2026-03-09T16:09:58.852940+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: audit 2026-03-09T16:09:58.852940+0000 mon.c (mon.2) 644 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: cluster 2026-03-09T16:09:58.858707+0000 mon.a (mon.0) 3654 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T16:09:59.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:58 vm01 bash[20728]: cluster 2026-03-09T16:09:58.858707+0000 mon.a (mon.0) 3654 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: audit 2026-03-09T16:09:58.859088+0000 mon.a (mon.0) 3655 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: audit 2026-03-09T16:09:58.859088+0000 mon.a (mon.0) 3655 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: cluster 2026-03-09T16:09:58.868736+0000 mgr.y (mgr.14520) 619 : cluster [DBG] pgmap v1100: 268 pgs: 15 unknown, 253 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: cluster 2026-03-09T16:09:58.868736+0000 mgr.y (mgr.14520) 619 : cluster [DBG] pgmap v1100: 268 pgs: 15 unknown, 253 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: audit 2026-03-09T16:09:59.522306+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: audit 2026-03-09T16:09:59.522306+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: cluster 2026-03-09T16:09:59.845328+0000 mon.a (mon.0) 3657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: cluster 2026-03-09T16:09:59.845328+0000 mon.a (mon.0) 3657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: audit 2026-03-09T16:09:59.848447+0000 mon.a (mon.0) 3658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]': finished 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: audit 2026-03-09T16:09:59.848447+0000 mon.a (mon.0) 3658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]': finished 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: cluster 2026-03-09T16:09:59.851523+0000 mon.a (mon.0) 3659 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T16:10:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:09:59 vm09 bash[22983]: cluster 2026-03-09T16:09:59.851523+0000 mon.a (mon.0) 3659 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: audit 2026-03-09T16:09:58.859088+0000 mon.a (mon.0) 3655 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: audit 2026-03-09T16:09:58.859088+0000 mon.a (mon.0) 3655 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: cluster 2026-03-09T16:09:58.868736+0000 mgr.y (mgr.14520) 619 : cluster [DBG] pgmap v1100: 268 pgs: 15 unknown, 253 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: cluster 2026-03-09T16:09:58.868736+0000 mgr.y (mgr.14520) 619 : cluster [DBG] pgmap v1100: 268 pgs: 15 unknown, 253 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: audit 2026-03-09T16:09:59.522306+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: audit 2026-03-09T16:09:59.522306+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: cluster 2026-03-09T16:09:59.845328+0000 mon.a (mon.0) 3657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: cluster 2026-03-09T16:09:59.845328+0000 mon.a (mon.0) 3657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: audit 2026-03-09T16:09:59.848447+0000 mon.a (mon.0) 3658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]': finished 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: audit 2026-03-09T16:09:59.848447+0000 mon.a (mon.0) 3658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]': finished 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: cluster 2026-03-09T16:09:59.851523+0000 mon.a (mon.0) 3659 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:09:59 vm01 bash[28152]: cluster 2026-03-09T16:09:59.851523+0000 mon.a (mon.0) 3659 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: audit 2026-03-09T16:09:58.859088+0000 mon.a (mon.0) 3655 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: audit 2026-03-09T16:09:58.859088+0000 mon.a (mon.0) 3655 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]: dispatch 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: cluster 2026-03-09T16:09:58.868736+0000 mgr.y (mgr.14520) 619 : cluster [DBG] pgmap v1100: 268 pgs: 15 unknown, 253 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: cluster 2026-03-09T16:09:58.868736+0000 mgr.y (mgr.14520) 619 : cluster [DBG] pgmap v1100: 268 pgs: 15 unknown, 253 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: audit 2026-03-09T16:09:59.522306+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: audit 2026-03-09T16:09:59.522306+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: cluster 2026-03-09T16:09:59.845328+0000 mon.a (mon.0) 3657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: cluster 2026-03-09T16:09:59.845328+0000 mon.a (mon.0) 3657 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: audit 2026-03-09T16:09:59.848447+0000 mon.a (mon.0) 3658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]': finished 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: audit 2026-03-09T16:09:59.848447+0000 mon.a (mon.0) 3658 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-144", "mode": "readproxy"}]': finished 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: cluster 2026-03-09T16:09:59.851523+0000 mon.a (mon.0) 3659 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T16:10:00.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:09:59 vm01 bash[20728]: cluster 2026-03-09T16:09:59.851523+0000 mon.a (mon.0) 3659 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000097+0000 mon.a (mon.0) 3660 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000097+0000 mon.a (mon.0) 3660 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000110+0000 mon.a (mon.0) 3661 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000110+0000 mon.a (mon.0) 3661 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000115+0000 mon.a (mon.0) 3662 : cluster [WRN] pool 'test-rados-api-vm01-59821-144' with cache_mode readproxy needs hit_set_type to be set but it is not 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000115+0000 mon.a (mon.0) 3662 : cluster [WRN] pool 'test-rados-api-vm01-59821-144' with cache_mode readproxy needs hit_set_type to be set but it is not 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000119+0000 mon.a (mon.0) 3663 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000119+0000 mon.a (mon.0) 3663 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000123+0000 mon.a (mon.0) 3664 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000123+0000 mon.a (mon.0) 3664 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000138+0000 mon.a (mon.0) 3665 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000138+0000 mon.a (mon.0) 3665 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 
2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000142+0000 mon.a (mon.0) 3666 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000142+0000 mon.a (mon.0) 3666 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000146+0000 mon.a (mon.0) 3667 : cluster [WRN] application not enabled on pool 'test-rados-api-vm01-59821-111' 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000146+0000 mon.a (mon.0) 3667 : cluster [WRN] application not enabled on pool 'test-rados-api-vm01-59821-111' 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000149+0000 mon.a (mon.0) 3668 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:10:01.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:00 vm01 bash[28152]: cluster 2026-03-09T16:10:00.000149+0000 mon.a (mon.0) 3668 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000097+0000 mon.a (mon.0) 3660 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000097+0000 mon.a (mon.0) 3660 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000110+0000 mon.a (mon.0) 3661 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000110+0000 mon.a (mon.0) 3661 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000115+0000 mon.a (mon.0) 3662 : cluster [WRN] pool 'test-rados-api-vm01-59821-144' with cache_mode readproxy needs hit_set_type to be set but it is not 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000115+0000 mon.a (mon.0) 3662 : cluster [WRN] pool 'test-rados-api-vm01-59821-144' with cache_mode readproxy needs hit_set_type to be set but it is not 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000119+0000 mon.a (mon.0) 3663 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000119+0000 mon.a (mon.0) 3663 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have
an application enabled 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000123+0000 mon.a (mon.0) 3664 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000123+0000 mon.a (mon.0) 3664 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000138+0000 mon.a (mon.0) 3665 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000138+0000 mon.a (mon.0) 3665 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000142+0000 mon.a (mon.0) 3666 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000142+0000 mon.a (mon.0) 3666 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000146+0000 mon.a (mon.0) 3667 : cluster [WRN] application not enabled on pool 'test-rados-api-vm01-59821-111' 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000146+0000 mon.a (mon.0) 3667 : cluster [WRN] application not enabled on pool 'test-rados-api-vm01-59821-111' 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000149+0000 mon.a (mon.0) 3668 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:10:01.177 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:00 vm01 bash[20728]: cluster 2026-03-09T16:10:00.000149+0000 mon.a (mon.0) 3668 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
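Note: the HEALTH_WARN detail above follows directly from the cache-tier step: CACHE_POOL_NO_HIT_SET fires because the cache pool was switched to readproxy without a hit_set configured, and POOL_APP_NOT_ENABLED covers the helper pools the API tests create without tagging an application. On a real cluster (rather than these throwaway test pools) the warnings would normally be cleared with something like the following sketch; the hit_set values are illustrative, not taken from this run:

    ceph osd pool set test-rados-api-vm01-59821-144 hit_set_type bloom
    ceph osd pool set test-rados-api-vm01-59821-144 hit_set_count 8      # illustrative value
    ceph osd pool set test-rados-api-vm01-59821-144 hit_set_period 60    # illustrative value
    ceph osd pool application enable test-rados-api-vm01-59821-111 rados --yes-i-really-mean-it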
2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000097+0000 mon.a (mon.0) 3660 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000097+0000 mon.a (mon.0) 3660 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000110+0000 mon.a (mon.0) 3661 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000110+0000 mon.a (mon.0) 3661 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000115+0000 mon.a (mon.0) 3662 : cluster [WRN] pool 'test-rados-api-vm01-59821-144' with cache_mode readproxy needs hit_set_type to be set but it is not 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000115+0000 mon.a (mon.0) 3662 : cluster [WRN] pool 'test-rados-api-vm01-59821-144' with cache_mode readproxy needs hit_set_type to be set but it is not 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000119+0000 mon.a (mon.0) 3663 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000119+0000 mon.a (mon.0) 3663 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000123+0000 mon.a (mon.0) 3664 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000123+0000 mon.a (mon.0) 3664 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000138+0000 mon.a (mon.0) 3665 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000138+0000 mon.a (mon.0) 3665 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000142+0000 mon.a (mon.0) 3666 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000142+0000 mon.a (mon.0) 3666 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 
bash[22983]: cluster 2026-03-09T16:10:00.000146+0000 mon.a (mon.0) 3667 : cluster [WRN] application not enabled on pool 'test-rados-api-vm01-59821-111' 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000146+0000 mon.a (mon.0) 3667 : cluster [WRN] application not enabled on pool 'test-rados-api-vm01-59821-111' 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000149+0000 mon.a (mon.0) 3668 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:10:01.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:00 vm09 bash[22983]: cluster 2026-03-09T16:10:00.000149+0000 mon.a (mon.0) 3668 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:10:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:01 vm09 bash[22983]: cluster 2026-03-09T16:10:00.869288+0000 mgr.y (mgr.14520) 620 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 1 op/s 2026-03-09T16:10:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:01 vm09 bash[22983]: cluster 2026-03-09T16:10:00.869288+0000 mgr.y (mgr.14520) 620 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 1 op/s 2026-03-09T16:10:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:01 vm01 bash[28152]: cluster 2026-03-09T16:10:00.869288+0000 mgr.y (mgr.14520) 620 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 1 op/s 2026-03-09T16:10:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:01 vm01 bash[28152]: cluster 2026-03-09T16:10:00.869288+0000 mgr.y (mgr.14520) 620 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 1 op/s 2026-03-09T16:10:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:01 vm01 bash[20728]: cluster 2026-03-09T16:10:00.869288+0000 mgr.y (mgr.14520) 620 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 1 op/s 2026-03-09T16:10:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:01 vm01 bash[20728]: cluster 2026-03-09T16:10:00.869288+0000 mgr.y (mgr.14520) 620 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 1 op/s 2026-03-09T16:10:03.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:10:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:10:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:10:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:03 vm09 bash[22983]: cluster 2026-03-09T16:10:02.869656+0000 mgr.y (mgr.14520) 621 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 0 op/s 2026-03-09T16:10:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:03 vm09 bash[22983]: cluster 2026-03-09T16:10:02.869656+0000 mgr.y (mgr.14520) 621 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr,
0 op/s 2026-03-09T16:10:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:03 vm01 bash[28152]: cluster 2026-03-09T16:10:02.869656+0000 mgr.y (mgr.14520) 621 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 0 op/s 2026-03-09T16:10:04.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:03 vm01 bash[28152]: cluster 2026-03-09T16:10:02.869656+0000 mgr.y (mgr.14520) 621 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 0 op/s 2026-03-09T16:10:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:03 vm01 bash[20728]: cluster 2026-03-09T16:10:02.869656+0000 mgr.y (mgr.14520) 621 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 0 op/s 2026-03-09T16:10:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:03 vm01 bash[20728]: cluster 2026-03-09T16:10:02.869656+0000 mgr.y (mgr.14520) 621 : cluster [DBG] pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 0 op/s 2026-03-09T16:10:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:06 vm01 bash[28152]: cluster 2026-03-09T16:10:04.870488+0000 mgr.y (mgr.14520) 622 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 145 B/s wr, 1 op/s 2026-03-09T16:10:06.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:06 vm01 bash[28152]: cluster 2026-03-09T16:10:04.870488+0000 mgr.y (mgr.14520) 622 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 145 B/s wr, 1 op/s 2026-03-09T16:10:06.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:06 vm01 bash[20728]: cluster 2026-03-09T16:10:04.870488+0000 mgr.y (mgr.14520) 622 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 145 B/s wr, 1 op/s 2026-03-09T16:10:06.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:06 vm01 bash[20728]: cluster 2026-03-09T16:10:04.870488+0000 mgr.y (mgr.14520) 622 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 145 B/s wr, 1 op/s 2026-03-09T16:10:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:06 vm09 bash[22983]: cluster 2026-03-09T16:10:04.870488+0000 mgr.y (mgr.14520) 622 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 145 B/s wr, 1 op/s 2026-03-09T16:10:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:06 vm09 bash[22983]: cluster 2026-03-09T16:10:04.870488+0000 mgr.y (mgr.14520) 622 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 145 B/s wr, 1 op/s 2026-03-09T16:10:07.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:10:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:10:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:07 vm09 bash[22983]: audit 2026-03-09T16:10:06.867465+0000 mgr.y (mgr.14520) 623 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:07 vm09 bash[22983]: audit 
2026-03-09T16:10:06.867465+0000 mgr.y (mgr.14520) 623 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:07 vm09 bash[22983]: cluster 2026-03-09T16:10:06.870768+0000 mgr.y (mgr.14520) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:10:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:07 vm09 bash[22983]: cluster 2026-03-09T16:10:06.870768+0000 mgr.y (mgr.14520) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:10:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:07 vm01 bash[28152]: audit 2026-03-09T16:10:06.867465+0000 mgr.y (mgr.14520) 623 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:07 vm01 bash[28152]: audit 2026-03-09T16:10:06.867465+0000 mgr.y (mgr.14520) 623 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:07 vm01 bash[28152]: cluster 2026-03-09T16:10:06.870768+0000 mgr.y (mgr.14520) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:10:07.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:07 vm01 bash[28152]: cluster 2026-03-09T16:10:06.870768+0000 mgr.y (mgr.14520) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:10:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:07 vm01 bash[20728]: audit 2026-03-09T16:10:06.867465+0000 mgr.y (mgr.14520) 623 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:07 vm01 bash[20728]: audit 2026-03-09T16:10:06.867465+0000 mgr.y (mgr.14520) 623 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:07 vm01 bash[20728]: cluster 2026-03-09T16:10:06.870768+0000 mgr.y (mgr.14520) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:10:07.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:07 vm01 bash[20728]: cluster 2026-03-09T16:10:06.870768+0000 mgr.y (mgr.14520) 624 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T16:10:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:09 vm01 bash[28152]: cluster 2026-03-09T16:10:08.871388+0000 mgr.y (mgr.14520) 625 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:10.176 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:09 vm01 bash[28152]: cluster 2026-03-09T16:10:08.871388+0000 mgr.y (mgr.14520) 625 : cluster [DBG] pgmap v1106: 268 pgs: 268 
active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:10.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:09 vm01 bash[20728]: cluster 2026-03-09T16:10:08.871388+0000 mgr.y (mgr.14520) 625 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:10.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:09 vm01 bash[20728]: cluster 2026-03-09T16:10:08.871388+0000 mgr.y (mgr.14520) 625 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:09 vm09 bash[22983]: cluster 2026-03-09T16:10:08.871388+0000 mgr.y (mgr.14520) 625 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:09 vm09 bash[22983]: cluster 2026-03-09T16:10:08.871388+0000 mgr.y (mgr.14520) 625 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:11.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:10 vm09 bash[22983]: audit 2026-03-09T16:10:09.980307+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:11.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:10 vm09 bash[22983]: audit 2026-03-09T16:10:09.980307+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:11.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:10 vm09 bash[22983]: audit 2026-03-09T16:10:09.980591+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:11.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:10 vm09 bash[22983]: audit 2026-03-09T16:10:09.980591+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:11.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:10 vm01 bash[28152]: audit 2026-03-09T16:10:09.980307+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:11.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:10 vm01 bash[28152]: audit 2026-03-09T16:10:09.980307+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:11.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:10 vm01 bash[28152]: audit 2026-03-09T16:10:09.980591+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:11.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:10 vm01 bash[28152]: audit 2026-03-09T16:10:09.980591+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:11.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:10 vm01 bash[20728]: audit 2026-03-09T16:10:09.980307+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:11.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:10 vm01 bash[20728]: audit 2026-03-09T16:10:09.980307+0000 mon.c (mon.2) 645 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:11.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:10 vm01 bash[20728]: audit 2026-03-09T16:10:09.980591+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:11.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:10 vm01 bash[20728]: audit 2026-03-09T16:10:09.980591+0000 mon.a (mon.0) 3669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:11 vm09 bash[22983]: cluster 2026-03-09T16:10:10.871939+0000 mgr.y (mgr.14520) 626 : cluster [DBG] pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1022 B/s rd, 0 op/s 2026-03-09T16:10:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:11 vm09 bash[22983]: cluster 2026-03-09T16:10:10.871939+0000 mgr.y (mgr.14520) 626 : cluster [DBG] pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1022 B/s rd, 0 op/s 2026-03-09T16:10:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:11 vm09 bash[22983]: audit 2026-03-09T16:10:10.943778+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:11 vm09 bash[22983]: audit 2026-03-09T16:10:10.943778+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:11 vm09 bash[22983]: cluster 2026-03-09T16:10:10.950663+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T16:10:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:11 vm09 bash[22983]: cluster 2026-03-09T16:10:10.950663+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T16:10:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:11 vm09 bash[22983]: audit 2026-03-09T16:10:10.955112+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:11 vm09 bash[22983]: audit 2026-03-09T16:10:10.955112+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:11 vm09 bash[22983]: audit 2026-03-09T16:10:10.955599+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:12.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:11 vm09 bash[22983]: audit 2026-03-09T16:10:10.955599+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:11 vm01 bash[28152]: cluster 2026-03-09T16:10:10.871939+0000 mgr.y (mgr.14520) 626 : cluster [DBG] pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1022 B/s rd, 0 op/s 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:11 vm01 bash[28152]: cluster 2026-03-09T16:10:10.871939+0000 mgr.y (mgr.14520) 626 : cluster [DBG] pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1022 B/s rd, 0 op/s 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:11 vm01 bash[28152]: audit 2026-03-09T16:10:10.943778+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:11 vm01 bash[28152]: audit 2026-03-09T16:10:10.943778+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:11 vm01 bash[28152]: cluster 2026-03-09T16:10:10.950663+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:11 vm01 bash[28152]: cluster 2026-03-09T16:10:10.950663+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:11 vm01 bash[28152]: audit 2026-03-09T16:10:10.955112+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:11 vm01 bash[28152]: audit 2026-03-09T16:10:10.955112+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:11 vm01 bash[28152]: audit 2026-03-09T16:10:10.955599+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:11 vm01 bash[28152]: audit 2026-03-09T16:10:10.955599+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:11 vm01 bash[20728]: cluster 2026-03-09T16:10:10.871939+0000 mgr.y (mgr.14520) 626 : cluster [DBG] pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1022 B/s rd, 0 op/s 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:11 vm01 bash[20728]: cluster 2026-03-09T16:10:10.871939+0000 mgr.y (mgr.14520) 626 : cluster [DBG] pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1022 B/s rd, 0 op/s 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:11 vm01 bash[20728]: audit 2026-03-09T16:10:10.943778+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:11 vm01 bash[20728]: audit 2026-03-09T16:10:10.943778+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:11 vm01 bash[20728]: cluster 2026-03-09T16:10:10.950663+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:11 vm01 bash[20728]: cluster 2026-03-09T16:10:10.950663+0000 mon.a (mon.0) 3671 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:11 vm01 bash[20728]: audit 2026-03-09T16:10:10.955112+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:11 vm01 bash[20728]: audit 2026-03-09T16:10:10.955112+0000 mon.c (mon.2) 646 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:11 vm01 bash[20728]: audit 2026-03-09T16:10:10.955599+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:12.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:11 vm01 bash[20728]: audit 2026-03-09T16:10:10.955599+0000 mon.a (mon.0) 3672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: cluster 2026-03-09T16:10:11.944039+0000 mon.a (mon.0) 3673 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: cluster 2026-03-09T16:10:11.944039+0000 mon.a (mon.0) 3673 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: audit 2026-03-09T16:10:11.948200+0000 mon.a (mon.0) 3674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: audit 2026-03-09T16:10:11.948200+0000 mon.a (mon.0) 3674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: cluster 2026-03-09T16:10:11.956183+0000 mon.a (mon.0) 3675 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: cluster 2026-03-09T16:10:11.956183+0000 mon.a (mon.0) 3675 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: audit 2026-03-09T16:10:12.002926+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: audit 2026-03-09T16:10:12.002926+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: audit 2026-03-09T16:10:12.003349+0000 mon.a (mon.0) 3676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: audit 2026-03-09T16:10:12.003349+0000 mon.a (mon.0) 3676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: audit 2026-03-09T16:10:12.004330+0000 mon.c (mon.2) 648 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: audit 2026-03-09T16:10:12.004330+0000 mon.c (mon.2) 648 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: audit 2026-03-09T16:10:12.004653+0000 mon.a (mon.0) 3677 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:12 vm01 bash[20728]: audit 2026-03-09T16:10:12.004653+0000 mon.a (mon.0) 3677 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:10:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:10:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: cluster 2026-03-09T16:10:11.944039+0000 mon.a (mon.0) 3673 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: cluster 2026-03-09T16:10:11.944039+0000 mon.a (mon.0) 3673 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: audit 2026-03-09T16:10:11.948200+0000 mon.a (mon.0) 3674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: audit 2026-03-09T16:10:11.948200+0000 mon.a (mon.0) 3674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: cluster 2026-03-09T16:10:11.956183+0000 mon.a (mon.0) 3675 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: cluster 2026-03-09T16:10:11.956183+0000 mon.a (mon.0) 3675 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: audit 2026-03-09T16:10:12.002926+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: audit 2026-03-09T16:10:12.002926+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: audit 2026-03-09T16:10:12.003349+0000 mon.a (mon.0) 3676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: audit 2026-03-09T16:10:12.003349+0000 mon.a (mon.0) 3676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: audit 2026-03-09T16:10:12.004330+0000 mon.c (mon.2) 648 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: audit 2026-03-09T16:10:12.004330+0000 mon.c (mon.2) 648 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: audit 2026-03-09T16:10:12.004653+0000 mon.a (mon.0) 3677 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.177 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:12 vm01 bash[28152]: audit 2026-03-09T16:10:12.004653+0000 mon.a (mon.0) 3677 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: cluster 2026-03-09T16:10:11.944039+0000 mon.a (mon.0) 3673 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:13.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: cluster 2026-03-09T16:10:11.944039+0000 mon.a (mon.0) 3673 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: audit 2026-03-09T16:10:11.948200+0000 mon.a (mon.0) 3674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: audit 2026-03-09T16:10:11.948200+0000 mon.a (mon.0) 3674 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]': finished 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: cluster 2026-03-09T16:10:11.956183+0000 mon.a (mon.0) 3675 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: cluster 2026-03-09T16:10:11.956183+0000 mon.a (mon.0) 3675 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: audit 2026-03-09T16:10:12.002926+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: audit 2026-03-09T16:10:12.002926+0000 mon.c (mon.2) 647 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: audit 2026-03-09T16:10:12.003349+0000 mon.a (mon.0) 3676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: audit 2026-03-09T16:10:12.003349+0000 mon.a (mon.0) 3676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: audit 2026-03-09T16:10:12.004330+0000 mon.c (mon.2) 648 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: audit 2026-03-09T16:10:12.004330+0000 mon.c (mon.2) 648 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: audit 2026-03-09T16:10:12.004653+0000 mon.a (mon.0) 3677 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:13.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:12 vm09 bash[22983]: audit 2026-03-09T16:10:12.004653+0000 mon.a (mon.0) 3677 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-144"}]: dispatch 2026-03-09T16:10:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:13 vm09 bash[22983]: cluster 2026-03-09T16:10:12.872321+0000 mgr.y (mgr.14520) 627 : cluster [DBG] pgmap v1110: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:10:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:13 vm09 bash[22983]: cluster 2026-03-09T16:10:12.872321+0000 mgr.y (mgr.14520) 627 : cluster [DBG] pgmap v1110: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:10:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:13 vm09 bash[22983]: cluster 2026-03-09T16:10:12.963883+0000 mon.a (mon.0) 3678 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:13 vm09 bash[22983]: cluster 2026-03-09T16:10:12.963883+0000 mon.a (mon.0) 3678 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:13 vm09 bash[22983]: cluster 2026-03-09T16:10:12.989773+0000 mon.a (mon.0) 3679 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T16:10:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:13 vm09 bash[22983]: cluster 2026-03-09T16:10:12.989773+0000 mon.a (mon.0) 3679 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:13 vm01 bash[28152]: cluster 2026-03-09T16:10:12.872321+0000 mgr.y (mgr.14520) 627 : cluster [DBG] pgmap v1110: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:13 vm01 bash[28152]: cluster 2026-03-09T16:10:12.872321+0000 mgr.y (mgr.14520) 627 : cluster [DBG] pgmap v1110: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:13 vm01 bash[28152]: cluster 2026-03-09T16:10:12.963883+0000 mon.a (mon.0) 3678 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:13 vm01 bash[28152]: cluster 2026-03-09T16:10:12.963883+0000 mon.a (mon.0) 3678 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:13 vm01 bash[28152]: cluster 2026-03-09T16:10:12.989773+0000 mon.a (mon.0) 3679 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:13 vm01 bash[28152]: cluster 2026-03-09T16:10:12.989773+0000 mon.a (mon.0) 3679 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:13 vm01 bash[20728]: cluster 2026-03-09T16:10:12.872321+0000 mgr.y (mgr.14520) 627 : cluster [DBG] pgmap v1110: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:13 vm01 bash[20728]: 
cluster 2026-03-09T16:10:12.872321+0000 mgr.y (mgr.14520) 627 : cluster [DBG] pgmap v1110: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:13 vm01 bash[20728]: cluster 2026-03-09T16:10:12.963883+0000 mon.a (mon.0) 3678 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:13 vm01 bash[20728]: cluster 2026-03-09T16:10:12.963883+0000 mon.a (mon.0) 3678 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:13 vm01 bash[20728]: cluster 2026-03-09T16:10:12.989773+0000 mon.a (mon.0) 3679 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T16:10:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:13 vm01 bash[20728]: cluster 2026-03-09T16:10:12.989773+0000 mon.a (mon.0) 3679 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-09T16:10:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:15 vm09 bash[22983]: cluster 2026-03-09T16:10:13.992269+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T16:10:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:15 vm09 bash[22983]: cluster 2026-03-09T16:10:13.992269+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T16:10:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:15 vm09 bash[22983]: audit 2026-03-09T16:10:14.014579+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:15 vm09 bash[22983]: audit 2026-03-09T16:10:14.014579+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:15 vm09 bash[22983]: audit 2026-03-09T16:10:14.017533+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:15 vm09 bash[22983]: audit 2026-03-09T16:10:14.017533+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:15 vm09 bash[22983]: audit 2026-03-09T16:10:14.529039+0000 mon.a (mon.0) 3682 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:15.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:15 vm09 bash[22983]: audit 2026-03-09T16:10:14.529039+0000 mon.a (mon.0) 3682 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:15 vm01 bash[28152]: cluster 2026-03-09T16:10:13.992269+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:15 vm01 bash[28152]: cluster 2026-03-09T16:10:13.992269+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:15 vm01 bash[28152]: audit 2026-03-09T16:10:14.014579+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:15 vm01 bash[28152]: audit 2026-03-09T16:10:14.014579+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:15 vm01 bash[28152]: audit 2026-03-09T16:10:14.017533+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:15 vm01 bash[28152]: audit 2026-03-09T16:10:14.017533+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:15 vm01 bash[28152]: audit 2026-03-09T16:10:14.529039+0000 mon.a (mon.0) 3682 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:15 vm01 bash[28152]: audit 2026-03-09T16:10:14.529039+0000 mon.a (mon.0) 3682 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:15 vm01 bash[20728]: cluster 2026-03-09T16:10:13.992269+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:15 vm01 bash[20728]: cluster 2026-03-09T16:10:13.992269+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:15 vm01 bash[20728]: audit 2026-03-09T16:10:14.014579+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:15 vm01 bash[20728]: audit 2026-03-09T16:10:14.014579+0000 mon.c (mon.2) 649 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:15 vm01 bash[20728]: audit 2026-03-09T16:10:14.017533+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:15 vm01 bash[20728]: audit 2026-03-09T16:10:14.017533+0000 mon.a (mon.0) 3681 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:15 vm01 bash[20728]: audit 2026-03-09T16:10:14.529039+0000 mon.a (mon.0) 3682 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:15 vm01 bash[20728]: audit 2026-03-09T16:10:14.529039+0000 mon.a (mon.0) 3682 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:16 vm09 bash[22983]: cluster 2026-03-09T16:10:14.873031+0000 mgr.y (mgr.14520) 628 : cluster [DBG] pgmap v1113: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:16 vm09 bash[22983]: cluster 2026-03-09T16:10:14.873031+0000 mgr.y (mgr.14520) 628 : cluster [DBG] pgmap v1113: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:16 vm09 bash[22983]: audit 2026-03-09T16:10:14.992485+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:16 vm09 bash[22983]: audit 2026-03-09T16:10:14.992485+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:16 vm09 bash[22983]: cluster 2026-03-09T16:10:14.996160+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T16:10:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:16 vm09 bash[22983]: cluster 2026-03-09T16:10:14.996160+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T16:10:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:16 vm09 bash[22983]: audit 2026-03-09T16:10:15.054196+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:16 vm09 bash[22983]: audit 2026-03-09T16:10:15.054196+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:16 vm09 bash[22983]: audit 2026-03-09T16:10:15.054475+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:16.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:16 vm09 bash[22983]: audit 2026-03-09T16:10:15.054475+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:16 vm01 bash[28152]: cluster 2026-03-09T16:10:14.873031+0000 mgr.y (mgr.14520) 628 : cluster [DBG] pgmap v1113: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:16 vm01 bash[28152]: cluster 2026-03-09T16:10:14.873031+0000 mgr.y (mgr.14520) 628 : cluster [DBG] pgmap v1113: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:16 vm01 bash[28152]: audit 2026-03-09T16:10:14.992485+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:16 vm01 bash[28152]: audit 2026-03-09T16:10:14.992485+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:16 vm01 bash[28152]: cluster 2026-03-09T16:10:14.996160+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:16 vm01 bash[28152]: cluster 2026-03-09T16:10:14.996160+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:16 vm01 bash[28152]: audit 2026-03-09T16:10:15.054196+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:16 vm01 bash[28152]: audit 2026-03-09T16:10:15.054196+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:16 vm01 bash[28152]: audit 2026-03-09T16:10:15.054475+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:16 vm01 bash[28152]: audit 2026-03-09T16:10:15.054475+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:16 vm01 bash[20728]: cluster 2026-03-09T16:10:14.873031+0000 mgr.y (mgr.14520) 628 : cluster [DBG] pgmap v1113: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:16 vm01 bash[20728]: cluster 2026-03-09T16:10:14.873031+0000 mgr.y (mgr.14520) 628 : cluster [DBG] pgmap v1113: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:16 vm01 bash[20728]: audit 2026-03-09T16:10:14.992485+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:16 vm01 bash[20728]: audit 2026-03-09T16:10:14.992485+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:16 vm01 bash[20728]: cluster 2026-03-09T16:10:14.996160+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:16 vm01 bash[20728]: cluster 2026-03-09T16:10:14.996160+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:16 vm01 bash[20728]: audit 2026-03-09T16:10:15.054196+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:16 vm01 bash[20728]: audit 2026-03-09T16:10:15.054196+0000 mon.c (mon.2) 650 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:16 vm01 bash[20728]: audit 2026-03-09T16:10:15.054475+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:16 vm01 bash[20728]: audit 2026-03-09T16:10:15.054475+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:17.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:10:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:10:17.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:16.032541+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:17.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:16.032541+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:17.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: cluster 2026-03-09T16:10:16.038610+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T16:10:17.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: cluster 2026-03-09T16:10:16.038610+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:16.042287+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:16.042287+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:16.047725+0000 mon.a (mon.0) 3688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:16.047725+0000 mon.a (mon.0) 3688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:17.036859+0000 mon.a (mon.0) 3689 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:17.036859+0000 mon.a (mon.0) 3689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:17.049599+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:17.049599+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: cluster 2026-03-09T16:10:17.051142+0000 mon.a (mon.0) 3690 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: cluster 2026-03-09T16:10:17.051142+0000 mon.a (mon.0) 3690 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:17.052190+0000 mon.a (mon.0) 3691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:17.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:17 vm09 bash[22983]: audit 2026-03-09T16:10:17.052190+0000 mon.a (mon.0) 3691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:16.032541+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:16.032541+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: cluster 2026-03-09T16:10:16.038610+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: cluster 2026-03-09T16:10:16.038610+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:16.042287+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:16.042287+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:16.047725+0000 mon.a (mon.0) 3688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:16.047725+0000 mon.a (mon.0) 3688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:17.036859+0000 mon.a (mon.0) 3689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:17.036859+0000 mon.a (mon.0) 3689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:17.049599+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:17.049599+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: cluster 2026-03-09T16:10:17.051142+0000 mon.a (mon.0) 3690 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: cluster 2026-03-09T16:10:17.051142+0000 mon.a (mon.0) 3690 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:17.052190+0000 mon.a (mon.0) 3691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:17 vm01 bash[28152]: audit 2026-03-09T16:10:17.052190+0000 mon.a (mon.0) 3691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:16.032541+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:16.032541+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: cluster 2026-03-09T16:10:16.038610+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T16:10:17.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: cluster 2026-03-09T16:10:16.038610+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:16.042287+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:16.042287+0000 mon.c (mon.2) 651 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:16.047725+0000 mon.a (mon.0) 3688 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:16.047725+0000 mon.a (mon.0) 3688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:17.036859+0000 mon.a (mon.0) 3689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:17.036859+0000 mon.a (mon.0) 3689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm01-59821-111", "overlaypool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:17.049599+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:17.049599+0000 mon.c (mon.2) 652 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: cluster 2026-03-09T16:10:17.051142+0000 mon.a (mon.0) 3690 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: cluster 2026-03-09T16:10:17.051142+0000 mon.a (mon.0) 3690 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:17.052190+0000 mon.a (mon.0) 3691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:17.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:17 vm01 bash[20728]: audit 2026-03-09T16:10:17.052190+0000 mon.a (mon.0) 3691 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]: dispatch 2026-03-09T16:10:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:18 vm09 bash[22983]: cluster 2026-03-09T16:10:16.873374+0000 mgr.y (mgr.14520) 629 : cluster [DBG] pgmap v1116: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:18 vm09 bash[22983]: cluster 2026-03-09T16:10:16.873374+0000 mgr.y (mgr.14520) 629 : cluster [DBG] pgmap v1116: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:18 vm09 bash[22983]: audit 2026-03-09T16:10:16.878204+0000 mgr.y (mgr.14520) 630 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:18 vm09 bash[22983]: audit 2026-03-09T16:10:16.878204+0000 mgr.y (mgr.14520) 630 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:18 vm09 bash[22983]: cluster 2026-03-09T16:10:18.036665+0000 mon.a (mon.0) 3692 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:18 vm09 bash[22983]: cluster 2026-03-09T16:10:18.036665+0000 mon.a (mon.0) 3692 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:18 vm09 bash[22983]: audit 2026-03-09T16:10:18.040012+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]': finished 2026-03-09T16:10:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:18 vm09 bash[22983]: audit 2026-03-09T16:10:18.040012+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]': finished 2026-03-09T16:10:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:18 vm09 bash[22983]: cluster 2026-03-09T16:10:18.045371+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T16:10:18.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:18 vm09 bash[22983]: cluster 2026-03-09T16:10:18.045371+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:18 vm01 bash[28152]: cluster 2026-03-09T16:10:16.873374+0000 mgr.y (mgr.14520) 629 : cluster [DBG] pgmap v1116: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:18 vm01 bash[28152]: cluster 2026-03-09T16:10:16.873374+0000 mgr.y (mgr.14520) 629 : cluster [DBG] pgmap v1116: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:18 vm01 bash[28152]: audit 2026-03-09T16:10:16.878204+0000 mgr.y (mgr.14520) 630 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:18 vm01 bash[28152]: audit 2026-03-09T16:10:16.878204+0000 mgr.y (mgr.14520) 630 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:18 vm01 bash[28152]: cluster 2026-03-09T16:10:18.036665+0000 mon.a (mon.0) 3692 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:18 vm01 bash[28152]: cluster 2026-03-09T16:10:18.036665+0000 mon.a (mon.0) 3692 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:18 vm01 bash[28152]: audit 2026-03-09T16:10:18.040012+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]': finished 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:18 vm01 bash[28152]: audit 2026-03-09T16:10:18.040012+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]': finished 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:18 vm01 bash[28152]: cluster 2026-03-09T16:10:18.045371+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:18 vm01 bash[28152]: cluster 2026-03-09T16:10:18.045371+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:18 vm01 bash[20728]: cluster 2026-03-09T16:10:16.873374+0000 mgr.y (mgr.14520) 629 : cluster [DBG] pgmap v1116: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:18 vm01 bash[20728]: cluster 2026-03-09T16:10:16.873374+0000 mgr.y (mgr.14520) 629 : cluster [DBG] pgmap v1116: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:18 vm01 bash[20728]: audit 2026-03-09T16:10:16.878204+0000 mgr.y (mgr.14520) 630 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:18 vm01 bash[20728]: audit 2026-03-09T16:10:16.878204+0000 mgr.y (mgr.14520) 630 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:18 vm01 bash[20728]: cluster 2026-03-09T16:10:18.036665+0000 mon.a (mon.0) 3692 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:18 vm01 bash[20728]: cluster 2026-03-09T16:10:18.036665+0000 mon.a (mon.0) 3692 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:18 vm01 bash[20728]: audit 2026-03-09T16:10:18.040012+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]': finished 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:18 vm01 bash[20728]: audit 2026-03-09T16:10:18.040012+0000 mon.a (mon.0) 3693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm01-59821-146", "mode": "writeback"}]': finished 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:18 vm01 bash[20728]: cluster 2026-03-09T16:10:18.045371+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T16:10:18.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:18 vm01 bash[20728]: cluster 2026-03-09T16:10:18.045371+0000 mon.a (mon.0) 3694 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-09T16:10:19.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:19 vm09 bash[22983]: audit 2026-03-09T16:10:18.088153+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:19.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:19 vm09 bash[22983]: audit 2026-03-09T16:10:18.088153+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:19.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:19 vm09 bash[22983]: audit 2026-03-09T16:10:18.088432+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:19.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:19 vm09 bash[22983]: audit 2026-03-09T16:10:18.088432+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:19.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:19 vm01 bash[28152]: audit 2026-03-09T16:10:18.088153+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:19.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:19 vm01 bash[28152]: audit 2026-03-09T16:10:18.088153+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:19.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:19 vm01 bash[28152]: audit 2026-03-09T16:10:18.088432+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:19.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:19 vm01 bash[28152]: audit 2026-03-09T16:10:18.088432+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:19.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:19 vm01 bash[20728]: audit 2026-03-09T16:10:18.088153+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:19.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:19 vm01 bash[20728]: audit 2026-03-09T16:10:18.088153+0000 mon.c (mon.2) 653 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:19.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:19 vm01 bash[20728]: audit 2026-03-09T16:10:18.088432+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:19.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:19 vm01 bash[20728]: audit 2026-03-09T16:10:18.088432+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: cluster 2026-03-09T16:10:18.873907+0000 mgr.y (mgr.14520) 631 : cluster [DBG] pgmap v1119: 268 pgs: 18 creating+peering, 250 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: cluster 2026-03-09T16:10:18.873907+0000 mgr.y (mgr.14520) 631 : cluster [DBG] pgmap v1119: 268 pgs: 18 creating+peering, 250 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: audit 2026-03-09T16:10:19.069264+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: audit 2026-03-09T16:10:19.069264+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: audit 2026-03-09T16:10:19.072746+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: audit 2026-03-09T16:10:19.072746+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: cluster 2026-03-09T16:10:19.076253+0000 mon.a (mon.0) 3697 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: cluster 2026-03-09T16:10:19.076253+0000 mon.a (mon.0) 3697 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: audit 2026-03-09T16:10:19.077851+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: audit 2026-03-09T16:10:19.077851+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: cluster 2026-03-09T16:10:19.143006+0000 mon.a (mon.0) 3699 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: cluster 2026-03-09T16:10:19.143006+0000 mon.a (mon.0) 3699 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: audit 2026-03-09T16:10:20.073400+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: audit 2026-03-09T16:10:20.073400+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: audit 2026-03-09T16:10:20.077624+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: audit 2026-03-09T16:10:20.077624+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: cluster 2026-03-09T16:10:20.086008+0000 mon.a (mon.0) 3701 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T16:10:20.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:20 vm09 bash[22983]: cluster 2026-03-09T16:10:20.086008+0000 mon.a (mon.0) 3701 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: cluster 2026-03-09T16:10:18.873907+0000 mgr.y (mgr.14520) 631 : cluster [DBG] pgmap v1119: 268 pgs: 18 creating+peering, 250 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: cluster 2026-03-09T16:10:18.873907+0000 mgr.y (mgr.14520) 631 : cluster [DBG] pgmap v1119: 268 pgs: 18 creating+peering, 250 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: audit 2026-03-09T16:10:19.069264+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: audit 2026-03-09T16:10:19.069264+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: audit 2026-03-09T16:10:19.072746+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: audit 2026-03-09T16:10:19.072746+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: cluster 2026-03-09T16:10:19.076253+0000 mon.a (mon.0) 3697 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: cluster 2026-03-09T16:10:19.076253+0000 mon.a (mon.0) 3697 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: audit 2026-03-09T16:10:19.077851+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: audit 2026-03-09T16:10:19.077851+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: cluster 2026-03-09T16:10:19.143006+0000 mon.a (mon.0) 3699 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: cluster 2026-03-09T16:10:19.143006+0000 mon.a (mon.0) 3699 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: audit 2026-03-09T16:10:20.073400+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: audit 2026-03-09T16:10:20.073400+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: audit 2026-03-09T16:10:20.077624+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: audit 2026-03-09T16:10:20.077624+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: cluster 2026-03-09T16:10:20.086008+0000 mon.a (mon.0) 3701 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:20 vm01 bash[28152]: cluster 2026-03-09T16:10:20.086008+0000 mon.a (mon.0) 3701 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: cluster 2026-03-09T16:10:18.873907+0000 mgr.y (mgr.14520) 631 : cluster [DBG] pgmap v1119: 268 pgs: 18 creating+peering, 250 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: cluster 2026-03-09T16:10:18.873907+0000 mgr.y (mgr.14520) 631 : cluster [DBG] pgmap v1119: 268 pgs: 18 creating+peering, 250 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: audit 2026-03-09T16:10:19.069264+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: audit 2026-03-09T16:10:19.069264+0000 mon.a (mon.0) 3696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: audit 2026-03-09T16:10:19.072746+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: audit 2026-03-09T16:10:19.072746+0000 mon.c (mon.2) 654 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: cluster 2026-03-09T16:10:19.076253+0000 mon.a (mon.0) 3697 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: cluster 2026-03-09T16:10:19.076253+0000 mon.a (mon.0) 3697 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: audit 2026-03-09T16:10:19.077851+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: audit 2026-03-09T16:10:19.077851+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: cluster 2026-03-09T16:10:19.143006+0000 mon.a (mon.0) 3699 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: cluster 2026-03-09T16:10:19.143006+0000 mon.a (mon.0) 3699 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: audit 2026-03-09T16:10:20.073400+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: audit 2026-03-09T16:10:20.073400+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: audit 2026-03-09T16:10:20.077624+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: audit 2026-03-09T16:10:20.077624+0000 mon.c (mon.2) 655 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: cluster 2026-03-09T16:10:20.086008+0000 mon.a (mon.0) 3701 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T16:10:20.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:20 vm01 bash[20728]: cluster 2026-03-09T16:10:20.086008+0000 mon.a (mon.0) 3701 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-09T16:10:21.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:21 vm09 bash[22983]: audit 2026-03-09T16:10:20.086547+0000 mon.a (mon.0) 3702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:21.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:21 vm09 bash[22983]: audit 2026-03-09T16:10:20.086547+0000 mon.a (mon.0) 3702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:21 vm09 bash[22983]: cluster 2026-03-09T16:10:21.073280+0000 mon.a (mon.0) 3703 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:21.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:21 vm09 bash[22983]: cluster 2026-03-09T16:10:21.073280+0000 mon.a (mon.0) 3703 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:21 vm01 bash[28152]: audit 2026-03-09T16:10:20.086547+0000 mon.a (mon.0) 3702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:21 vm01 bash[28152]: audit 2026-03-09T16:10:20.086547+0000 mon.a (mon.0) 3702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:21 vm01 bash[28152]: cluster 2026-03-09T16:10:21.073280+0000 mon.a (mon.0) 3703 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:21.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:21 vm01 bash[28152]: cluster 2026-03-09T16:10:21.073280+0000 mon.a (mon.0) 3703 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:21 vm01 bash[20728]: audit 2026-03-09T16:10:20.086547+0000 mon.a (mon.0) 3702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:21 vm01 bash[20728]: audit 2026-03-09T16:10:20.086547+0000 mon.a (mon.0) 3702 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T16:10:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:21 vm01 bash[20728]: cluster 2026-03-09T16:10:21.073280+0000 mon.a (mon.0) 3703 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:21.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:21 vm01 bash[20728]: cluster 2026-03-09T16:10:21.073280+0000 mon.a (mon.0) 3703 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T16:10:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: cluster 2026-03-09T16:10:20.874272+0000 mgr.y (mgr.14520) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:10:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: cluster 2026-03-09T16:10:20.874272+0000 mgr.y (mgr.14520) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:10:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:21.088891+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:10:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:21.088891+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:10:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: cluster 2026-03-09T16:10:21.093047+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: cluster 2026-03-09T16:10:21.093047+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:21.094204+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:21.094204+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:21.094788+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:21.094788+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:22.092567+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:22.092567+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:22.096912+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:22.096912+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: cluster 2026-03-09T16:10:22.099144+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: cluster 2026-03-09T16:10:22.099144+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:22.099781+0000 mon.a (mon.0) 3709 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:22.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:22 vm09 bash[22983]: audit 2026-03-09T16:10:22.099781+0000 mon.a (mon.0) 3709 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:22.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: cluster 2026-03-09T16:10:20.874272+0000 mgr.y (mgr.14520) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:10:22.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: cluster 2026-03-09T16:10:20.874272+0000 mgr.y (mgr.14520) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:10:22.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:21.088891+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:10:22.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:21.088891+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:10:22.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: cluster 2026-03-09T16:10:21.093047+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T16:10:22.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: cluster 2026-03-09T16:10:21.093047+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T16:10:22.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:21.094204+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:21.094204+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:21.094788+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:21.094788+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:22.092567+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:22.092567+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:22.096912+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:22.096912+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: cluster 2026-03-09T16:10:22.099144+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: cluster 2026-03-09T16:10:22.099144+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:22.099781+0000 mon.a (mon.0) 3709 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:22 vm01 bash[20728]: audit 2026-03-09T16:10:22.099781+0000 mon.a (mon.0) 3709 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: cluster 2026-03-09T16:10:20.874272+0000 mgr.y (mgr.14520) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: cluster 2026-03-09T16:10:20.874272+0000 mgr.y (mgr.14520) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:21.088891+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:21.088891+0000 mon.a (mon.0) 3704 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: cluster 2026-03-09T16:10:21.093047+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: cluster 2026-03-09T16:10:21.093047+0000 mon.a (mon.0) 3705 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:21.094204+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:21.094204+0000 mon.c (mon.2) 656 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:21.094788+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:21.094788+0000 mon.a (mon.0) 3706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:22.092567+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:22.092567+0000 mon.a (mon.0) 3707 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:22.096912+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:22.096912+0000 mon.c (mon.2) 657 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: cluster 2026-03-09T16:10:22.099144+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: cluster 2026-03-09T16:10:22.099144+0000 mon.a (mon.0) 3708 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:22.099781+0000 mon.a (mon.0) 3709 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:22.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:22 vm01 bash[28152]: audit 2026-03-09T16:10:22.099781+0000 mon.a (mon.0) 3709 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T16:10:23.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:10:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:10:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:10:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:24 vm09 bash[22983]: cluster 2026-03-09T16:10:22.874646+0000 mgr.y (mgr.14520) 633 : cluster [DBG] pgmap v1125: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:24 vm09 bash[22983]: cluster 2026-03-09T16:10:22.874646+0000 mgr.y (mgr.14520) 633 : cluster [DBG] pgmap v1125: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:24 vm09 bash[22983]: audit 2026-03-09T16:10:23.097793+0000 mon.a (mon.0) 3710 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:10:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:24 vm09 bash[22983]: audit 2026-03-09T16:10:23.097793+0000 mon.a (mon.0) 3710 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:10:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:24 vm09 bash[22983]: cluster 2026-03-09T16:10:23.101993+0000 mon.a (mon.0) 3711 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T16:10:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:24 vm09 bash[22983]: cluster 2026-03-09T16:10:23.101993+0000 mon.a (mon.0) 3711 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:24 vm01 bash[28152]: cluster 2026-03-09T16:10:22.874646+0000 mgr.y (mgr.14520) 633 : cluster [DBG] pgmap v1125: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:24 vm01 bash[28152]: cluster 2026-03-09T16:10:22.874646+0000 mgr.y (mgr.14520) 633 : cluster [DBG] pgmap v1125: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:24 vm01 bash[28152]: audit 2026-03-09T16:10:23.097793+0000 mon.a (mon.0) 3710 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:24 vm01 bash[28152]: audit 2026-03-09T16:10:23.097793+0000 mon.a (mon.0) 3710 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:24 vm01 bash[28152]: cluster 2026-03-09T16:10:23.101993+0000 mon.a (mon.0) 3711 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:24 vm01 bash[28152]: cluster 2026-03-09T16:10:23.101993+0000 mon.a (mon.0) 3711 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:24 vm01 bash[20728]: cluster 2026-03-09T16:10:22.874646+0000 mgr.y (mgr.14520) 633 : cluster [DBG] pgmap v1125: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:24 vm01 bash[20728]: cluster 2026-03-09T16:10:22.874646+0000 mgr.y (mgr.14520) 633 : cluster [DBG] pgmap v1125: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:24 vm01 bash[20728]: audit 2026-03-09T16:10:23.097793+0000 mon.a (mon.0) 3710 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:24 vm01 bash[20728]: audit 2026-03-09T16:10:23.097793+0000 mon.a (mon.0) 3710 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:24 vm01 bash[20728]: cluster 2026-03-09T16:10:23.101993+0000 mon.a (mon.0) 3711 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T16:10:24.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:24 vm01 bash[20728]: cluster 2026-03-09T16:10:23.101993+0000 mon.a (mon.0) 3711 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-09T16:10:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:25 vm01 bash[28152]: cluster 2026-03-09T16:10:24.875410+0000 mgr.y (mgr.14520) 634 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:25 vm01 bash[28152]: cluster 2026-03-09T16:10:24.875410+0000 mgr.y (mgr.14520) 634 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:25 vm01 bash[28152]: cluster 2026-03-09T16:10:25.106985+0000 mon.a (mon.0) 3712 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:10:25.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:25 vm01 bash[28152]: cluster 2026-03-09T16:10:25.106985+0000 mon.a (mon.0) 3712 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:10:25.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:25 vm01 bash[20728]: cluster 2026-03-09T16:10:24.875410+0000 mgr.y (mgr.14520) 634 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:25.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:25 vm01 bash[20728]: cluster 2026-03-09T16:10:24.875410+0000 mgr.y (mgr.14520) 634 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:25.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:25 vm01 bash[20728]: cluster 2026-03-09T16:10:25.106985+0000 mon.a (mon.0) 3712 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:10:25.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:25 vm01 bash[20728]: cluster 2026-03-09T16:10:25.106985+0000 mon.a (mon.0) 3712 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:10:25.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:25 vm09 bash[22983]: cluster 2026-03-09T16:10:24.875410+0000 mgr.y (mgr.14520) 634 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:25.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:25 vm09 bash[22983]: cluster 2026-03-09T16:10:24.875410+0000 mgr.y (mgr.14520) 634 : cluster [DBG] pgmap v1127: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:25.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:25 vm09 bash[22983]: cluster 2026-03-09T16:10:25.106985+0000 mon.a (mon.0) 3712 : cluster [WRN] Health check 
failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:10:25.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:25 vm09 bash[22983]: cluster 2026-03-09T16:10:25.106985+0000 mon.a (mon.0) 3712 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T16:10:27.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:10:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:10:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:27 vm09 bash[22983]: cluster 2026-03-09T16:10:26.875765+0000 mgr.y (mgr.14520) 635 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:28.393 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:27 vm09 bash[22983]: cluster 2026-03-09T16:10:26.875765+0000 mgr.y (mgr.14520) 635 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:28.393 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:27 vm09 bash[22983]: audit 2026-03-09T16:10:26.888933+0000 mgr.y (mgr.14520) 636 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:28.393 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:27 vm09 bash[22983]: audit 2026-03-09T16:10:26.888933+0000 mgr.y (mgr.14520) 636 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:27 vm01 bash[28152]: cluster 2026-03-09T16:10:26.875765+0000 mgr.y (mgr.14520) 635 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:27 vm01 bash[28152]: cluster 2026-03-09T16:10:26.875765+0000 mgr.y (mgr.14520) 635 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:27 vm01 bash[28152]: audit 2026-03-09T16:10:26.888933+0000 mgr.y (mgr.14520) 636 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:28.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:27 vm01 bash[28152]: audit 2026-03-09T16:10:26.888933+0000 mgr.y (mgr.14520) 636 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:27 vm01 bash[20728]: cluster 2026-03-09T16:10:26.875765+0000 mgr.y (mgr.14520) 635 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:27 vm01 bash[20728]: cluster 2026-03-09T16:10:26.875765+0000 mgr.y (mgr.14520) 635 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:27 vm01 bash[20728]: audit 2026-03-09T16:10:26.888933+0000 mgr.y (mgr.14520) 636 
: audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:28.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:27 vm01 bash[20728]: audit 2026-03-09T16:10:26.888933+0000 mgr.y (mgr.14520) 636 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:29 vm09 bash[22983]: cluster 2026-03-09T16:10:28.876564+0000 mgr.y (mgr.14520) 637 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:10:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:29 vm09 bash[22983]: cluster 2026-03-09T16:10:28.876564+0000 mgr.y (mgr.14520) 637 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:10:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:29 vm09 bash[22983]: audit 2026-03-09T16:10:29.541469+0000 mon.a (mon.0) 3713 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:29 vm09 bash[22983]: audit 2026-03-09T16:10:29.541469+0000 mon.a (mon.0) 3713 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:29 vm09 bash[22983]: audit 2026-03-09T16:10:29.542477+0000 mon.a (mon.0) 3714 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:29 vm09 bash[22983]: audit 2026-03-09T16:10:29.542477+0000 mon.a (mon.0) 3714 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:29 vm01 bash[28152]: cluster 2026-03-09T16:10:28.876564+0000 mgr.y (mgr.14520) 637 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:10:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:29 vm01 bash[28152]: cluster 2026-03-09T16:10:28.876564+0000 mgr.y (mgr.14520) 637 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:10:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:29 vm01 bash[28152]: audit 2026-03-09T16:10:29.541469+0000 mon.a (mon.0) 3713 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:29 vm01 bash[28152]: audit 2026-03-09T16:10:29.541469+0000 mon.a (mon.0) 3713 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:29 vm01 bash[28152]: audit 2026-03-09T16:10:29.542477+0000 mon.a (mon.0) 3714 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:29 vm01 bash[28152]: audit 2026-03-09T16:10:29.542477+0000 mon.a (mon.0) 3714 
: audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:29 vm01 bash[20728]: cluster 2026-03-09T16:10:28.876564+0000 mgr.y (mgr.14520) 637 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:10:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:29 vm01 bash[20728]: cluster 2026-03-09T16:10:28.876564+0000 mgr.y (mgr.14520) 637 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T16:10:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:29 vm01 bash[20728]: audit 2026-03-09T16:10:29.541469+0000 mon.a (mon.0) 3713 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:29 vm01 bash[20728]: audit 2026-03-09T16:10:29.541469+0000 mon.a (mon.0) 3713 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:29 vm01 bash[20728]: audit 2026-03-09T16:10:29.542477+0000 mon.a (mon.0) 3714 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:29 vm01 bash[20728]: audit 2026-03-09T16:10:29.542477+0000 mon.a (mon.0) 3714 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:31 vm09 bash[22983]: cluster 2026-03-09T16:10:30.877154+0000 mgr.y (mgr.14520) 638 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:31 vm09 bash[22983]: cluster 2026-03-09T16:10:30.877154+0000 mgr.y (mgr.14520) 638 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:32.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:31 vm01 bash[28152]: cluster 2026-03-09T16:10:30.877154+0000 mgr.y (mgr.14520) 638 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:32.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:31 vm01 bash[28152]: cluster 2026-03-09T16:10:30.877154+0000 mgr.y (mgr.14520) 638 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:31 vm01 bash[20728]: cluster 2026-03-09T16:10:30.877154+0000 mgr.y (mgr.14520) 638 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:32.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:31 vm01 bash[20728]: cluster 2026-03-09T16:10:30.877154+0000 mgr.y (mgr.14520) 638 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s 
rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:33.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:10:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:10:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:10:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:33 vm09 bash[22983]: cluster 2026-03-09T16:10:32.877468+0000 mgr.y (mgr.14520) 639 : cluster [DBG] pgmap v1131: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:33 vm09 bash[22983]: cluster 2026-03-09T16:10:32.877468+0000 mgr.y (mgr.14520) 639 : cluster [DBG] pgmap v1131: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:33 vm09 bash[22983]: audit 2026-03-09T16:10:33.115010+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:33 vm09 bash[22983]: audit 2026-03-09T16:10:33.115010+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:33 vm09 bash[22983]: audit 2026-03-09T16:10:33.115231+0000 mon.a (mon.0) 3715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:34.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:33 vm09 bash[22983]: audit 2026-03-09T16:10:33.115231+0000 mon.a (mon.0) 3715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:33 vm01 bash[28152]: cluster 2026-03-09T16:10:32.877468+0000 mgr.y (mgr.14520) 639 : cluster [DBG] pgmap v1131: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:33 vm01 bash[28152]: cluster 2026-03-09T16:10:32.877468+0000 mgr.y (mgr.14520) 639 : cluster [DBG] pgmap v1131: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:33 vm01 bash[28152]: audit 2026-03-09T16:10:33.115010+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:33 vm01 bash[28152]: audit 2026-03-09T16:10:33.115010+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:33 vm01 bash[28152]: audit 2026-03-09T16:10:33.115231+0000 mon.a (mon.0) 3715 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:33 vm01 bash[28152]: audit 2026-03-09T16:10:33.115231+0000 mon.a (mon.0) 3715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:33 vm01 bash[20728]: cluster 2026-03-09T16:10:32.877468+0000 mgr.y (mgr.14520) 639 : cluster [DBG] pgmap v1131: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:33 vm01 bash[20728]: cluster 2026-03-09T16:10:32.877468+0000 mgr.y (mgr.14520) 639 : cluster [DBG] pgmap v1131: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:33 vm01 bash[20728]: audit 2026-03-09T16:10:33.115010+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:33 vm01 bash[20728]: audit 2026-03-09T16:10:33.115010+0000 mon.c (mon.2) 658 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:33 vm01 bash[20728]: audit 2026-03-09T16:10:33.115231+0000 mon.a (mon.0) 3715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:33 vm01 bash[20728]: audit 2026-03-09T16:10:33.115231+0000 mon.a (mon.0) 3715 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:35.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:34 vm09 bash[22983]: audit 2026-03-09T16:10:33.959906+0000 mon.a (mon.0) 3716 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:34 vm09 bash[22983]: audit 2026-03-09T16:10:33.959906+0000 mon.a (mon.0) 3716 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:34 vm09 bash[22983]: cluster 2026-03-09T16:10:33.968930+0000 mon.a (mon.0) 3717 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T16:10:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:34 vm09 bash[22983]: cluster 2026-03-09T16:10:33.968930+0000 mon.a (mon.0) 3717 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T16:10:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:34 vm09 bash[22983]: audit 2026-03-09T16:10:33.978570+0000 mon.c (mon.2) 659 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:34 vm09 bash[22983]: audit 2026-03-09T16:10:33.978570+0000 mon.c (mon.2) 659 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:34 vm09 bash[22983]: audit 2026-03-09T16:10:33.979142+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:35.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:34 vm09 bash[22983]: audit 2026-03-09T16:10:33.979142+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:34 vm01 bash[28152]: audit 2026-03-09T16:10:33.959906+0000 mon.a (mon.0) 3716 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:34 vm01 bash[28152]: audit 2026-03-09T16:10:33.959906+0000 mon.a (mon.0) 3716 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:34 vm01 bash[28152]: cluster 2026-03-09T16:10:33.968930+0000 mon.a (mon.0) 3717 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:34 vm01 bash[28152]: cluster 2026-03-09T16:10:33.968930+0000 mon.a (mon.0) 3717 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:34 vm01 bash[28152]: audit 2026-03-09T16:10:33.978570+0000 mon.c (mon.2) 659 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:34 vm01 bash[28152]: audit 2026-03-09T16:10:33.978570+0000 mon.c (mon.2) 659 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:34 vm01 bash[28152]: audit 2026-03-09T16:10:33.979142+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:34 vm01 bash[28152]: audit 2026-03-09T16:10:33.979142+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:34 vm01 bash[20728]: audit 2026-03-09T16:10:33.959906+0000 mon.a (mon.0) 3716 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:34 vm01 bash[20728]: audit 2026-03-09T16:10:33.959906+0000 mon.a (mon.0) 3716 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:34 vm01 bash[20728]: cluster 2026-03-09T16:10:33.968930+0000 mon.a (mon.0) 3717 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:34 vm01 bash[20728]: cluster 2026-03-09T16:10:33.968930+0000 mon.a (mon.0) 3717 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:34 vm01 bash[20728]: audit 2026-03-09T16:10:33.978570+0000 mon.c (mon.2) 659 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:34 vm01 bash[20728]: audit 2026-03-09T16:10:33.978570+0000 mon.c (mon.2) 659 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:34 vm01 bash[20728]: audit 2026-03-09T16:10:33.979142+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:35.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:34 vm01 bash[20728]: audit 2026-03-09T16:10:33.979142+0000 mon.a (mon.0) 3718 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: cluster 2026-03-09T16:10:34.878022+0000 mgr.y (mgr.14520) 640 : cluster [DBG] pgmap v1133: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: cluster 2026-03-09T16:10:34.878022+0000 mgr.y (mgr.14520) 640 : cluster [DBG] pgmap v1133: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: audit 2026-03-09T16:10:34.972828+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: audit 2026-03-09T16:10:34.972828+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: cluster 2026-03-09T16:10:34.994731+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: cluster 2026-03-09T16:10:34.994731+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: audit 2026-03-09T16:10:35.030369+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: audit 2026-03-09T16:10:35.030369+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: audit 2026-03-09T16:10:35.030682+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: audit 2026-03-09T16:10:35.030682+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: audit 2026-03-09T16:10:35.031084+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: audit 2026-03-09T16:10:35.031084+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: audit 2026-03-09T16:10:35.031320+0000 mon.a (mon.0) 3722 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:35 vm09 bash[22983]: audit 2026-03-09T16:10:35.031320+0000 mon.a (mon.0) 3722 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:35 vm01 bash[28152]: cluster 2026-03-09T16:10:34.878022+0000 mgr.y (mgr.14520) 640 : cluster [DBG] pgmap v1133: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:35 vm01 bash[28152]: cluster 2026-03-09T16:10:34.878022+0000 mgr.y (mgr.14520) 640 : cluster [DBG] pgmap v1133: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: audit 2026-03-09T16:10:34.972828+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: audit 2026-03-09T16:10:34.972828+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: cluster 2026-03-09T16:10:34.994731+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: cluster 2026-03-09T16:10:34.994731+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: audit 2026-03-09T16:10:35.030369+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: audit 2026-03-09T16:10:35.030369+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: audit 2026-03-09T16:10:35.030682+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: audit 2026-03-09T16:10:35.030682+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: audit 2026-03-09T16:10:35.031084+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: audit 2026-03-09T16:10:35.031084+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: audit 2026-03-09T16:10:35.031320+0000 mon.a (mon.0) 3722 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:36 vm01 bash[28152]: audit 2026-03-09T16:10:35.031320+0000 mon.a (mon.0) 3722 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:35 vm01 bash[20728]: cluster 2026-03-09T16:10:34.878022+0000 mgr.y (mgr.14520) 640 : cluster [DBG] pgmap v1133: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:35 vm01 bash[20728]: cluster 2026-03-09T16:10:34.878022+0000 mgr.y (mgr.14520) 640 : cluster [DBG] pgmap v1133: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:35 vm01 bash[20728]: audit 2026-03-09T16:10:34.972828+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:35 vm01 bash[20728]: audit 2026-03-09T16:10:34.972828+0000 mon.a (mon.0) 3719 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]': finished 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:36 vm01 bash[20728]: cluster 2026-03-09T16:10:34.994731+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:36 vm01 bash[20728]: cluster 2026-03-09T16:10:34.994731+0000 mon.a (mon.0) 3720 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:36 vm01 bash[20728]: audit 2026-03-09T16:10:35.030369+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:36 vm01 bash[20728]: audit 2026-03-09T16:10:35.030369+0000 mon.c (mon.2) 660 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:36 vm01 bash[20728]: audit 2026-03-09T16:10:35.030682+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:36 vm01 bash[20728]: audit 2026-03-09T16:10:35.030682+0000 mon.a (mon.0) 3721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:36 vm01 bash[20728]: audit 2026-03-09T16:10:35.031084+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:36 vm01 bash[20728]: audit 2026-03-09T16:10:35.031084+0000 mon.c (mon.2) 661 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:36 vm01 bash[20728]: audit 2026-03-09T16:10:35.031320+0000 mon.a (mon.0) 3722 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:36.427 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:36 vm01 bash[20728]: audit 2026-03-09T16:10:35.031320+0000 mon.a (mon.0) 3722 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-146"}]: dispatch 2026-03-09T16:10:37.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:10:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:10:37.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:37 vm09 bash[22983]: cluster 2026-03-09T16:10:35.983538+0000 mon.a (mon.0) 3723 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T16:10:37.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:37 vm09 bash[22983]: cluster 2026-03-09T16:10:35.983538+0000 mon.a (mon.0) 3723 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T16:10:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:37 vm01 bash[28152]: cluster 2026-03-09T16:10:35.983538+0000 mon.a (mon.0) 3723 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T16:10:37.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:37 vm01 bash[28152]: cluster 2026-03-09T16:10:35.983538+0000 mon.a (mon.0) 3723 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T16:10:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:37 vm01 bash[20728]: cluster 2026-03-09T16:10:35.983538+0000 mon.a (mon.0) 3723 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T16:10:37.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:37 vm01 bash[20728]: cluster 2026-03-09T16:10:35.983538+0000 mon.a (mon.0) 3723 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-09T16:10:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: cluster 2026-03-09T16:10:36.878307+0000 mgr.y (mgr.14520) 641 : cluster [DBG] pgmap v1136: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T16:10:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: cluster 2026-03-09T16:10:36.878307+0000 mgr.y (mgr.14520) 641 : cluster [DBG] pgmap v1136: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T16:10:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: audit 2026-03-09T16:10:36.893069+0000 mgr.y (mgr.14520) 642 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: audit 2026-03-09T16:10:36.893069+0000 mgr.y (mgr.14520) 642 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: cluster 2026-03-09T16:10:36.996133+0000 mon.a (mon.0) 3724 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:10:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: cluster 2026-03-09T16:10:36.996133+0000 mon.a (mon.0) 3724 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:10:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: cluster 2026-03-09T16:10:37.027248+0000 mon.a (mon.0) 3725 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-09T16:10:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: cluster 2026-03-09T16:10:37.027248+0000 mon.a 
(mon.0) 3725 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-09T16:10:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: audit 2026-03-09T16:10:37.041843+0000 mon.c (mon.2) 662 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: audit 2026-03-09T16:10:37.041843+0000 mon.c (mon.2) 662 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: audit 2026-03-09T16:10:37.042395+0000 mon.a (mon.0) 3726 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:38.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:38 vm09 bash[22983]: audit 2026-03-09T16:10:37.042395+0000 mon.a (mon.0) 3726 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: cluster 2026-03-09T16:10:36.878307+0000 mgr.y (mgr.14520) 641 : cluster [DBG] pgmap v1136: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: cluster 2026-03-09T16:10:36.878307+0000 mgr.y (mgr.14520) 641 : cluster [DBG] pgmap v1136: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: audit 2026-03-09T16:10:36.893069+0000 mgr.y (mgr.14520) 642 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: audit 2026-03-09T16:10:36.893069+0000 mgr.y (mgr.14520) 642 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: cluster 2026-03-09T16:10:36.996133+0000 mon.a (mon.0) 3724 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: cluster 2026-03-09T16:10:36.996133+0000 mon.a (mon.0) 3724 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: cluster 2026-03-09T16:10:37.027248+0000 mon.a (mon.0) 3725 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: cluster 2026-03-09T16:10:37.027248+0000 mon.a (mon.0) 3725 : cluster [DBG] 
osdmap e721: 8 total, 8 up, 8 in 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: audit 2026-03-09T16:10:37.041843+0000 mon.c (mon.2) 662 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: audit 2026-03-09T16:10:37.041843+0000 mon.c (mon.2) 662 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: audit 2026-03-09T16:10:37.042395+0000 mon.a (mon.0) 3726 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:38 vm01 bash[28152]: audit 2026-03-09T16:10:37.042395+0000 mon.a (mon.0) 3726 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: cluster 2026-03-09T16:10:36.878307+0000 mgr.y (mgr.14520) 641 : cluster [DBG] pgmap v1136: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: cluster 2026-03-09T16:10:36.878307+0000 mgr.y (mgr.14520) 641 : cluster [DBG] pgmap v1136: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: audit 2026-03-09T16:10:36.893069+0000 mgr.y (mgr.14520) 642 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: audit 2026-03-09T16:10:36.893069+0000 mgr.y (mgr.14520) 642 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: cluster 2026-03-09T16:10:36.996133+0000 mon.a (mon.0) 3724 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: cluster 2026-03-09T16:10:36.996133+0000 mon.a (mon.0) 3724 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: cluster 2026-03-09T16:10:37.027248+0000 mon.a (mon.0) 3725 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: cluster 2026-03-09T16:10:37.027248+0000 mon.a (mon.0) 3725 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 
in 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: audit 2026-03-09T16:10:37.041843+0000 mon.c (mon.2) 662 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: audit 2026-03-09T16:10:37.041843+0000 mon.c (mon.2) 662 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: audit 2026-03-09T16:10:37.042395+0000 mon.a (mon.0) 3726 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:38 vm01 bash[20728]: audit 2026-03-09T16:10:37.042395+0000 mon.a (mon.0) 3726 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:39 vm09 bash[22983]: audit 2026-03-09T16:10:38.089181+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:39 vm09 bash[22983]: audit 2026-03-09T16:10:38.089181+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:39 vm09 bash[22983]: cluster 2026-03-09T16:10:38.104391+0000 mon.a (mon.0) 3728 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T16:10:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:39 vm09 bash[22983]: cluster 2026-03-09T16:10:38.104391+0000 mon.a (mon.0) 3728 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T16:10:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:39 vm09 bash[22983]: audit 2026-03-09T16:10:38.150860+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:39 vm09 bash[22983]: audit 2026-03-09T16:10:38.150860+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:39 vm09 bash[22983]: audit 2026-03-09T16:10:38.151152+0000 mon.a (mon.0) 3729 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:39.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:39 vm09 bash[22983]: audit 2026-03-09T16:10:38.151152+0000 mon.a (mon.0) 3729 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:39 vm01 bash[28152]: audit 2026-03-09T16:10:38.089181+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:39 vm01 bash[28152]: audit 2026-03-09T16:10:38.089181+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:39 vm01 bash[28152]: cluster 2026-03-09T16:10:38.104391+0000 mon.a (mon.0) 3728 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:39 vm01 bash[28152]: cluster 2026-03-09T16:10:38.104391+0000 mon.a (mon.0) 3728 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:39 vm01 bash[28152]: audit 2026-03-09T16:10:38.150860+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:39 vm01 bash[28152]: audit 2026-03-09T16:10:38.150860+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:39 vm01 bash[28152]: audit 2026-03-09T16:10:38.151152+0000 mon.a (mon.0) 3729 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:39 vm01 bash[28152]: audit 2026-03-09T16:10:38.151152+0000 mon.a (mon.0) 3729 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:39 vm01 bash[20728]: audit 2026-03-09T16:10:38.089181+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:39 vm01 bash[20728]: audit 2026-03-09T16:10:38.089181+0000 mon.a (mon.0) 3727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:39 vm01 bash[20728]: cluster 2026-03-09T16:10:38.104391+0000 mon.a (mon.0) 3728 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:39 vm01 bash[20728]: cluster 2026-03-09T16:10:38.104391+0000 mon.a (mon.0) 3728 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:39 vm01 bash[20728]: audit 2026-03-09T16:10:38.150860+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:39 vm01 bash[20728]: audit 2026-03-09T16:10:38.150860+0000 mon.c (mon.2) 663 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:39 vm01 bash[20728]: audit 2026-03-09T16:10:38.151152+0000 mon.a (mon.0) 3729 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:39.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:39 vm01 bash[20728]: audit 2026-03-09T16:10:38.151152+0000 mon.a (mon.0) 3729 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T16:10:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:40 vm09 bash[22983]: cluster 2026-03-09T16:10:38.879072+0000 mgr.y (mgr.14520) 643 : cluster [DBG] pgmap v1139: 268 pgs: 17 unknown, 251 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:40 vm09 bash[22983]: cluster 2026-03-09T16:10:38.879072+0000 mgr.y (mgr.14520) 643 : cluster [DBG] pgmap v1139: 268 pgs: 17 unknown, 251 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:40 vm09 bash[22983]: audit 2026-03-09T16:10:39.098313+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:40 vm09 bash[22983]: audit 2026-03-09T16:10:39.098313+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:40 vm09 bash[22983]: cluster 2026-03-09T16:10:39.109943+0000 mon.a (mon.0) 3731 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T16:10:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:40 vm09 bash[22983]: cluster 2026-03-09T16:10:39.109943+0000 mon.a (mon.0) 3731 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T16:10:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:40 vm09 bash[22983]: audit 2026-03-09T16:10:39.121412+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:40 vm09 bash[22983]: audit 2026-03-09T16:10:39.121412+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:40 vm09 bash[22983]: audit 2026-03-09T16:10:39.121951+0000 mon.a (mon.0) 3732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:40 vm09 bash[22983]: audit 2026-03-09T16:10:39.121951+0000 mon.a (mon.0) 3732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:40 vm01 bash[28152]: cluster 2026-03-09T16:10:38.879072+0000 mgr.y (mgr.14520) 643 : cluster [DBG] pgmap v1139: 268 pgs: 17 unknown, 251 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:40 vm01 bash[28152]: cluster 2026-03-09T16:10:38.879072+0000 mgr.y (mgr.14520) 643 : cluster [DBG] pgmap v1139: 268 pgs: 17 unknown, 251 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:40 vm01 bash[28152]: audit 2026-03-09T16:10:39.098313+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:40 vm01 bash[28152]: audit 2026-03-09T16:10:39.098313+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:40 vm01 bash[28152]: cluster 2026-03-09T16:10:39.109943+0000 mon.a (mon.0) 3731 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:40 vm01 bash[28152]: cluster 2026-03-09T16:10:39.109943+0000 mon.a (mon.0) 3731 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:40 vm01 bash[28152]: audit 2026-03-09T16:10:39.121412+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:40 vm01 bash[28152]: audit 2026-03-09T16:10:39.121412+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:40 vm01 bash[28152]: audit 2026-03-09T16:10:39.121951+0000 mon.a (mon.0) 3732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:40 vm01 bash[28152]: audit 2026-03-09T16:10:39.121951+0000 mon.a (mon.0) 3732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:40 vm01 bash[20728]: cluster 2026-03-09T16:10:38.879072+0000 mgr.y (mgr.14520) 643 : cluster [DBG] pgmap v1139: 268 pgs: 17 unknown, 251 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:40 vm01 bash[20728]: cluster 2026-03-09T16:10:38.879072+0000 mgr.y (mgr.14520) 643 : cluster [DBG] pgmap v1139: 268 pgs: 17 unknown, 251 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:40 vm01 bash[20728]: audit 2026-03-09T16:10:39.098313+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:40 vm01 bash[20728]: audit 2026-03-09T16:10:39.098313+0000 mon.a (mon.0) 3730 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:40 vm01 bash[20728]: cluster 2026-03-09T16:10:39.109943+0000 mon.a (mon.0) 3731 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:40 vm01 bash[20728]: cluster 2026-03-09T16:10:39.109943+0000 mon.a (mon.0) 3731 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:40 vm01 bash[20728]: audit 2026-03-09T16:10:39.121412+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:40 vm01 bash[20728]: audit 2026-03-09T16:10:39.121412+0000 mon.c (mon.2) 664 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:40 vm01 bash[20728]: audit 2026-03-09T16:10:39.121951+0000 mon.a (mon.0) 3732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:40.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:40 vm01 bash[20728]: audit 2026-03-09T16:10:39.121951+0000 mon.a (mon.0) 3732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: audit 2026-03-09T16:10:40.107713+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]': finished 2026-03-09T16:10:41.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: audit 2026-03-09T16:10:40.107713+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]': finished 2026-03-09T16:10:41.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: cluster 2026-03-09T16:10:40.119954+0000 mon.a (mon.0) 3734 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T16:10:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: cluster 2026-03-09T16:10:40.119954+0000 mon.a (mon.0) 3734 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T16:10:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: audit 2026-03-09T16:10:40.160163+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: audit 2026-03-09T16:10:40.160163+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: audit 2026-03-09T16:10:40.160632+0000 mon.a (mon.0) 3735 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: audit 2026-03-09T16:10:40.160632+0000 mon.a (mon.0) 3735 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: audit 2026-03-09T16:10:40.161236+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: audit 2026-03-09T16:10:40.161236+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: audit 2026-03-09T16:10:40.161582+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:41 vm09 bash[22983]: audit 2026-03-09T16:10:40.161582+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: audit 2026-03-09T16:10:40.107713+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]': finished 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: audit 2026-03-09T16:10:40.107713+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]': finished 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: cluster 2026-03-09T16:10:40.119954+0000 mon.a (mon.0) 3734 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: cluster 2026-03-09T16:10:40.119954+0000 mon.a (mon.0) 3734 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: audit 2026-03-09T16:10:40.160163+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: audit 2026-03-09T16:10:40.160163+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: audit 2026-03-09T16:10:40.160632+0000 mon.a (mon.0) 3735 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: audit 2026-03-09T16:10:40.160632+0000 mon.a (mon.0) 3735 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: audit 2026-03-09T16:10:40.161236+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: audit 2026-03-09T16:10:40.161236+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: audit 2026-03-09T16:10:40.161582+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:41 vm01 bash[28152]: audit 2026-03-09T16:10:40.161582+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: audit 2026-03-09T16:10:40.107713+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]': finished 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: audit 2026-03-09T16:10:40.107713+0000 mon.a (mon.0) 3733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]': finished 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: cluster 2026-03-09T16:10:40.119954+0000 mon.a (mon.0) 3734 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: cluster 2026-03-09T16:10:40.119954+0000 mon.a (mon.0) 3734 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: audit 2026-03-09T16:10:40.160163+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: audit 2026-03-09T16:10:40.160163+0000 mon.c (mon.2) 665 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: audit 2026-03-09T16:10:40.160632+0000 mon.a (mon.0) 3735 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: audit 2026-03-09T16:10:40.160632+0000 mon.a (mon.0) 3735 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: audit 2026-03-09T16:10:40.161236+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: audit 2026-03-09T16:10:40.161236+0000 mon.c (mon.2) 666 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: audit 2026-03-09T16:10:40.161582+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:41.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:41 vm01 bash[20728]: audit 2026-03-09T16:10:40.161582+0000 mon.a (mon.0) 3736 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-148"}]: dispatch 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:42 vm01 bash[28152]: cluster 2026-03-09T16:10:40.879429+0000 mgr.y (mgr.14520) 644 : cluster [DBG] pgmap v1142: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:42 vm01 bash[28152]: cluster 2026-03-09T16:10:40.879429+0000 mgr.y (mgr.14520) 644 : cluster [DBG] pgmap v1142: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:42 vm01 bash[28152]: cluster 2026-03-09T16:10:41.126257+0000 mon.a (mon.0) 3737 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:42 vm01 bash[28152]: cluster 2026-03-09T16:10:41.126257+0000 mon.a (mon.0) 3737 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:42 vm01 bash[28152]: cluster 2026-03-09T16:10:41.151180+0000 mon.a (mon.0) 3738 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:42 vm01 bash[28152]: cluster 2026-03-09T16:10:41.151180+0000 mon.a (mon.0) 3738 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:42 vm01 bash[20728]: cluster 2026-03-09T16:10:40.879429+0000 mgr.y (mgr.14520) 644 : cluster [DBG] pgmap v1142: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:42 vm01 bash[20728]: cluster 2026-03-09T16:10:40.879429+0000 mgr.y (mgr.14520) 644 : cluster [DBG] pgmap v1142: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:42 vm01 bash[20728]: cluster 2026-03-09T16:10:41.126257+0000 mon.a (mon.0) 3737 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:42 vm01 bash[20728]: cluster 2026-03-09T16:10:41.126257+0000 mon.a (mon.0) 3737 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:42 vm01 bash[20728]: cluster 2026-03-09T16:10:41.151180+0000 mon.a (mon.0) 3738 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T16:10:42.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:42 vm01 bash[20728]: cluster 2026-03-09T16:10:41.151180+0000 mon.a (mon.0) 3738 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T16:10:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:42 vm09 bash[22983]: cluster 2026-03-09T16:10:40.879429+0000 mgr.y (mgr.14520) 644 : cluster [DBG] pgmap v1142: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:42.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:42 vm09 bash[22983]: cluster 2026-03-09T16:10:40.879429+0000 mgr.y (mgr.14520) 644 : cluster [DBG] pgmap v1142: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T16:10:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:42 vm09 bash[22983]: cluster 2026-03-09T16:10:41.126257+0000 mon.a (mon.0) 3737 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:42 vm09 bash[22983]: cluster 2026-03-09T16:10:41.126257+0000 mon.a (mon.0) 3737 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:42 vm09 bash[22983]: cluster 2026-03-09T16:10:41.151180+0000 mon.a (mon.0) 3738 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T16:10:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:42 vm09 bash[22983]: cluster 2026-03-09T16:10:41.151180+0000 mon.a (mon.0) 3738 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-09T16:10:43.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:10:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:10:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:10:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:43 vm09 bash[22983]: cluster 2026-03-09T16:10:42.161171+0000 mon.a (mon.0) 3739 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-09T16:10:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:43 vm09 bash[22983]: cluster 2026-03-09T16:10:42.161171+0000 mon.a (mon.0) 3739 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-09T16:10:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:43 vm09 bash[22983]: audit 2026-03-09T16:10:42.177963+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:43 vm09 bash[22983]: audit 2026-03-09T16:10:42.177963+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:43 vm09 bash[22983]: audit 2026-03-09T16:10:42.180447+0000 mon.a (mon.0) 3740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:43 vm09 bash[22983]: audit 2026-03-09T16:10:42.180447+0000 mon.a (mon.0) 3740 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:43.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:43 vm01 bash[28152]: cluster 2026-03-09T16:10:42.161171+0000 mon.a (mon.0) 3739 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-09T16:10:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:43 vm01 bash[28152]: cluster 2026-03-09T16:10:42.161171+0000 mon.a (mon.0) 3739 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-09T16:10:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:43 vm01 bash[28152]: audit 2026-03-09T16:10:42.177963+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:43 vm01 bash[28152]: audit 2026-03-09T16:10:42.177963+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:43 vm01 bash[28152]: audit 2026-03-09T16:10:42.180447+0000 mon.a (mon.0) 3740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:43.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:43 vm01 bash[28152]: audit 2026-03-09T16:10:42.180447+0000 mon.a (mon.0) 3740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:43 vm01 bash[20728]: cluster 2026-03-09T16:10:42.161171+0000 mon.a (mon.0) 3739 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-09T16:10:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:43 vm01 bash[20728]: cluster 2026-03-09T16:10:42.161171+0000 mon.a (mon.0) 3739 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-09T16:10:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:43 vm01 bash[20728]: audit 2026-03-09T16:10:42.177963+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:43 vm01 bash[20728]: audit 2026-03-09T16:10:42.177963+0000 mon.c (mon.2) 667 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:43 vm01 bash[20728]: audit 2026-03-09T16:10:42.180447+0000 mon.a (mon.0) 3740 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:43.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:43 vm01 bash[20728]: audit 2026-03-09T16:10:42.180447+0000 mon.a (mon.0) 3740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: cluster 2026-03-09T16:10:42.879798+0000 mgr.y (mgr.14520) 645 : cluster [DBG] pgmap v1145: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:10:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: cluster 2026-03-09T16:10:42.879798+0000 mgr.y (mgr.14520) 645 : cluster [DBG] pgmap v1145: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:10:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.188925+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.188925+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: cluster 2026-03-09T16:10:43.194797+0000 mon.a (mon.0) 3742 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T16:10:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: cluster 2026-03-09T16:10:43.194797+0000 mon.a (mon.0) 3742 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T16:10:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.250407+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.250407+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.250652+0000 mon.a (mon.0) 3743 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.250652+0000 mon.a (mon.0) 3743 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.251538+0000 mon.c (mon.2) 669 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.251538+0000 mon.c (mon.2) 669 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.251736+0000 mon.a (mon.0) 3744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.251736+0000 mon.a (mon.0) 3744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.592936+0000 mon.a (mon.0) 3745 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.592936+0000 mon.a (mon.0) 3745 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.916129+0000 mon.a (mon.0) 3746 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.916129+0000 mon.a (mon.0) 3746 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.916651+0000 mon.a (mon.0) 3747 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.916651+0000 mon.a (mon.0) 3747 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:10:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.922315+0000 mon.a (mon.0) 3748 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:44.633 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:44 vm09 bash[22983]: audit 2026-03-09T16:10:43.922315+0000 mon.a (mon.0) 3748 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: cluster 2026-03-09T16:10:42.879798+0000 mgr.y (mgr.14520) 645 : cluster [DBG] pgmap v1145: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: cluster 2026-03-09T16:10:42.879798+0000 mgr.y (mgr.14520) 645 : cluster [DBG] pgmap v1145: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.188925+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.188925+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: cluster 2026-03-09T16:10:43.194797+0000 mon.a (mon.0) 3742 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: cluster 2026-03-09T16:10:43.194797+0000 mon.a (mon.0) 3742 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.250407+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.250407+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.250652+0000 mon.a (mon.0) 3743 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.250652+0000 mon.a (mon.0) 3743 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.251538+0000 mon.c (mon.2) 669 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.251538+0000 mon.c (mon.2) 669 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.251736+0000 mon.a (mon.0) 3744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.251736+0000 mon.a (mon.0) 3744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.592936+0000 mon.a (mon.0) 3745 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.592936+0000 mon.a (mon.0) 3745 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.916129+0000 mon.a (mon.0) 3746 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.916129+0000 mon.a (mon.0) 3746 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.916651+0000 mon.a (mon.0) 3747 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.916651+0000 mon.a (mon.0) 3747 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.922315+0000 mon.a (mon.0) 3748 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:44 vm01 bash[28152]: audit 2026-03-09T16:10:43.922315+0000 mon.a (mon.0) 3748 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: cluster 
2026-03-09T16:10:42.879798+0000 mgr.y (mgr.14520) 645 : cluster [DBG] pgmap v1145: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: cluster 2026-03-09T16:10:42.879798+0000 mgr.y (mgr.14520) 645 : cluster [DBG] pgmap v1145: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.188925+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.188925+0000 mon.a (mon.0) 3741 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: cluster 2026-03-09T16:10:43.194797+0000 mon.a (mon.0) 3742 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: cluster 2026-03-09T16:10:43.194797+0000 mon.a (mon.0) 3742 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.250407+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.250407+0000 mon.c (mon.2) 668 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.250652+0000 mon.a (mon.0) 3743 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.250652+0000 mon.a (mon.0) 3743 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.251538+0000 mon.c (mon.2) 669 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.251538+0000 mon.c (mon.2) 669 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.251736+0000 mon.a (mon.0) 3744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.251736+0000 mon.a (mon.0) 3744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-150"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.592936+0000 mon.a (mon.0) 3745 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.592936+0000 mon.a (mon.0) 3745 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.916129+0000 mon.a (mon.0) 3746 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.916129+0000 mon.a (mon.0) 3746 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.916651+0000 mon.a (mon.0) 3747 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:10:44.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.916651+0000 mon.a (mon.0) 3747 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:10:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.922315+0000 mon.a (mon.0) 3748 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:44.677 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:44 vm01 bash[20728]: audit 2026-03-09T16:10:43.922315+0000 mon.a (mon.0) 3748 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:45 vm09 bash[22983]: cluster 2026-03-09T16:10:44.206144+0000 mon.a (mon.0) 3749 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T16:10:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:45 vm09 bash[22983]: cluster 2026-03-09T16:10:44.206144+0000 mon.a (mon.0) 3749 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T16:10:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:45 
vm09 bash[22983]: audit 2026-03-09T16:10:44.555665+0000 mon.a (mon.0) 3750 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:45 vm09 bash[22983]: audit 2026-03-09T16:10:44.555665+0000 mon.a (mon.0) 3750 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:45 vm09 bash[22983]: audit 2026-03-09T16:10:44.558045+0000 mon.a (mon.0) 3751 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:45 vm09 bash[22983]: audit 2026-03-09T16:10:44.558045+0000 mon.a (mon.0) 3751 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:45 vm09 bash[22983]: cluster 2026-03-09T16:10:44.880294+0000 mgr.y (mgr.14520) 646 : cluster [DBG] pgmap v1148: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:10:45.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:45 vm09 bash[22983]: cluster 2026-03-09T16:10:44.880294+0000 mgr.y (mgr.14520) 646 : cluster [DBG] pgmap v1148: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:45 vm01 bash[28152]: cluster 2026-03-09T16:10:44.206144+0000 mon.a (mon.0) 3749 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:45 vm01 bash[28152]: cluster 2026-03-09T16:10:44.206144+0000 mon.a (mon.0) 3749 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:45 vm01 bash[28152]: audit 2026-03-09T16:10:44.555665+0000 mon.a (mon.0) 3750 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:45 vm01 bash[28152]: audit 2026-03-09T16:10:44.555665+0000 mon.a (mon.0) 3750 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:45 vm01 bash[28152]: audit 2026-03-09T16:10:44.558045+0000 mon.a (mon.0) 3751 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:45 vm01 bash[28152]: audit 2026-03-09T16:10:44.558045+0000 mon.a (mon.0) 3751 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:45 vm01 bash[28152]: cluster 2026-03-09T16:10:44.880294+0000 mgr.y (mgr.14520) 646 : cluster [DBG] pgmap v1148: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:45 vm01 bash[28152]: cluster 2026-03-09T16:10:44.880294+0000 mgr.y (mgr.14520) 646 : cluster [DBG] pgmap v1148: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:45 vm01 bash[20728]: cluster 2026-03-09T16:10:44.206144+0000 mon.a (mon.0) 3749 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:45 vm01 bash[20728]: cluster 2026-03-09T16:10:44.206144+0000 mon.a (mon.0) 3749 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:45 vm01 bash[20728]: audit 2026-03-09T16:10:44.555665+0000 mon.a (mon.0) 3750 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:45 vm01 bash[20728]: audit 2026-03-09T16:10:44.555665+0000 mon.a (mon.0) 3750 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:45 vm01 bash[20728]: audit 2026-03-09T16:10:44.558045+0000 mon.a (mon.0) 3751 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:45 vm01 bash[20728]: audit 2026-03-09T16:10:44.558045+0000 mon.a (mon.0) 3751 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:45 vm01 bash[20728]: cluster 2026-03-09T16:10:44.880294+0000 mgr.y (mgr.14520) 646 : cluster [DBG] pgmap v1148: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:10:45.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:45 vm01 bash[20728]: cluster 2026-03-09T16:10:44.880294+0000 mgr.y (mgr.14520) 646 : cluster [DBG] pgmap v1148: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:10:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:46 vm09 bash[22983]: cluster 2026-03-09T16:10:45.219888+0000 mon.a (mon.0) 3752 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T16:10:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:46 vm09 bash[22983]: cluster 2026-03-09T16:10:45.219888+0000 mon.a (mon.0) 3752 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T16:10:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:46 vm09 bash[22983]: audit 2026-03-09T16:10:45.222469+0000 mon.c (mon.2) 670 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:46 vm09 bash[22983]: audit 2026-03-09T16:10:45.222469+0000 mon.c (mon.2) 670 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:46 vm09 bash[22983]: audit 2026-03-09T16:10:45.234486+0000 mon.a (mon.0) 3753 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:46 vm09 bash[22983]: audit 2026-03-09T16:10:45.234486+0000 mon.a (mon.0) 3753 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:46.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:46 vm01 bash[28152]: cluster 2026-03-09T16:10:45.219888+0000 mon.a (mon.0) 3752 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T16:10:46.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:46 vm01 bash[28152]: cluster 2026-03-09T16:10:45.219888+0000 mon.a (mon.0) 3752 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T16:10:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:46 vm01 bash[28152]: audit 2026-03-09T16:10:45.222469+0000 mon.c (mon.2) 670 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:46 vm01 bash[28152]: audit 2026-03-09T16:10:45.222469+0000 mon.c (mon.2) 670 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:46 vm01 bash[28152]: audit 2026-03-09T16:10:45.234486+0000 mon.a (mon.0) 3753 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:46.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:46 vm01 bash[28152]: audit 2026-03-09T16:10:45.234486+0000 mon.a (mon.0) 3753 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:46.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:46 vm01 bash[20728]: cluster 2026-03-09T16:10:45.219888+0000 mon.a (mon.0) 3752 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T16:10:46.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:46 vm01 bash[20728]: cluster 2026-03-09T16:10:45.219888+0000 mon.a (mon.0) 3752 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-09T16:10:46.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:46 vm01 bash[20728]: audit 2026-03-09T16:10:45.222469+0000 mon.c (mon.2) 670 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:46.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:46 vm01 bash[20728]: audit 2026-03-09T16:10:45.222469+0000 mon.c (mon.2) 670 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:46.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:46 vm01 bash[20728]: audit 2026-03-09T16:10:45.234486+0000 mon.a (mon.0) 3753 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:46.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:46 vm01 bash[20728]: audit 2026-03-09T16:10:45.234486+0000 mon.a (mon.0) 3753 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:47.259 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:10:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.212565+0000 mon.a (mon.0) 3754 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.212565+0000 mon.a (mon.0) 3754 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: cluster 2026-03-09T16:10:46.230368+0000 mon.a (mon.0) 3755 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: cluster 2026-03-09T16:10:46.230368+0000 mon.a (mon.0) 3755 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.286922+0000 mon.c (mon.2) 671 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.286922+0000 mon.c (mon.2) 671 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.287282+0000 mon.a (mon.0) 3756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.287282+0000 mon.a (mon.0) 3756 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.287872+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.287872+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.288183+0000 mon.a (mon.0) 3757 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.288183+0000 mon.a (mon.0) 3757 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: cluster 2026-03-09T16:10:46.880569+0000 mgr.y (mgr.14520) 647 : cluster [DBG] pgmap v1151: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: cluster 2026-03-09T16:10:46.880569+0000 mgr.y (mgr.14520) 647 : cluster [DBG] pgmap v1151: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.895171+0000 mgr.y (mgr.14520) 648 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: audit 2026-03-09T16:10:46.895171+0000 mgr.y (mgr.14520) 648 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: cluster 2026-03-09T16:10:47.237841+0000 mon.a (mon.0) 3758 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T16:10:47.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:47 vm09 bash[22983]: cluster 2026-03-09T16:10:47.237841+0000 mon.a (mon.0) 3758 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T16:10:47.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.212565+0000 mon.a (mon.0) 3754 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:47.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.212565+0000 mon.a (mon.0) 3754 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:47.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: cluster 2026-03-09T16:10:46.230368+0000 mon.a (mon.0) 3755 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T16:10:47.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: cluster 2026-03-09T16:10:46.230368+0000 mon.a (mon.0) 3755 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.286922+0000 mon.c (mon.2) 671 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.286922+0000 mon.c (mon.2) 671 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.287282+0000 mon.a (mon.0) 3756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.287282+0000 mon.a (mon.0) 3756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.287872+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.287872+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.288183+0000 mon.a (mon.0) 3757 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.288183+0000 mon.a (mon.0) 3757 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: cluster 2026-03-09T16:10:46.880569+0000 mgr.y (mgr.14520) 647 : cluster [DBG] pgmap v1151: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: cluster 2026-03-09T16:10:46.880569+0000 mgr.y (mgr.14520) 647 : cluster [DBG] pgmap v1151: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.895171+0000 mgr.y (mgr.14520) 648 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: audit 2026-03-09T16:10:46.895171+0000 mgr.y (mgr.14520) 648 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: cluster 2026-03-09T16:10:47.237841+0000 mon.a (mon.0) 3758 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:47 vm01 bash[28152]: cluster 2026-03-09T16:10:47.237841+0000 mon.a (mon.0) 3758 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.212565+0000 mon.a (mon.0) 3754 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.212565+0000 mon.a (mon.0) 3754 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: cluster 2026-03-09T16:10:46.230368+0000 mon.a (mon.0) 3755 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: cluster 2026-03-09T16:10:46.230368+0000 mon.a (mon.0) 3755 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.286922+0000 mon.c (mon.2) 671 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.286922+0000 mon.c (mon.2) 671 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.287282+0000 mon.a (mon.0) 3756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.287282+0000 mon.a (mon.0) 3756 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.287872+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.287872+0000 mon.c (mon.2) 672 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.288183+0000 mon.a (mon.0) 3757 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.288183+0000 mon.a (mon.0) 3757 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-152"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: cluster 2026-03-09T16:10:46.880569+0000 mgr.y (mgr.14520) 647 : cluster [DBG] pgmap v1151: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: cluster 2026-03-09T16:10:46.880569+0000 mgr.y (mgr.14520) 647 : cluster [DBG] pgmap v1151: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.895171+0000 mgr.y (mgr.14520) 648 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: audit 2026-03-09T16:10:46.895171+0000 mgr.y (mgr.14520) 648 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: cluster 2026-03-09T16:10:47.237841+0000 mon.a (mon.0) 3758 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T16:10:47.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:47 vm01 bash[20728]: cluster 2026-03-09T16:10:47.237841+0000 mon.a (mon.0) 3758 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in 2026-03-09T16:10:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:49 vm09 bash[22983]: cluster 2026-03-09T16:10:48.243463+0000 mon.a (mon.0) 3759 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T16:10:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:49 vm09 bash[22983]: cluster 2026-03-09T16:10:48.243463+0000 mon.a (mon.0) 3759 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T16:10:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:49 vm09 bash[22983]: audit 2026-03-09T16:10:48.257531+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:49 vm09 bash[22983]: audit 2026-03-09T16:10:48.257531+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:49 vm09 bash[22983]: audit 2026-03-09T16:10:48.257778+0000 mon.a (mon.0) 3760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:49 vm09 bash[22983]: audit 2026-03-09T16:10:48.257778+0000 mon.a (mon.0) 3760 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:49 vm09 bash[22983]: cluster 2026-03-09T16:10:48.881060+0000 mgr.y (mgr.14520) 649 : cluster [DBG] pgmap v1154: 268 pgs: 13 creating+peering, 19 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:49 vm09 bash[22983]: cluster 2026-03-09T16:10:48.881060+0000 mgr.y (mgr.14520) 649 : cluster [DBG] pgmap v1154: 268 pgs: 13 creating+peering, 19 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:49 vm09 bash[22983]: cluster 2026-03-09T16:10:49.146991+0000 mon.a (mon.0) 3761 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:49 vm09 bash[22983]: cluster 2026-03-09T16:10:49.146991+0000 mon.a (mon.0) 3761 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:49 vm01 bash[28152]: cluster 2026-03-09T16:10:48.243463+0000 mon.a (mon.0) 3759 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:49 vm01 bash[28152]: cluster 2026-03-09T16:10:48.243463+0000 mon.a (mon.0) 3759 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:49 vm01 bash[28152]: audit 2026-03-09T16:10:48.257531+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:49 vm01 bash[28152]: audit 2026-03-09T16:10:48.257531+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:49 vm01 bash[28152]: audit 2026-03-09T16:10:48.257778+0000 mon.a (mon.0) 3760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:49 vm01 bash[28152]: audit 2026-03-09T16:10:48.257778+0000 mon.a (mon.0) 3760 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:49 vm01 bash[28152]: cluster 2026-03-09T16:10:48.881060+0000 mgr.y (mgr.14520) 649 : cluster [DBG] pgmap v1154: 268 pgs: 13 creating+peering, 19 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:49 vm01 bash[28152]: cluster 2026-03-09T16:10:48.881060+0000 mgr.y (mgr.14520) 649 : cluster [DBG] pgmap v1154: 268 pgs: 13 creating+peering, 19 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:49 vm01 bash[28152]: cluster 2026-03-09T16:10:49.146991+0000 mon.a (mon.0) 3761 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:49 vm01 bash[28152]: cluster 2026-03-09T16:10:49.146991+0000 mon.a (mon.0) 3761 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:49 vm01 bash[20728]: cluster 2026-03-09T16:10:48.243463+0000 mon.a (mon.0) 3759 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:49 vm01 bash[20728]: cluster 2026-03-09T16:10:48.243463+0000 mon.a (mon.0) 3759 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:49 vm01 bash[20728]: audit 2026-03-09T16:10:48.257531+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:49 vm01 bash[20728]: audit 2026-03-09T16:10:48.257531+0000 mon.c (mon.2) 673 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:49 vm01 bash[20728]: audit 2026-03-09T16:10:48.257778+0000 mon.a (mon.0) 3760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:49 vm01 bash[20728]: audit 2026-03-09T16:10:48.257778+0000 mon.a (mon.0) 3760 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:49 vm01 bash[20728]: cluster 2026-03-09T16:10:48.881060+0000 mgr.y (mgr.14520) 649 : cluster [DBG] pgmap v1154: 268 pgs: 13 creating+peering, 19 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:49 vm01 bash[20728]: cluster 2026-03-09T16:10:48.881060+0000 mgr.y (mgr.14520) 649 : cluster [DBG] pgmap v1154: 268 pgs: 13 creating+peering, 19 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:49 vm01 bash[20728]: cluster 2026-03-09T16:10:49.146991+0000 mon.a (mon.0) 3761 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:49.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:49 vm01 bash[20728]: cluster 2026-03-09T16:10:49.146991+0000 mon.a (mon.0) 3761 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.247392+0000 mon.a (mon.0) 3762 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.247392+0000 mon.a (mon.0) 3762 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: cluster 2026-03-09T16:10:49.253977+0000 mon.a (mon.0) 3763 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: cluster 2026-03-09T16:10:49.253977+0000 mon.a (mon.0) 3763 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.277527+0000 mon.c (mon.2) 674 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.277527+0000 mon.c (mon.2) 674 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.277809+0000 mon.a (mon.0) 3764 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.277809+0000 mon.a (mon.0) 3764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.300942+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.300942+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.301183+0000 mon.a (mon.0) 3765 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.301183+0000 mon.a (mon.0) 3765 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.302057+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.302057+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.302266+0000 mon.a (mon.0) 3766 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:50 vm09 bash[22983]: audit 2026-03-09T16:10:49.302266+0000 mon.a (mon.0) 3766 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.247392+0000 mon.a (mon.0) 3762 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.247392+0000 mon.a (mon.0) 3762 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: cluster 2026-03-09T16:10:49.253977+0000 mon.a (mon.0) 3763 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: cluster 2026-03-09T16:10:49.253977+0000 mon.a (mon.0) 3763 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.277527+0000 mon.c (mon.2) 674 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.277527+0000 mon.c (mon.2) 674 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.277809+0000 mon.a (mon.0) 3764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.277809+0000 mon.a (mon.0) 3764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.300942+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.300942+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.301183+0000 mon.a (mon.0) 3765 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.301183+0000 mon.a (mon.0) 3765 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.302057+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.302057+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.302266+0000 mon.a (mon.0) 3766 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:50 vm01 bash[28152]: audit 2026-03-09T16:10:49.302266+0000 mon.a (mon.0) 3766 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.247392+0000 mon.a (mon.0) 3762 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.247392+0000 mon.a (mon.0) 3762 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm01-59821-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: cluster 2026-03-09T16:10:49.253977+0000 mon.a (mon.0) 3763 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: cluster 2026-03-09T16:10:49.253977+0000 mon.a (mon.0) 3763 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.277527+0000 mon.c (mon.2) 674 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.277527+0000 mon.c (mon.2) 674 : audit [INF] from='client.? 
192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.277809+0000 mon.a (mon.0) 3764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.277809+0000 mon.a (mon.0) 3764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm01-59821-111","var": "dedup_tier","val": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.300942+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.300942+0000 mon.c (mon.2) 675 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.301183+0000 mon.a (mon.0) 3765 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.301183+0000 mon.a (mon.0) 3765 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.302057+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.302057+0000 mon.c (mon.2) 676 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.302266+0000 mon.a (mon.0) 3766 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:50.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:50 vm01 bash[20728]: audit 2026-03-09T16:10:49.302266+0000 mon.a (mon.0) 3766 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm01-59821-111", "tierpool": "test-rados-api-vm01-59821-154"}]: dispatch 2026-03-09T16:10:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:51 vm09 bash[22983]: cluster 2026-03-09T16:10:50.260640+0000 mon.a (mon.0) 3767 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T16:10:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:51 vm09 bash[22983]: cluster 2026-03-09T16:10:50.260640+0000 mon.a (mon.0) 3767 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T16:10:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:51 vm09 bash[22983]: cluster 2026-03-09T16:10:50.881425+0000 mgr.y (mgr.14520) 650 : cluster [DBG] pgmap v1157: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T16:10:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:51 vm09 bash[22983]: cluster 2026-03-09T16:10:50.881425+0000 mgr.y (mgr.14520) 650 : cluster [DBG] pgmap v1157: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T16:10:51.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:51 vm01 bash[28152]: cluster 2026-03-09T16:10:50.260640+0000 mon.a (mon.0) 3767 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T16:10:51.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:51 vm01 bash[28152]: cluster 2026-03-09T16:10:50.260640+0000 mon.a (mon.0) 3767 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T16:10:51.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:51 vm01 bash[28152]: cluster 2026-03-09T16:10:50.881425+0000 mgr.y (mgr.14520) 650 : cluster [DBG] pgmap v1157: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T16:10:51.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:51 vm01 bash[28152]: cluster 2026-03-09T16:10:50.881425+0000 mgr.y (mgr.14520) 650 : cluster [DBG] pgmap v1157: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T16:10:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:51 vm01 bash[20728]: cluster 2026-03-09T16:10:50.260640+0000 mon.a (mon.0) 3767 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T16:10:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:51 vm01 bash[20728]: cluster 2026-03-09T16:10:50.260640+0000 mon.a (mon.0) 3767 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in 2026-03-09T16:10:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:51 vm01 bash[20728]: cluster 2026-03-09T16:10:50.881425+0000 mgr.y (mgr.14520) 650 : cluster [DBG] pgmap v1157: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T16:10:51.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:51 vm01 bash[20728]: cluster 2026-03-09T16:10:50.881425+0000 mgr.y (mgr.14520) 650 : cluster [DBG] pgmap v1157: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T16:10:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:52 vm09 bash[22983]: cluster 2026-03-09T16:10:51.301072+0000 mon.a (mon.0) 3768 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T16:10:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:52 vm09 bash[22983]: cluster 2026-03-09T16:10:51.301072+0000 mon.a (mon.0) 3768 : 
cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T16:10:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:52 vm09 bash[22983]: audit 2026-03-09T16:10:51.301848+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:52 vm09 bash[22983]: audit 2026-03-09T16:10:51.301848+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:52 vm09 bash[22983]: audit 2026-03-09T16:10:51.303072+0000 mon.a (mon.0) 3769 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:52 vm09 bash[22983]: audit 2026-03-09T16:10:51.303072+0000 mon.a (mon.0) 3769 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:52.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:52 vm01 bash[28152]: cluster 2026-03-09T16:10:51.301072+0000 mon.a (mon.0) 3768 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T16:10:52.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:52 vm01 bash[28152]: cluster 2026-03-09T16:10:51.301072+0000 mon.a (mon.0) 3768 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T16:10:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:52 vm01 bash[28152]: audit 2026-03-09T16:10:51.301848+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:52 vm01 bash[28152]: audit 2026-03-09T16:10:51.301848+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:52 vm01 bash[28152]: audit 2026-03-09T16:10:51.303072+0000 mon.a (mon.0) 3769 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:52.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:52 vm01 bash[28152]: audit 2026-03-09T16:10:51.303072+0000 mon.a (mon.0) 3769 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:52 vm01 bash[20728]: cluster 2026-03-09T16:10:51.301072+0000 mon.a (mon.0) 3768 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T16:10:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:52 vm01 bash[20728]: cluster 2026-03-09T16:10:51.301072+0000 mon.a (mon.0) 3768 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in 2026-03-09T16:10:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:52 vm01 bash[20728]: audit 2026-03-09T16:10:51.301848+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:52 vm01 bash[20728]: audit 2026-03-09T16:10:51.301848+0000 mon.c (mon.2) 677 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:52 vm01 bash[20728]: audit 2026-03-09T16:10:51.303072+0000 mon.a (mon.0) 3769 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:52.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:52 vm01 bash[20728]: audit 2026-03-09T16:10:51.303072+0000 mon.a (mon.0) 3769 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:10:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:10:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlush (8438 ms) 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FailedFlush 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FailedFlush (12700 ms) 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Flush 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Flush (8167 ms) 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushSnap 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushSnap (13263 ms) 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushTryFlushRaces 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushTryFlushRaces (7645 ms) 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlushReadRace 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlushReadRace (8145 ms) 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] 
LibRadosTwoPoolsECPP.HitSetRead 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: hmm, no HitSet yet 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: ok, hit_set contains 329:602f83fe:::foo:head 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetRead (9242 ms) 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.HitSetTrim 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,0 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: first is 1773072567 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,0 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,0 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,1773072569,1773072570,0 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,1773072569,1773072570,0 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,1773072569,1773072570,0 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,1773072569,1773072570,1773072572,1773072573,0 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,1773072569,1773072570,1773072572,1773072573,0 2026-03-09T16:10:53.292 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,1773072569,1773072570,1773072572,1773072573,0 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,1773072569,1773072570,1773072572,1773072573,1773072575,1773072576,0 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,1773072569,1773072570,1773072572,1773072573,1773072575,1773072576,0 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072567,1773072569,1773072570,1773072572,1773072573,1773072575,1773072576,0 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: got ls 1773072570,1773072572,1773072573,1773072575,1773072576,1773072578,1773072579,0 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: first now 1773072570, trimmed 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetTrim (21679 ms) 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteOn2ndRead 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: foo0 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: verifying foo0 is eventually promoted 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteOn2ndRead (14669 ms) 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ProxyRead 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ProxyRead (18190 ms) 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.CachePin 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.CachePin (22994 ms) 
2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetRedirectRead 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.SetRedirectRead (5164 ms) 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetChunkRead 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.SetChunkRead (3047 ms) 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ManifestPromoteRead 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ManifestPromoteRead (3034 ms) 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TrySetDedupTier 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TrySetDedupTier (3022 ms) 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP (236212 ms total) 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [----------] Global test environment tear-down 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [==========] 77 tests from 4 test suites ran. (859113 ms total) 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stdout: api_tier_pp: [ PASSED ] 77 tests. 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59658 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59658 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59872 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59872 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60297 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60297 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60043 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60043 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59963 2026-03-09T16:10:53.293 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59963 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59764 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59764 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59996 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59996 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60415 2026-03-09T16:10:53.294 
INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60415 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60451 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60451 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59788 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59788 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60326 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60326 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59613 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59613 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59711 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59711 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60240 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60240 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59605 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59605 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59621 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59621 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60527 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60527 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59906 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59906 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60568 2026-03-09T16:10:53.294 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60568 2026-03-09T16:10:53.295 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.295 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59985 2026-03-09T16:10:53.295 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59985 2026-03-09T16:10:53.295 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.295 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60394 2026-03-09T16:10:53.295 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60394 2026-03-09T16:10:53.295 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:10:53.295 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60615 2026-03-09T16:10:53.295 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60615 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
16:10:53 vm09 bash[22983]: audit 2026-03-09T16:10:52.283372+0000 mon.a (mon.0) 3770 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:53 vm09 bash[22983]: audit 2026-03-09T16:10:52.283372+0000 mon.a (mon.0) 3770 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:53 vm09 bash[22983]: audit 2026-03-09T16:10:52.288220+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:53 vm09 bash[22983]: audit 2026-03-09T16:10:52.288220+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:53 vm09 bash[22983]: cluster 2026-03-09T16:10:52.289051+0000 mon.a (mon.0) 3771 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:53 vm09 bash[22983]: cluster 2026-03-09T16:10:52.289051+0000 mon.a (mon.0) 3771 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:53 vm09 bash[22983]: audit 2026-03-09T16:10:52.290527+0000 mon.a (mon.0) 3772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:53 vm09 bash[22983]: audit 2026-03-09T16:10:52.290527+0000 mon.a (mon.0) 3772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:53 vm09 bash[22983]: cluster 2026-03-09T16:10:52.881785+0000 mgr.y (mgr.14520) 651 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:53 vm09 bash[22983]: cluster 2026-03-09T16:10:52.881785+0000 mgr.y (mgr.14520) 651 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:53 vm09 bash[22983]: audit 2026-03-09T16:10:53.287040+0000 mon.a (mon.0) 3773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:53 vm09 bash[22983]: audit 2026-03-09T16:10:53.287040+0000 mon.a (mon.0) 3773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: audit 2026-03-09T16:10:52.283372+0000 mon.a (mon.0) 3770 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: audit 2026-03-09T16:10:52.283372+0000 mon.a (mon.0) 3770 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: audit 2026-03-09T16:10:52.288220+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: audit 2026-03-09T16:10:52.288220+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: cluster 2026-03-09T16:10:52.289051+0000 mon.a (mon.0) 3771 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: cluster 2026-03-09T16:10:52.289051+0000 mon.a (mon.0) 3771 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: audit 2026-03-09T16:10:52.290527+0000 mon.a (mon.0) 3772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: audit 2026-03-09T16:10:52.290527+0000 mon.a (mon.0) 3772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: cluster 2026-03-09T16:10:52.881785+0000 mgr.y (mgr.14520) 651 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: cluster 2026-03-09T16:10:52.881785+0000 mgr.y (mgr.14520) 651 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: audit 2026-03-09T16:10:53.287040+0000 mon.a (mon.0) 3773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:53 vm01 bash[20728]: audit 2026-03-09T16:10:53.287040+0000 mon.a (mon.0) 3773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: audit 2026-03-09T16:10:52.283372+0000 mon.a (mon.0) 3770 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: audit 2026-03-09T16:10:52.283372+0000 mon.a (mon.0) 3770 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: audit 2026-03-09T16:10:52.288220+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: audit 2026-03-09T16:10:52.288220+0000 mon.c (mon.2) 678 : audit [INF] from='client.? 192.168.123.101:0/3673512250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: cluster 2026-03-09T16:10:52.289051+0000 mon.a (mon.0) 3771 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: cluster 2026-03-09T16:10:52.289051+0000 mon.a (mon.0) 3771 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: audit 2026-03-09T16:10:52.290527+0000 mon.a (mon.0) 3772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: audit 2026-03-09T16:10:52.290527+0000 mon.a (mon.0) 3772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]: dispatch 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: cluster 2026-03-09T16:10:52.881785+0000 mgr.y (mgr.14520) 651 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: cluster 2026-03-09T16:10:52.881785+0000 mgr.y (mgr.14520) 651 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: audit 2026-03-09T16:10:53.287040+0000 mon.a (mon.0) 3773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:53.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:53 vm01 bash[28152]: audit 2026-03-09T16:10:53.287040+0000 mon.a (mon.0) 3773 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm01-59821-111"}]': finished 2026-03-09T16:10:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:54 vm09 bash[22983]: cluster 2026-03-09T16:10:53.291437+0000 mon.a (mon.0) 3774 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T16:10:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:54 vm09 bash[22983]: cluster 2026-03-09T16:10:53.291437+0000 mon.a (mon.0) 3774 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T16:10:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:54 vm09 bash[22983]: cluster 2026-03-09T16:10:54.147659+0000 mon.a (mon.0) 3775 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:54 vm09 bash[22983]: cluster 2026-03-09T16:10:54.147659+0000 mon.a (mon.0) 3775 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:54 vm01 bash[28152]: cluster 2026-03-09T16:10:53.291437+0000 mon.a (mon.0) 3774 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T16:10:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:54 vm01 bash[28152]: cluster 2026-03-09T16:10:53.291437+0000 mon.a (mon.0) 3774 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T16:10:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:54 vm01 bash[28152]: cluster 2026-03-09T16:10:54.147659+0000 mon.a (mon.0) 3775 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:54.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:54 vm01 bash[28152]: cluster 2026-03-09T16:10:54.147659+0000 mon.a (mon.0) 3775 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:54.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:54 vm01 bash[20728]: cluster 2026-03-09T16:10:53.291437+0000 mon.a (mon.0) 3774 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T16:10:54.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:54 vm01 bash[20728]: cluster 2026-03-09T16:10:53.291437+0000 mon.a (mon.0) 3774 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in 2026-03-09T16:10:54.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:54 vm01 bash[20728]: cluster 2026-03-09T16:10:54.147659+0000 mon.a (mon.0) 3775 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:54.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:54 vm01 bash[20728]: cluster 2026-03-09T16:10:54.147659+0000 mon.a (mon.0) 3775 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:10:55.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:55 vm09 bash[22983]: cluster 2026-03-09T16:10:54.882565+0000 mgr.y (mgr.14520) 652 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:55.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:55 vm09 bash[22983]: cluster 2026-03-09T16:10:54.882565+0000 mgr.y (mgr.14520) 652 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:55.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:55 vm01 bash[28152]: cluster 
2026-03-09T16:10:54.882565+0000 mgr.y (mgr.14520) 652 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:55.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:55 vm01 bash[28152]: cluster 2026-03-09T16:10:54.882565+0000 mgr.y (mgr.14520) 652 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:55 vm01 bash[20728]: cluster 2026-03-09T16:10:54.882565+0000 mgr.y (mgr.14520) 652 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:55.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:55 vm01 bash[20728]: cluster 2026-03-09T16:10:54.882565+0000 mgr.y (mgr.14520) 652 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:57.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:10:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:10:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:57 vm09 bash[22983]: cluster 2026-03-09T16:10:56.882833+0000 mgr.y (mgr.14520) 653 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:57 vm09 bash[22983]: cluster 2026-03-09T16:10:56.882833+0000 mgr.y (mgr.14520) 653 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:57 vm09 bash[22983]: audit 2026-03-09T16:10:56.905870+0000 mgr.y (mgr.14520) 654 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:57 vm09 bash[22983]: audit 2026-03-09T16:10:56.905870+0000 mgr.y (mgr.14520) 654 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:57 vm01 bash[28152]: cluster 2026-03-09T16:10:56.882833+0000 mgr.y (mgr.14520) 653 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:57 vm01 bash[28152]: cluster 2026-03-09T16:10:56.882833+0000 mgr.y (mgr.14520) 653 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:57 vm01 bash[28152]: audit 2026-03-09T16:10:56.905870+0000 mgr.y (mgr.14520) 654 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:58.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:57 vm01 bash[28152]: audit 2026-03-09T16:10:56.905870+0000 mgr.y (mgr.14520) 654 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:57 vm01 bash[20728]: cluster 2026-03-09T16:10:56.882833+0000 mgr.y (mgr.14520) 653 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 
1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:57 vm01 bash[20728]: cluster 2026-03-09T16:10:56.882833+0000 mgr.y (mgr.14520) 653 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:10:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:57 vm01 bash[20728]: audit 2026-03-09T16:10:56.905870+0000 mgr.y (mgr.14520) 654 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:10:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:57 vm01 bash[20728]: audit 2026-03-09T16:10:56.905870+0000 mgr.y (mgr.14520) 654 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:59 vm09 bash[22983]: cluster 2026-03-09T16:10:58.883465+0000 mgr.y (mgr.14520) 655 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 673 B/s rd, 0 op/s 2026-03-09T16:11:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:59 vm09 bash[22983]: cluster 2026-03-09T16:10:58.883465+0000 mgr.y (mgr.14520) 655 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 673 B/s rd, 0 op/s 2026-03-09T16:11:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:59 vm09 bash[22983]: audit 2026-03-09T16:10:59.563932+0000 mon.a (mon.0) 3776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:10:59 vm09 bash[22983]: audit 2026-03-09T16:10:59.563932+0000 mon.a (mon.0) 3776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:00.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:59 vm01 bash[20728]: cluster 2026-03-09T16:10:58.883465+0000 mgr.y (mgr.14520) 655 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 673 B/s rd, 0 op/s 2026-03-09T16:11:00.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:59 vm01 bash[20728]: cluster 2026-03-09T16:10:58.883465+0000 mgr.y (mgr.14520) 655 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 673 B/s rd, 0 op/s 2026-03-09T16:11:00.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:59 vm01 bash[20728]: audit 2026-03-09T16:10:59.563932+0000 mon.a (mon.0) 3776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:00.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:10:59 vm01 bash[20728]: audit 2026-03-09T16:10:59.563932+0000 mon.a (mon.0) 3776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:59 vm01 bash[28152]: cluster 2026-03-09T16:10:58.883465+0000 mgr.y (mgr.14520) 655 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 673 B/s rd, 0 op/s 2026-03-09T16:11:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 
09 16:10:59 vm01 bash[28152]: cluster 2026-03-09T16:10:58.883465+0000 mgr.y (mgr.14520) 655 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 673 B/s rd, 0 op/s 2026-03-09T16:11:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:59 vm01 bash[28152]: audit 2026-03-09T16:10:59.563932+0000 mon.a (mon.0) 3776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:00.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:10:59 vm01 bash[28152]: audit 2026-03-09T16:10:59.563932+0000 mon.a (mon.0) 3776 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:02 vm09 bash[22983]: cluster 2026-03-09T16:11:00.884058+0000 mgr.y (mgr.14520) 656 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:02 vm09 bash[22983]: cluster 2026-03-09T16:11:00.884058+0000 mgr.y (mgr.14520) 656 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:02.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:02 vm01 bash[28152]: cluster 2026-03-09T16:11:00.884058+0000 mgr.y (mgr.14520) 656 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:02 vm01 bash[28152]: cluster 2026-03-09T16:11:00.884058+0000 mgr.y (mgr.14520) 656 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:01 vm01 bash[20728]: cluster 2026-03-09T16:11:00.884058+0000 mgr.y (mgr.14520) 656 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:01 vm01 bash[20728]: cluster 2026-03-09T16:11:00.884058+0000 mgr.y (mgr.14520) 656 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:03.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:11:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:11:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:11:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:03 vm09 bash[22983]: cluster 2026-03-09T16:11:02.884331+0000 mgr.y (mgr.14520) 657 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:11:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:03 vm09 bash[22983]: cluster 2026-03-09T16:11:02.884331+0000 mgr.y (mgr.14520) 657 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:11:04.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:04 vm01 bash[28152]: cluster 2026-03-09T16:11:02.884331+0000 mgr.y (mgr.14520) 657 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:11:04.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:04 vm01 bash[28152]: cluster 2026-03-09T16:11:02.884331+0000 mgr.y (mgr.14520) 657 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:11:04.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:04 vm01 bash[20728]: cluster 2026-03-09T16:11:02.884331+0000 mgr.y (mgr.14520) 657 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:11:04.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:04 vm01 bash[20728]: cluster 2026-03-09T16:11:02.884331+0000 mgr.y (mgr.14520) 657 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:11:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:06 vm09 bash[22983]: cluster 2026-03-09T16:11:04.885038+0000 mgr.y (mgr.14520) 658 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:11:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:06 vm09 bash[22983]: cluster 2026-03-09T16:11:04.885038+0000 mgr.y (mgr.14520) 658 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:11:06.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:06 vm01 bash[28152]: cluster 2026-03-09T16:11:04.885038+0000 mgr.y (mgr.14520) 658 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:11:06.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:06 vm01 bash[28152]: cluster 2026-03-09T16:11:04.885038+0000 mgr.y (mgr.14520) 658 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:11:06.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:06 vm01 bash[20728]: cluster 2026-03-09T16:11:04.885038+0000 mgr.y (mgr.14520) 658 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:11:06.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:06 vm01 bash[20728]: cluster 2026-03-09T16:11:04.885038+0000 mgr.y (mgr.14520) 658 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T16:11:07.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:11:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:11:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:08 vm09 bash[22983]: cluster 2026-03-09T16:11:06.885318+0000 mgr.y (mgr.14520) 659 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:08 vm09 bash[22983]: cluster 2026-03-09T16:11:06.885318+0000 mgr.y (mgr.14520) 659 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:08 vm09 bash[22983]: audit 2026-03-09T16:11:06.915697+0000 mgr.y (mgr.14520) 660 : audit [DBG] 
from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:08 vm09 bash[22983]: audit 2026-03-09T16:11:06.915697+0000 mgr.y (mgr.14520) 660 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:08.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:08 vm01 bash[28152]: cluster 2026-03-09T16:11:06.885318+0000 mgr.y (mgr.14520) 659 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:08 vm01 bash[28152]: cluster 2026-03-09T16:11:06.885318+0000 mgr.y (mgr.14520) 659 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:08 vm01 bash[28152]: audit 2026-03-09T16:11:06.915697+0000 mgr.y (mgr.14520) 660 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:08.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:08 vm01 bash[28152]: audit 2026-03-09T16:11:06.915697+0000 mgr.y (mgr.14520) 660 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:08 vm01 bash[20728]: cluster 2026-03-09T16:11:06.885318+0000 mgr.y (mgr.14520) 659 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:08 vm01 bash[20728]: cluster 2026-03-09T16:11:06.885318+0000 mgr.y (mgr.14520) 659 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:08 vm01 bash[20728]: audit 2026-03-09T16:11:06.915697+0000 mgr.y (mgr.14520) 660 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:08.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:08 vm01 bash[20728]: audit 2026-03-09T16:11:06.915697+0000 mgr.y (mgr.14520) 660 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:10 vm09 bash[22983]: cluster 2026-03-09T16:11:08.885768+0000 mgr.y (mgr.14520) 661 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:10 vm09 bash[22983]: cluster 2026-03-09T16:11:08.885768+0000 mgr.y (mgr.14520) 661 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:10.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:10 vm01 bash[28152]: cluster 2026-03-09T16:11:08.885768+0000 mgr.y (mgr.14520) 661 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
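The + for t in "${!pids[@]}" / + pid=... / + wait ... lines earlier in this excerpt are set -x trace from the workunit runner as it collects the test binaries it had launched in parallel: each backgrounded test has its pid recorded, and each one is waited on in turn. A minimal sketch of that launch-and-wait pattern, with illustrative binary names and prefixes (not the actual qa/workunits/rados/test.sh):

    #!/usr/bin/env bash
    set -ex -o pipefail          # -x emits the '+ ...' trace lines captured in the log

    declare -A pids              # test name -> background pid

    for t in api_io api_misc api_tier_pp; do             # illustrative subset
        # prefix each output line with the test name, as seen in the log above
        { "ceph_test_rados_${t}" 2>&1 | sed "s/^/      ${t}: /"; } &
        pids[$t]=$!
    done

    for t in "${!pids[@]}"; do
        pid=${pids[$t]}
        wait "$pid"              # with -e and pipefail, a failing test aborts the run
    done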
2026-03-09T16:11:10.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:10 vm01 bash[28152]: cluster 2026-03-09T16:11:08.885768+0000 mgr.y (mgr.14520) 661 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:10.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:10 vm01 bash[20728]: cluster 2026-03-09T16:11:08.885768+0000 mgr.y (mgr.14520) 661 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:10.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:10 vm01 bash[20728]: cluster 2026-03-09T16:11:08.885768+0000 mgr.y (mgr.14520) 661 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:11 vm09 bash[22983]: cluster 2026-03-09T16:11:10.886305+0000 mgr.y (mgr.14520) 662 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:11 vm09 bash[22983]: cluster 2026-03-09T16:11:10.886305+0000 mgr.y (mgr.14520) 662 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:11 vm01 bash[28152]: cluster 2026-03-09T16:11:10.886305+0000 mgr.y (mgr.14520) 662 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:11.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:11 vm01 bash[28152]: cluster 2026-03-09T16:11:10.886305+0000 mgr.y (mgr.14520) 662 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:11 vm01 bash[20728]: cluster 2026-03-09T16:11:10.886305+0000 mgr.y (mgr.14520) 662 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:11.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:11 vm01 bash[20728]: cluster 2026-03-09T16:11:10.886305+0000 mgr.y (mgr.14520) 662 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:13.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:11:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:11:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:11:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:13 vm09 bash[22983]: cluster 2026-03-09T16:11:12.886576+0000 mgr.y (mgr.14520) 663 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:13 vm09 bash[22983]: cluster 2026-03-09T16:11:12.886576+0000 mgr.y (mgr.14520) 663 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:14.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:13 vm01 bash[28152]: cluster 2026-03-09T16:11:12.886576+0000 mgr.y (mgr.14520) 663 : cluster [DBG] pgmap 
v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:14.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:13 vm01 bash[28152]: cluster 2026-03-09T16:11:12.886576+0000 mgr.y (mgr.14520) 663 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:14.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:13 vm01 bash[20728]: cluster 2026-03-09T16:11:12.886576+0000 mgr.y (mgr.14520) 663 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:13 vm01 bash[20728]: cluster 2026-03-09T16:11:12.886576+0000 mgr.y (mgr.14520) 663 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:14 vm09 bash[22983]: audit 2026-03-09T16:11:14.569869+0000 mon.a (mon.0) 3777 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:15.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:14 vm09 bash[22983]: audit 2026-03-09T16:11:14.569869+0000 mon.a (mon.0) 3777 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:15.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:14 vm01 bash[28152]: audit 2026-03-09T16:11:14.569869+0000 mon.a (mon.0) 3777 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:15.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:14 vm01 bash[28152]: audit 2026-03-09T16:11:14.569869+0000 mon.a (mon.0) 3777 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:14 vm01 bash[20728]: audit 2026-03-09T16:11:14.569869+0000 mon.a (mon.0) 3777 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:15.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:14 vm01 bash[20728]: audit 2026-03-09T16:11:14.569869+0000 mon.a (mon.0) 3777 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:15 vm09 bash[22983]: cluster 2026-03-09T16:11:14.887340+0000 mgr.y (mgr.14520) 664 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:15 vm09 bash[22983]: cluster 2026-03-09T16:11:14.887340+0000 mgr.y (mgr.14520) 664 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:15 vm01 bash[28152]: cluster 2026-03-09T16:11:14.887340+0000 mgr.y (mgr.14520) 664 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:16.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:15 vm01 bash[28152]: cluster 2026-03-09T16:11:14.887340+0000 mgr.y (mgr.14520) 664 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:15 vm01 bash[20728]: cluster 2026-03-09T16:11:14.887340+0000 mgr.y (mgr.14520) 664 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:16.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:15 vm01 bash[20728]: cluster 2026-03-09T16:11:14.887340+0000 mgr.y (mgr.14520) 664 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:17.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:11:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:11:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:17 vm01 bash[28152]: cluster 2026-03-09T16:11:16.887656+0000 mgr.y (mgr.14520) 665 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:17.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:17 vm01 bash[28152]: cluster 2026-03-09T16:11:16.887656+0000 mgr.y (mgr.14520) 665 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:17.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:17 vm01 bash[28152]: audit 2026-03-09T16:11:16.919282+0000 mgr.y (mgr.14520) 666 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:17.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:17 vm01 bash[28152]: audit 2026-03-09T16:11:16.919282+0000 mgr.y (mgr.14520) 666 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:17.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:17 vm01 bash[20728]: cluster 2026-03-09T16:11:16.887656+0000 mgr.y (mgr.14520) 665 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:17.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:17 vm01 bash[20728]: cluster 2026-03-09T16:11:16.887656+0000 mgr.y (mgr.14520) 665 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:17.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:17 vm01 bash[20728]: audit 2026-03-09T16:11:16.919282+0000 mgr.y (mgr.14520) 666 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:17.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:17 vm01 bash[20728]: audit 2026-03-09T16:11:16.919282+0000 mgr.y (mgr.14520) 666 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:17 vm09 bash[22983]: cluster 2026-03-09T16:11:16.887656+0000 mgr.y (mgr.14520) 665 : cluster [DBG] pgmap v1173: 228 pgs: 
228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:17 vm09 bash[22983]: cluster 2026-03-09T16:11:16.887656+0000 mgr.y (mgr.14520) 665 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:17 vm09 bash[22983]: audit 2026-03-09T16:11:16.919282+0000 mgr.y (mgr.14520) 666 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:17 vm09 bash[22983]: audit 2026-03-09T16:11:16.919282+0000 mgr.y (mgr.14520) 666 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:19 vm09 bash[22983]: cluster 2026-03-09T16:11:18.888179+0000 mgr.y (mgr.14520) 667 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:19 vm09 bash[22983]: cluster 2026-03-09T16:11:18.888179+0000 mgr.y (mgr.14520) 667 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:20.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:19 vm01 bash[28152]: cluster 2026-03-09T16:11:18.888179+0000 mgr.y (mgr.14520) 667 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:20.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:19 vm01 bash[28152]: cluster 2026-03-09T16:11:18.888179+0000 mgr.y (mgr.14520) 667 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:20.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:19 vm01 bash[20728]: cluster 2026-03-09T16:11:18.888179+0000 mgr.y (mgr.14520) 667 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:20.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:19 vm01 bash[20728]: cluster 2026-03-09T16:11:18.888179+0000 mgr.y (mgr.14520) 667 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:21 vm09 bash[22983]: cluster 2026-03-09T16:11:20.888672+0000 mgr.y (mgr.14520) 668 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:21 vm09 bash[22983]: cluster 2026-03-09T16:11:20.888672+0000 mgr.y (mgr.14520) 668 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:22.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:21 vm01 bash[28152]: cluster 2026-03-09T16:11:20.888672+0000 mgr.y (mgr.14520) 668 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
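Each journalctl@ceph.mon.*, journalctl@ceph.mgr.* and journalctl@ceph.iscsi.* stream interleaved here is teuthology tailing the journal of a cephadm-managed daemon on the test nodes. The same entries can be followed by hand on the host; a minimal sketch, assuming a single cephadm cluster on the host and with <fsid> standing in for this deployment's cluster fsid:

    # via cephadm, which resolves the daemon name to its systemd unit
    cephadm logs --name mon.a -- -f

    # or directly through systemd, using cephadm's ceph-<fsid>@<daemon> unit naming
    journalctl -u "ceph-<fsid>@mon.a" -f -n 100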
2026-03-09T16:11:22.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:21 vm01 bash[28152]: cluster 2026-03-09T16:11:20.888672+0000 mgr.y (mgr.14520) 668 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:22.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:21 vm01 bash[20728]: cluster 2026-03-09T16:11:20.888672+0000 mgr.y (mgr.14520) 668 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:22.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:21 vm01 bash[20728]: cluster 2026-03-09T16:11:20.888672+0000 mgr.y (mgr.14520) 668 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:23.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:11:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:11:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:11:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:23 vm09 bash[22983]: cluster 2026-03-09T16:11:22.888947+0000 mgr.y (mgr.14520) 669 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:23 vm09 bash[22983]: cluster 2026-03-09T16:11:22.888947+0000 mgr.y (mgr.14520) 669 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:24.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:23 vm01 bash[20728]: cluster 2026-03-09T16:11:22.888947+0000 mgr.y (mgr.14520) 669 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:24.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:23 vm01 bash[20728]: cluster 2026-03-09T16:11:22.888947+0000 mgr.y (mgr.14520) 669 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:24.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:23 vm01 bash[28152]: cluster 2026-03-09T16:11:22.888947+0000 mgr.y (mgr.14520) 669 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:24.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:23 vm01 bash[28152]: cluster 2026-03-09T16:11:22.888947+0000 mgr.y (mgr.14520) 669 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:25 vm09 bash[22983]: cluster 2026-03-09T16:11:24.889599+0000 mgr.y (mgr.14520) 670 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:25 vm09 bash[22983]: cluster 2026-03-09T16:11:24.889599+0000 mgr.y (mgr.14520) 670 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:25 vm01 bash[20728]: cluster 2026-03-09T16:11:24.889599+0000 mgr.y (mgr.14520) 670 : cluster [DBG] pgmap 
v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:26.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:25 vm01 bash[20728]: cluster 2026-03-09T16:11:24.889599+0000 mgr.y (mgr.14520) 670 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:26.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:25 vm01 bash[28152]: cluster 2026-03-09T16:11:24.889599+0000 mgr.y (mgr.14520) 670 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:26.428 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:25 vm01 bash[28152]: cluster 2026-03-09T16:11:24.889599+0000 mgr.y (mgr.14520) 670 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:27.366 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:11:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:11:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:27 vm09 bash[22983]: cluster 2026-03-09T16:11:26.889907+0000 mgr.y (mgr.14520) 671 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:27 vm09 bash[22983]: cluster 2026-03-09T16:11:26.889907+0000 mgr.y (mgr.14520) 671 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:27 vm09 bash[22983]: audit 2026-03-09T16:11:26.931782+0000 mgr.y (mgr.14520) 672 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:27 vm09 bash[22983]: audit 2026-03-09T16:11:26.931782+0000 mgr.y (mgr.14520) 672 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:27.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:27 vm01 bash[20728]: cluster 2026-03-09T16:11:26.889907+0000 mgr.y (mgr.14520) 671 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:27.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:27 vm01 bash[20728]: cluster 2026-03-09T16:11:26.889907+0000 mgr.y (mgr.14520) 671 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:27.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:27 vm01 bash[20728]: audit 2026-03-09T16:11:26.931782+0000 mgr.y (mgr.14520) 672 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:27.676 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:27 vm01 bash[20728]: audit 2026-03-09T16:11:26.931782+0000 mgr.y (mgr.14520) 672 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:27.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:27 vm01 bash[28152]: cluster 2026-03-09T16:11:26.889907+0000 
mgr.y (mgr.14520) 671 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:27.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:27 vm01 bash[28152]: cluster 2026-03-09T16:11:26.889907+0000 mgr.y (mgr.14520) 671 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:27.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:27 vm01 bash[28152]: audit 2026-03-09T16:11:26.931782+0000 mgr.y (mgr.14520) 672 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:27.676 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:27 vm01 bash[28152]: audit 2026-03-09T16:11:26.931782+0000 mgr.y (mgr.14520) 672 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:29.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:29 vm09 bash[22983]: cluster 2026-03-09T16:11:28.890421+0000 mgr.y (mgr.14520) 673 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:29.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:29 vm09 bash[22983]: cluster 2026-03-09T16:11:28.890421+0000 mgr.y (mgr.14520) 673 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:29.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:29 vm01 bash[28152]: cluster 2026-03-09T16:11:28.890421+0000 mgr.y (mgr.14520) 673 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:29.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:29 vm01 bash[28152]: cluster 2026-03-09T16:11:28.890421+0000 mgr.y (mgr.14520) 673 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:29.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:29 vm01 bash[20728]: cluster 2026-03-09T16:11:28.890421+0000 mgr.y (mgr.14520) 673 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:29.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:29 vm01 bash[20728]: cluster 2026-03-09T16:11:28.890421+0000 mgr.y (mgr.14520) 673 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:30.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:30 vm09 bash[22983]: audit 2026-03-09T16:11:29.576287+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:30.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:30 vm09 bash[22983]: audit 2026-03-09T16:11:29.576287+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:30.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:30 vm01 bash[28152]: audit 2026-03-09T16:11:29.576287+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:30.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:30 vm01 bash[28152]: audit 2026-03-09T16:11:29.576287+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:30.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:30 vm01 bash[20728]: audit 2026-03-09T16:11:29.576287+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:30.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:30 vm01 bash[20728]: audit 2026-03-09T16:11:29.576287+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:31.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:31 vm09 bash[22983]: cluster 2026-03-09T16:11:30.890975+0000 mgr.y (mgr.14520) 674 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:31.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:31 vm09 bash[22983]: cluster 2026-03-09T16:11:30.890975+0000 mgr.y (mgr.14520) 674 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:31.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:31 vm01 bash[28152]: cluster 2026-03-09T16:11:30.890975+0000 mgr.y (mgr.14520) 674 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:31.926 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:31 vm01 bash[28152]: cluster 2026-03-09T16:11:30.890975+0000 mgr.y (mgr.14520) 674 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:31.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:31 vm01 bash[20728]: cluster 2026-03-09T16:11:30.890975+0000 mgr.y (mgr.14520) 674 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:31.926 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:31 vm01 bash[20728]: cluster 2026-03-09T16:11:30.890975+0000 mgr.y (mgr.14520) 674 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:33.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:11:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:11:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:11:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:33 vm09 bash[22983]: cluster 2026-03-09T16:11:32.891306+0000 mgr.y (mgr.14520) 675 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:33 vm09 bash[22983]: cluster 2026-03-09T16:11:32.891306+0000 mgr.y (mgr.14520) 675 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:34.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:33 vm01 bash[28152]: 
cluster 2026-03-09T16:11:32.891306+0000 mgr.y (mgr.14520) 675 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:34.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:33 vm01 bash[28152]: cluster 2026-03-09T16:11:32.891306+0000 mgr.y (mgr.14520) 675 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:33 vm01 bash[20728]: cluster 2026-03-09T16:11:32.891306+0000 mgr.y (mgr.14520) 675 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:34.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:33 vm01 bash[20728]: cluster 2026-03-09T16:11:32.891306+0000 mgr.y (mgr.14520) 675 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:35 vm09 bash[22983]: cluster 2026-03-09T16:11:34.892042+0000 mgr.y (mgr.14520) 676 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:35 vm09 bash[22983]: cluster 2026-03-09T16:11:34.892042+0000 mgr.y (mgr.14520) 676 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:36.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:35 vm01 bash[28152]: cluster 2026-03-09T16:11:34.892042+0000 mgr.y (mgr.14520) 676 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:36.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:35 vm01 bash[28152]: cluster 2026-03-09T16:11:34.892042+0000 mgr.y (mgr.14520) 676 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:35 vm01 bash[20728]: cluster 2026-03-09T16:11:34.892042+0000 mgr.y (mgr.14520) 676 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:36.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:35 vm01 bash[20728]: cluster 2026-03-09T16:11:34.892042+0000 mgr.y (mgr.14520) 676 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:37.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:11:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:11:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:37 vm09 bash[22983]: cluster 2026-03-09T16:11:36.892340+0000 mgr.y (mgr.14520) 677 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:37 vm09 bash[22983]: cluster 2026-03-09T16:11:36.892340+0000 mgr.y (mgr.14520) 677 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:38.382 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:37 vm09 bash[22983]: audit 2026-03-09T16:11:36.938413+0000 mgr.y (mgr.14520) 678 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:37 vm09 bash[22983]: audit 2026-03-09T16:11:36.938413+0000 mgr.y (mgr.14520) 678 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:37 vm01 bash[28152]: cluster 2026-03-09T16:11:36.892340+0000 mgr.y (mgr.14520) 677 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:37 vm01 bash[28152]: cluster 2026-03-09T16:11:36.892340+0000 mgr.y (mgr.14520) 677 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:37 vm01 bash[28152]: audit 2026-03-09T16:11:36.938413+0000 mgr.y (mgr.14520) 678 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:37 vm01 bash[28152]: audit 2026-03-09T16:11:36.938413+0000 mgr.y (mgr.14520) 678 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:38.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:37 vm01 bash[20728]: cluster 2026-03-09T16:11:36.892340+0000 mgr.y (mgr.14520) 677 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:38.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:37 vm01 bash[20728]: cluster 2026-03-09T16:11:36.892340+0000 mgr.y (mgr.14520) 677 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:38.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:37 vm01 bash[20728]: audit 2026-03-09T16:11:36.938413+0000 mgr.y (mgr.14520) 678 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:38.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:37 vm01 bash[20728]: audit 2026-03-09T16:11:36.938413+0000 mgr.y (mgr.14520) 678 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:39 vm09 bash[22983]: cluster 2026-03-09T16:11:38.892774+0000 mgr.y (mgr.14520) 679 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:39 vm09 bash[22983]: cluster 2026-03-09T16:11:38.892774+0000 mgr.y (mgr.14520) 679 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:40.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:39 vm01 bash[28152]: cluster 2026-03-09T16:11:38.892774+0000 
mgr.y (mgr.14520) 679 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:40.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:39 vm01 bash[28152]: cluster 2026-03-09T16:11:38.892774+0000 mgr.y (mgr.14520) 679 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:40.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:39 vm01 bash[20728]: cluster 2026-03-09T16:11:38.892774+0000 mgr.y (mgr.14520) 679 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:40.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:39 vm01 bash[20728]: cluster 2026-03-09T16:11:38.892774+0000 mgr.y (mgr.14520) 679 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:42 vm09 bash[22983]: cluster 2026-03-09T16:11:40.893392+0000 mgr.y (mgr.14520) 680 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:42 vm09 bash[22983]: cluster 2026-03-09T16:11:40.893392+0000 mgr.y (mgr.14520) 680 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:42.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:42 vm01 bash[28152]: cluster 2026-03-09T16:11:40.893392+0000 mgr.y (mgr.14520) 680 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:42.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:42 vm01 bash[28152]: cluster 2026-03-09T16:11:40.893392+0000 mgr.y (mgr.14520) 680 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:42.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:42 vm01 bash[20728]: cluster 2026-03-09T16:11:40.893392+0000 mgr.y (mgr.14520) 680 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:42.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:42 vm01 bash[20728]: cluster 2026-03-09T16:11:40.893392+0000 mgr.y (mgr.14520) 680 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:43.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:11:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:11:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:11:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:44 vm09 bash[22983]: cluster 2026-03-09T16:11:42.893771+0000 mgr.y (mgr.14520) 681 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:44 vm09 bash[22983]: cluster 2026-03-09T16:11:42.893771+0000 mgr.y (mgr.14520) 681 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:44.382 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:44 vm09 bash[22983]: audit 2026-03-09T16:11:43.963095+0000 mon.a (mon.0) 3779 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:11:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:44 vm09 bash[22983]: audit 2026-03-09T16:11:43.963095+0000 mon.a (mon.0) 3779 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:11:44.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:44 vm01 bash[28152]: cluster 2026-03-09T16:11:42.893771+0000 mgr.y (mgr.14520) 681 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:44.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:44 vm01 bash[28152]: cluster 2026-03-09T16:11:42.893771+0000 mgr.y (mgr.14520) 681 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:44.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:44 vm01 bash[28152]: audit 2026-03-09T16:11:43.963095+0000 mon.a (mon.0) 3779 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:11:44.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:44 vm01 bash[28152]: audit 2026-03-09T16:11:43.963095+0000 mon.a (mon.0) 3779 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:11:44.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:44 vm01 bash[20728]: cluster 2026-03-09T16:11:42.893771+0000 mgr.y (mgr.14520) 681 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:44.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:44 vm01 bash[20728]: cluster 2026-03-09T16:11:42.893771+0000 mgr.y (mgr.14520) 681 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:44.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:44 vm01 bash[20728]: audit 2026-03-09T16:11:43.963095+0000 mon.a (mon.0) 3779 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:11:44.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:44 vm01 bash[20728]: audit 2026-03-09T16:11:43.963095+0000 mon.a (mon.0) 3779 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:11:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:45 vm09 bash[22983]: audit 2026-03-09T16:11:44.319560+0000 mon.a (mon.0) 3780 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:11:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:45 vm09 bash[22983]: audit 2026-03-09T16:11:44.319560+0000 mon.a (mon.0) 3780 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:11:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:45 vm09 bash[22983]: audit 
2026-03-09T16:11:44.320475+0000 mon.a (mon.0) 3781 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:11:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:45 vm09 bash[22983]: audit 2026-03-09T16:11:44.320475+0000 mon.a (mon.0) 3781 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:11:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:45 vm09 bash[22983]: audit 2026-03-09T16:11:44.328221+0000 mon.a (mon.0) 3782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:11:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:45 vm09 bash[22983]: audit 2026-03-09T16:11:44.328221+0000 mon.a (mon.0) 3782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:11:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:45 vm09 bash[22983]: audit 2026-03-09T16:11:44.583521+0000 mon.a (mon.0) 3783 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:45 vm09 bash[22983]: audit 2026-03-09T16:11:44.583521+0000 mon.a (mon.0) 3783 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:45 vm01 bash[28152]: audit 2026-03-09T16:11:44.319560+0000 mon.a (mon.0) 3780 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:11:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:45 vm01 bash[28152]: audit 2026-03-09T16:11:44.319560+0000 mon.a (mon.0) 3780 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:11:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:45 vm01 bash[28152]: audit 2026-03-09T16:11:44.320475+0000 mon.a (mon.0) 3781 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:11:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:45 vm01 bash[28152]: audit 2026-03-09T16:11:44.320475+0000 mon.a (mon.0) 3781 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:11:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:45 vm01 bash[28152]: audit 2026-03-09T16:11:44.328221+0000 mon.a (mon.0) 3782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:11:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:45 vm01 bash[28152]: audit 2026-03-09T16:11:44.328221+0000 mon.a (mon.0) 3782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:11:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:45 vm01 bash[28152]: audit 2026-03-09T16:11:44.583521+0000 mon.a (mon.0) 3783 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:45 vm01 bash[28152]: audit 
2026-03-09T16:11:44.583521+0000 mon.a (mon.0) 3783 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:45 vm01 bash[20728]: audit 2026-03-09T16:11:44.319560+0000 mon.a (mon.0) 3780 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:11:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:45 vm01 bash[20728]: audit 2026-03-09T16:11:44.319560+0000 mon.a (mon.0) 3780 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:11:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:45 vm01 bash[20728]: audit 2026-03-09T16:11:44.320475+0000 mon.a (mon.0) 3781 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:11:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:45 vm01 bash[20728]: audit 2026-03-09T16:11:44.320475+0000 mon.a (mon.0) 3781 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:11:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:45 vm01 bash[20728]: audit 2026-03-09T16:11:44.328221+0000 mon.a (mon.0) 3782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:11:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:45 vm01 bash[20728]: audit 2026-03-09T16:11:44.328221+0000 mon.a (mon.0) 3782 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:11:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:45 vm01 bash[20728]: audit 2026-03-09T16:11:44.583521+0000 mon.a (mon.0) 3783 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:45 vm01 bash[20728]: audit 2026-03-09T16:11:44.583521+0000 mon.a (mon.0) 3783 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:11:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:46 vm09 bash[22983]: cluster 2026-03-09T16:11:44.894440+0000 mgr.y (mgr.14520) 682 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:46 vm09 bash[22983]: cluster 2026-03-09T16:11:44.894440+0000 mgr.y (mgr.14520) 682 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:46 vm01 bash[28152]: cluster 2026-03-09T16:11:44.894440+0000 mgr.y (mgr.14520) 682 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:46 vm01 bash[28152]: cluster 2026-03-09T16:11:44.894440+0000 mgr.y (mgr.14520) 682 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-09T16:11:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:46 vm01 bash[20728]: cluster 2026-03-09T16:11:44.894440+0000 mgr.y (mgr.14520) 682 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:46 vm01 bash[20728]: cluster 2026-03-09T16:11:44.894440+0000 mgr.y (mgr.14520) 682 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:47.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:11:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:11:48.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:48 vm09 bash[22983]: cluster 2026-03-09T16:11:46.894734+0000 mgr.y (mgr.14520) 683 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:48.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:48 vm09 bash[22983]: cluster 2026-03-09T16:11:46.894734+0000 mgr.y (mgr.14520) 683 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:48.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:48 vm09 bash[22983]: audit 2026-03-09T16:11:46.947519+0000 mgr.y (mgr.14520) 684 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:48.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:48 vm09 bash[22983]: audit 2026-03-09T16:11:46.947519+0000 mgr.y (mgr.14520) 684 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:48 vm01 bash[28152]: cluster 2026-03-09T16:11:46.894734+0000 mgr.y (mgr.14520) 683 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:48 vm01 bash[28152]: cluster 2026-03-09T16:11:46.894734+0000 mgr.y (mgr.14520) 683 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:48 vm01 bash[28152]: audit 2026-03-09T16:11:46.947519+0000 mgr.y (mgr.14520) 684 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:48 vm01 bash[28152]: audit 2026-03-09T16:11:46.947519+0000 mgr.y (mgr.14520) 684 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:48.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:48 vm01 bash[20728]: cluster 2026-03-09T16:11:46.894734+0000 mgr.y (mgr.14520) 683 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:48.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:48 vm01 bash[20728]: cluster 2026-03-09T16:11:46.894734+0000 mgr.y (mgr.14520) 683 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:48.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:48 vm01 bash[20728]: audit 2026-03-09T16:11:46.947519+0000 mgr.y (mgr.14520) 684 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:48.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:48 vm01 bash[20728]: audit 2026-03-09T16:11:46.947519+0000 mgr.y (mgr.14520) 684 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:50 vm09 bash[22983]: cluster 2026-03-09T16:11:48.895387+0000 mgr.y (mgr.14520) 685 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:50 vm09 bash[22983]: cluster 2026-03-09T16:11:48.895387+0000 mgr.y (mgr.14520) 685 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:50.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:50 vm01 bash[28152]: cluster 2026-03-09T16:11:48.895387+0000 mgr.y (mgr.14520) 685 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:50.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:50 vm01 bash[28152]: cluster 2026-03-09T16:11:48.895387+0000 mgr.y (mgr.14520) 685 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:50.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:50 vm01 bash[20728]: cluster 2026-03-09T16:11:48.895387+0000 mgr.y (mgr.14520) 685 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:50.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:50 vm01 bash[20728]: cluster 2026-03-09T16:11:48.895387+0000 mgr.y (mgr.14520) 685 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:52.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:52 vm09 bash[22983]: cluster 2026-03-09T16:11:50.895927+0000 mgr.y (mgr.14520) 686 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:52.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:52 vm09 bash[22983]: cluster 2026-03-09T16:11:50.895927+0000 mgr.y (mgr.14520) 686 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:52.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:52 vm01 bash[28152]: cluster 2026-03-09T16:11:50.895927+0000 mgr.y (mgr.14520) 686 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:52.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:52 vm01 bash[28152]: cluster 2026-03-09T16:11:50.895927+0000 mgr.y (mgr.14520) 686 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:52.426 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:52 vm01 bash[20728]: cluster 2026-03-09T16:11:50.895927+0000 mgr.y (mgr.14520) 686 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:52.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:52 vm01 bash[20728]: cluster 2026-03-09T16:11:50.895927+0000 mgr.y (mgr.14520) 686 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:53.176 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:11:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:11:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:11:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:54 vm09 bash[22983]: cluster 2026-03-09T16:11:52.896207+0000 mgr.y (mgr.14520) 687 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:54 vm09 bash[22983]: cluster 2026-03-09T16:11:52.896207+0000 mgr.y (mgr.14520) 687 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:54.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:54 vm01 bash[28152]: cluster 2026-03-09T16:11:52.896207+0000 mgr.y (mgr.14520) 687 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:54.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:54 vm01 bash[28152]: cluster 2026-03-09T16:11:52.896207+0000 mgr.y (mgr.14520) 687 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:54.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:54 vm01 bash[20728]: cluster 2026-03-09T16:11:52.896207+0000 mgr.y (mgr.14520) 687 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:54.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:54 vm01 bash[20728]: cluster 2026-03-09T16:11:52.896207+0000 mgr.y (mgr.14520) 687 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:56 vm09 bash[22983]: cluster 2026-03-09T16:11:54.896795+0000 mgr.y (mgr.14520) 688 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:56 vm09 bash[22983]: cluster 2026-03-09T16:11:54.896795+0000 mgr.y (mgr.14520) 688 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:56.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:56 vm01 bash[28152]: cluster 2026-03-09T16:11:54.896795+0000 mgr.y (mgr.14520) 688 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:56.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:56 vm01 bash[28152]: cluster 2026-03-09T16:11:54.896795+0000 mgr.y (mgr.14520) 688 : cluster [DBG] pgmap v1192: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:56.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:56 vm01 bash[20728]: cluster 2026-03-09T16:11:54.896795+0000 mgr.y (mgr.14520) 688 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:56.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:56 vm01 bash[20728]: cluster 2026-03-09T16:11:54.896795+0000 mgr.y (mgr.14520) 688 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:11:57.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:11:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:11:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:58 vm09 bash[22983]: cluster 2026-03-09T16:11:56.897143+0000 mgr.y (mgr.14520) 689 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:58 vm09 bash[22983]: cluster 2026-03-09T16:11:56.897143+0000 mgr.y (mgr.14520) 689 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:58 vm09 bash[22983]: audit 2026-03-09T16:11:56.953962+0000 mgr.y (mgr.14520) 690 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:11:58 vm09 bash[22983]: audit 2026-03-09T16:11:56.953962+0000 mgr.y (mgr.14520) 690 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:58 vm01 bash[28152]: cluster 2026-03-09T16:11:56.897143+0000 mgr.y (mgr.14520) 689 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:58 vm01 bash[28152]: cluster 2026-03-09T16:11:56.897143+0000 mgr.y (mgr.14520) 689 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:58 vm01 bash[28152]: audit 2026-03-09T16:11:56.953962+0000 mgr.y (mgr.14520) 690 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:11:58 vm01 bash[28152]: audit 2026-03-09T16:11:56.953962+0000 mgr.y (mgr.14520) 690 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:58 vm01 bash[20728]: cluster 2026-03-09T16:11:56.897143+0000 mgr.y (mgr.14520) 689 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:58 vm01 bash[20728]: cluster 2026-03-09T16:11:56.897143+0000 mgr.y (mgr.14520) 689 
: cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:11:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:58 vm01 bash[20728]: audit 2026-03-09T16:11:56.953962+0000 mgr.y (mgr.14520) 690 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:11:58.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:11:58 vm01 bash[20728]: audit 2026-03-09T16:11:56.953962+0000 mgr.y (mgr.14520) 690 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:00 vm09 bash[22983]: cluster 2026-03-09T16:11:58.897813+0000 mgr.y (mgr.14520) 691 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:00 vm09 bash[22983]: cluster 2026-03-09T16:11:58.897813+0000 mgr.y (mgr.14520) 691 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:00 vm09 bash[22983]: audit 2026-03-09T16:11:59.591605+0000 mon.a (mon.0) 3784 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:00 vm09 bash[22983]: audit 2026-03-09T16:11:59.591605+0000 mon.a (mon.0) 3784 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:00 vm01 bash[28152]: cluster 2026-03-09T16:11:58.897813+0000 mgr.y (mgr.14520) 691 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:00 vm01 bash[28152]: cluster 2026-03-09T16:11:58.897813+0000 mgr.y (mgr.14520) 691 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:00 vm01 bash[28152]: audit 2026-03-09T16:11:59.591605+0000 mon.a (mon.0) 3784 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:00 vm01 bash[28152]: audit 2026-03-09T16:11:59.591605+0000 mon.a (mon.0) 3784 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:00.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:00 vm01 bash[20728]: cluster 2026-03-09T16:11:58.897813+0000 mgr.y (mgr.14520) 691 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:00.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:00 vm01 bash[20728]: cluster 2026-03-09T16:11:58.897813+0000 mgr.y (mgr.14520) 691 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:00.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:00 vm01 bash[20728]: audit 2026-03-09T16:11:59.591605+0000 mon.a (mon.0) 3784 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:00.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:00 vm01 bash[20728]: audit 2026-03-09T16:11:59.591605+0000 mon.a (mon.0) 3784 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:02 vm09 bash[22983]: cluster 2026-03-09T16:12:00.898391+0000 mgr.y (mgr.14520) 692 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:02 vm09 bash[22983]: cluster 2026-03-09T16:12:00.898391+0000 mgr.y (mgr.14520) 692 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:02.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:02 vm01 bash[28152]: cluster 2026-03-09T16:12:00.898391+0000 mgr.y (mgr.14520) 692 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:02.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:02 vm01 bash[28152]: cluster 2026-03-09T16:12:00.898391+0000 mgr.y (mgr.14520) 692 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:02.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:02 vm01 bash[20728]: cluster 2026-03-09T16:12:00.898391+0000 mgr.y (mgr.14520) 692 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:02.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:02 vm01 bash[20728]: cluster 2026-03-09T16:12:00.898391+0000 mgr.y (mgr.14520) 692 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:03.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:12:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:12:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:12:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:04 vm09 bash[22983]: cluster 2026-03-09T16:12:02.898662+0000 mgr.y (mgr.14520) 693 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:04 vm09 bash[22983]: cluster 2026-03-09T16:12:02.898662+0000 mgr.y (mgr.14520) 693 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:04.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:04 vm01 bash[28152]: cluster 2026-03-09T16:12:02.898662+0000 mgr.y (mgr.14520) 693 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:04.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:04 vm01 bash[28152]: cluster 
2026-03-09T16:12:02.898662+0000 mgr.y (mgr.14520) 693 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:04.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:04 vm01 bash[20728]: cluster 2026-03-09T16:12:02.898662+0000 mgr.y (mgr.14520) 693 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:04.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:04 vm01 bash[20728]: cluster 2026-03-09T16:12:02.898662+0000 mgr.y (mgr.14520) 693 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:06.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:06 vm01 bash[28152]: cluster 2026-03-09T16:12:04.899391+0000 mgr.y (mgr.14520) 694 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:06.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:06 vm01 bash[28152]: cluster 2026-03-09T16:12:04.899391+0000 mgr.y (mgr.14520) 694 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:06.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:06 vm01 bash[20728]: cluster 2026-03-09T16:12:04.899391+0000 mgr.y (mgr.14520) 694 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:06.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:06 vm01 bash[20728]: cluster 2026-03-09T16:12:04.899391+0000 mgr.y (mgr.14520) 694 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:06 vm09 bash[22983]: cluster 2026-03-09T16:12:04.899391+0000 mgr.y (mgr.14520) 694 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:06 vm09 bash[22983]: cluster 2026-03-09T16:12:04.899391+0000 mgr.y (mgr.14520) 694 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:07.217 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:12:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:12:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:07 vm09 bash[22983]: cluster 2026-03-09T16:12:06.899703+0000 mgr.y (mgr.14520) 695 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:07 vm09 bash[22983]: cluster 2026-03-09T16:12:06.899703+0000 mgr.y (mgr.14520) 695 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:07 vm09 bash[22983]: audit 2026-03-09T16:12:06.962802+0000 mgr.y (mgr.14520) 696 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:07.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:07 vm09 bash[22983]: audit 2026-03-09T16:12:06.962802+0000 mgr.y (mgr.14520) 696 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:07.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:07 vm01 bash[28152]: cluster 2026-03-09T16:12:06.899703+0000 mgr.y (mgr.14520) 695 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:07.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:07 vm01 bash[28152]: cluster 2026-03-09T16:12:06.899703+0000 mgr.y (mgr.14520) 695 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:07.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:07 vm01 bash[28152]: audit 2026-03-09T16:12:06.962802+0000 mgr.y (mgr.14520) 696 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:07.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:07 vm01 bash[28152]: audit 2026-03-09T16:12:06.962802+0000 mgr.y (mgr.14520) 696 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:07.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:07 vm01 bash[20728]: cluster 2026-03-09T16:12:06.899703+0000 mgr.y (mgr.14520) 695 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:07.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:07 vm01 bash[20728]: cluster 2026-03-09T16:12:06.899703+0000 mgr.y (mgr.14520) 695 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:07.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:07 vm01 bash[20728]: audit 2026-03-09T16:12:06.962802+0000 mgr.y (mgr.14520) 696 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:07.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:07 vm01 bash[20728]: audit 2026-03-09T16:12:06.962802+0000 mgr.y (mgr.14520) 696 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:09 vm09 bash[22983]: cluster 2026-03-09T16:12:08.900131+0000 mgr.y (mgr.14520) 697 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:09 vm09 bash[22983]: cluster 2026-03-09T16:12:08.900131+0000 mgr.y (mgr.14520) 697 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:10.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:09 vm01 bash[28152]: cluster 2026-03-09T16:12:08.900131+0000 mgr.y (mgr.14520) 697 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:10.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:09 vm01 bash[28152]: cluster 2026-03-09T16:12:08.900131+0000 
mgr.y (mgr.14520) 697 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:10.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:09 vm01 bash[20728]: cluster 2026-03-09T16:12:08.900131+0000 mgr.y (mgr.14520) 697 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:10.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:09 vm01 bash[20728]: cluster 2026-03-09T16:12:08.900131+0000 mgr.y (mgr.14520) 697 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:11 vm09 bash[22983]: cluster 2026-03-09T16:12:10.900611+0000 mgr.y (mgr.14520) 698 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:11 vm09 bash[22983]: cluster 2026-03-09T16:12:10.900611+0000 mgr.y (mgr.14520) 698 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:12.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:11 vm01 bash[28152]: cluster 2026-03-09T16:12:10.900611+0000 mgr.y (mgr.14520) 698 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:12.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:11 vm01 bash[28152]: cluster 2026-03-09T16:12:10.900611+0000 mgr.y (mgr.14520) 698 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:12.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:11 vm01 bash[20728]: cluster 2026-03-09T16:12:10.900611+0000 mgr.y (mgr.14520) 698 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:12.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:11 vm01 bash[20728]: cluster 2026-03-09T16:12:10.900611+0000 mgr.y (mgr.14520) 698 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:13.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:12:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:12:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:12:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:13 vm09 bash[22983]: cluster 2026-03-09T16:12:12.900935+0000 mgr.y (mgr.14520) 699 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:13 vm09 bash[22983]: cluster 2026-03-09T16:12:12.900935+0000 mgr.y (mgr.14520) 699 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:14.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:13 vm01 bash[28152]: cluster 2026-03-09T16:12:12.900935+0000 mgr.y (mgr.14520) 699 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:14.425 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:13 vm01 bash[28152]: cluster 2026-03-09T16:12:12.900935+0000 mgr.y (mgr.14520) 699 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:13 vm01 bash[20728]: cluster 2026-03-09T16:12:12.900935+0000 mgr.y (mgr.14520) 699 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:14.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:13 vm01 bash[20728]: cluster 2026-03-09T16:12:12.900935+0000 mgr.y (mgr.14520) 699 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:14 vm09 bash[22983]: audit 2026-03-09T16:12:14.597843+0000 mon.a (mon.0) 3785 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:14 vm09 bash[22983]: audit 2026-03-09T16:12:14.597843+0000 mon.a (mon.0) 3785 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:15.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:14 vm01 bash[28152]: audit 2026-03-09T16:12:14.597843+0000 mon.a (mon.0) 3785 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:15.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:14 vm01 bash[28152]: audit 2026-03-09T16:12:14.597843+0000 mon.a (mon.0) 3785 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:15.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:14 vm01 bash[20728]: audit 2026-03-09T16:12:14.597843+0000 mon.a (mon.0) 3785 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:15.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:14 vm01 bash[20728]: audit 2026-03-09T16:12:14.597843+0000 mon.a (mon.0) 3785 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:15 vm09 bash[22983]: cluster 2026-03-09T16:12:14.901796+0000 mgr.y (mgr.14520) 700 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:15 vm09 bash[22983]: cluster 2026-03-09T16:12:14.901796+0000 mgr.y (mgr.14520) 700 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:16.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:15 vm01 bash[28152]: cluster 2026-03-09T16:12:14.901796+0000 mgr.y (mgr.14520) 700 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:16.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:15 vm01 
bash[28152]: cluster 2026-03-09T16:12:14.901796+0000 mgr.y (mgr.14520) 700 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:16.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:15 vm01 bash[20728]: cluster 2026-03-09T16:12:14.901796+0000 mgr.y (mgr.14520) 700 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:16.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:15 vm01 bash[20728]: cluster 2026-03-09T16:12:14.901796+0000 mgr.y (mgr.14520) 700 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:17.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:12:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:12:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:17 vm09 bash[22983]: cluster 2026-03-09T16:12:16.902086+0000 mgr.y (mgr.14520) 701 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:17 vm09 bash[22983]: cluster 2026-03-09T16:12:16.902086+0000 mgr.y (mgr.14520) 701 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:17 vm09 bash[22983]: audit 2026-03-09T16:12:16.966286+0000 mgr.y (mgr.14520) 702 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:17 vm09 bash[22983]: audit 2026-03-09T16:12:16.966286+0000 mgr.y (mgr.14520) 702 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:18.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:17 vm01 bash[28152]: cluster 2026-03-09T16:12:16.902086+0000 mgr.y (mgr.14520) 701 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:18.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:17 vm01 bash[28152]: cluster 2026-03-09T16:12:16.902086+0000 mgr.y (mgr.14520) 701 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:18.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:17 vm01 bash[28152]: audit 2026-03-09T16:12:16.966286+0000 mgr.y (mgr.14520) 702 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:18.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:17 vm01 bash[28152]: audit 2026-03-09T16:12:16.966286+0000 mgr.y (mgr.14520) 702 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:18.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:17 vm01 bash[20728]: cluster 2026-03-09T16:12:16.902086+0000 mgr.y (mgr.14520) 701 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:18.425 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:17 vm01 bash[20728]: cluster 2026-03-09T16:12:16.902086+0000 mgr.y (mgr.14520) 701 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:18.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:17 vm01 bash[20728]: audit 2026-03-09T16:12:16.966286+0000 mgr.y (mgr.14520) 702 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:18.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:17 vm01 bash[20728]: audit 2026-03-09T16:12:16.966286+0000 mgr.y (mgr.14520) 702 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:19 vm09 bash[22983]: cluster 2026-03-09T16:12:18.902587+0000 mgr.y (mgr.14520) 703 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:19 vm09 bash[22983]: cluster 2026-03-09T16:12:18.902587+0000 mgr.y (mgr.14520) 703 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:20.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:19 vm01 bash[28152]: cluster 2026-03-09T16:12:18.902587+0000 mgr.y (mgr.14520) 703 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:20.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:19 vm01 bash[28152]: cluster 2026-03-09T16:12:18.902587+0000 mgr.y (mgr.14520) 703 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:20.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:19 vm01 bash[20728]: cluster 2026-03-09T16:12:18.902587+0000 mgr.y (mgr.14520) 703 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:20.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:19 vm01 bash[20728]: cluster 2026-03-09T16:12:18.902587+0000 mgr.y (mgr.14520) 703 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:22 vm09 bash[22983]: cluster 2026-03-09T16:12:20.903115+0000 mgr.y (mgr.14520) 704 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:22 vm09 bash[22983]: cluster 2026-03-09T16:12:20.903115+0000 mgr.y (mgr.14520) 704 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:22.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:22 vm01 bash[28152]: cluster 2026-03-09T16:12:20.903115+0000 mgr.y (mgr.14520) 704 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:22.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:22 vm01 bash[28152]: cluster 
2026-03-09T16:12:20.903115+0000 mgr.y (mgr.14520) 704 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:22.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:22 vm01 bash[20728]: cluster 2026-03-09T16:12:20.903115+0000 mgr.y (mgr.14520) 704 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:22.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:22 vm01 bash[20728]: cluster 2026-03-09T16:12:20.903115+0000 mgr.y (mgr.14520) 704 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:23.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:12:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:12:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:12:24.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:24 vm01 bash[28152]: cluster 2026-03-09T16:12:22.903441+0000 mgr.y (mgr.14520) 705 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:24.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:24 vm01 bash[28152]: cluster 2026-03-09T16:12:22.903441+0000 mgr.y (mgr.14520) 705 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:24.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:24 vm01 bash[20728]: cluster 2026-03-09T16:12:22.903441+0000 mgr.y (mgr.14520) 705 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:24.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:24 vm01 bash[20728]: cluster 2026-03-09T16:12:22.903441+0000 mgr.y (mgr.14520) 705 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:24 vm09 bash[22983]: cluster 2026-03-09T16:12:22.903441+0000 mgr.y (mgr.14520) 705 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:24 vm09 bash[22983]: cluster 2026-03-09T16:12:22.903441+0000 mgr.y (mgr.14520) 705 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:26.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:26 vm01 bash[28152]: cluster 2026-03-09T16:12:24.904178+0000 mgr.y (mgr.14520) 706 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:26.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:26 vm01 bash[28152]: cluster 2026-03-09T16:12:24.904178+0000 mgr.y (mgr.14520) 706 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:26.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:26 vm01 bash[20728]: cluster 2026-03-09T16:12:24.904178+0000 mgr.y (mgr.14520) 706 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T16:12:26.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:26 vm01 bash[20728]: cluster 2026-03-09T16:12:24.904178+0000 mgr.y (mgr.14520) 706 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:26 vm09 bash[22983]: cluster 2026-03-09T16:12:24.904178+0000 mgr.y (mgr.14520) 706 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:26 vm09 bash[22983]: cluster 2026-03-09T16:12:24.904178+0000 mgr.y (mgr.14520) 706 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:27.223 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:12:26 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:12:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:27 vm09 bash[22983]: cluster 2026-03-09T16:12:26.904465+0000 mgr.y (mgr.14520) 707 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:27 vm09 bash[22983]: cluster 2026-03-09T16:12:26.904465+0000 mgr.y (mgr.14520) 707 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:27 vm09 bash[22983]: audit 2026-03-09T16:12:26.968452+0000 mgr.y (mgr.14520) 708 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:27 vm09 bash[22983]: audit 2026-03-09T16:12:26.968452+0000 mgr.y (mgr.14520) 708 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:27.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:27 vm01 bash[28152]: cluster 2026-03-09T16:12:26.904465+0000 mgr.y (mgr.14520) 707 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:27.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:27 vm01 bash[28152]: cluster 2026-03-09T16:12:26.904465+0000 mgr.y (mgr.14520) 707 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:27.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:27 vm01 bash[28152]: audit 2026-03-09T16:12:26.968452+0000 mgr.y (mgr.14520) 708 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:27.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:27 vm01 bash[28152]: audit 2026-03-09T16:12:26.968452+0000 mgr.y (mgr.14520) 708 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:27.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:27 vm01 bash[20728]: cluster 2026-03-09T16:12:26.904465+0000 mgr.y (mgr.14520) 707 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:27.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:27 vm01 bash[20728]: cluster 2026-03-09T16:12:26.904465+0000 mgr.y (mgr.14520) 707 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:27.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:27 vm01 bash[20728]: audit 2026-03-09T16:12:26.968452+0000 mgr.y (mgr.14520) 708 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:27.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:27 vm01 bash[20728]: audit 2026-03-09T16:12:26.968452+0000 mgr.y (mgr.14520) 708 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:29 vm09 bash[22983]: cluster 2026-03-09T16:12:28.905010+0000 mgr.y (mgr.14520) 709 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:29 vm09 bash[22983]: cluster 2026-03-09T16:12:28.905010+0000 mgr.y (mgr.14520) 709 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:29 vm09 bash[22983]: audit 2026-03-09T16:12:29.605143+0000 mon.a (mon.0) 3786 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:29 vm09 bash[22983]: audit 2026-03-09T16:12:29.605143+0000 mon.a (mon.0) 3786 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:29 vm01 bash[28152]: cluster 2026-03-09T16:12:28.905010+0000 mgr.y (mgr.14520) 709 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:29 vm01 bash[28152]: cluster 2026-03-09T16:12:28.905010+0000 mgr.y (mgr.14520) 709 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:29 vm01 bash[28152]: audit 2026-03-09T16:12:29.605143+0000 mon.a (mon.0) 3786 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:29 vm01 bash[28152]: audit 2026-03-09T16:12:29.605143+0000 mon.a (mon.0) 3786 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:30.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:29 vm01 bash[20728]: cluster 2026-03-09T16:12:28.905010+0000 mgr.y (mgr.14520) 709 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:30.425 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:29 vm01 bash[20728]: cluster 2026-03-09T16:12:28.905010+0000 mgr.y (mgr.14520) 709 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:30.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:29 vm01 bash[20728]: audit 2026-03-09T16:12:29.605143+0000 mon.a (mon.0) 3786 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:30.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:29 vm01 bash[20728]: audit 2026-03-09T16:12:29.605143+0000 mon.a (mon.0) 3786 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:31 vm09 bash[22983]: cluster 2026-03-09T16:12:30.905662+0000 mgr.y (mgr.14520) 710 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:31 vm09 bash[22983]: cluster 2026-03-09T16:12:30.905662+0000 mgr.y (mgr.14520) 710 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:32.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:31 vm01 bash[28152]: cluster 2026-03-09T16:12:30.905662+0000 mgr.y (mgr.14520) 710 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:32.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:31 vm01 bash[28152]: cluster 2026-03-09T16:12:30.905662+0000 mgr.y (mgr.14520) 710 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:32.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:31 vm01 bash[20728]: cluster 2026-03-09T16:12:30.905662+0000 mgr.y (mgr.14520) 710 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:32.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:31 vm01 bash[20728]: cluster 2026-03-09T16:12:30.905662+0000 mgr.y (mgr.14520) 710 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:33.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:12:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:12:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:12:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:34 vm09 bash[22983]: cluster 2026-03-09T16:12:32.905942+0000 mgr.y (mgr.14520) 711 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:34.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:34 vm09 bash[22983]: cluster 2026-03-09T16:12:32.905942+0000 mgr.y (mgr.14520) 711 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:34.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:34 vm01 bash[20728]: cluster 2026-03-09T16:12:32.905942+0000 mgr.y (mgr.14520) 711 : cluster [DBG] pgmap v1211: 228 
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:34.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:34 vm01 bash[20728]: cluster 2026-03-09T16:12:32.905942+0000 mgr.y (mgr.14520) 711 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:34.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:34 vm01 bash[28152]: cluster 2026-03-09T16:12:32.905942+0000 mgr.y (mgr.14520) 711 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:34.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:34 vm01 bash[28152]: cluster 2026-03-09T16:12:32.905942+0000 mgr.y (mgr.14520) 711 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:35 vm09 bash[22983]: cluster 2026-03-09T16:12:34.906542+0000 mgr.y (mgr.14520) 712 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:35 vm09 bash[22983]: cluster 2026-03-09T16:12:34.906542+0000 mgr.y (mgr.14520) 712 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:35.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:35 vm01 bash[28152]: cluster 2026-03-09T16:12:34.906542+0000 mgr.y (mgr.14520) 712 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:35.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:35 vm01 bash[28152]: cluster 2026-03-09T16:12:34.906542+0000 mgr.y (mgr.14520) 712 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:35.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:35 vm01 bash[20728]: cluster 2026-03-09T16:12:34.906542+0000 mgr.y (mgr.14520) 712 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:35.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:35 vm01 bash[20728]: cluster 2026-03-09T16:12:34.906542+0000 mgr.y (mgr.14520) 712 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:37.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:12:36 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:12:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:37 vm09 bash[22983]: cluster 2026-03-09T16:12:36.906826+0000 mgr.y (mgr.14520) 713 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:37 vm09 bash[22983]: cluster 2026-03-09T16:12:36.906826+0000 mgr.y (mgr.14520) 713 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:37 vm09 bash[22983]: audit 2026-03-09T16:12:36.977958+0000 mgr.y 
(mgr.14520) 714 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:37 vm09 bash[22983]: audit 2026-03-09T16:12:36.977958+0000 mgr.y (mgr.14520) 714 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:37 vm01 bash[28152]: cluster 2026-03-09T16:12:36.906826+0000 mgr.y (mgr.14520) 713 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:37 vm01 bash[28152]: cluster 2026-03-09T16:12:36.906826+0000 mgr.y (mgr.14520) 713 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:37 vm01 bash[28152]: audit 2026-03-09T16:12:36.977958+0000 mgr.y (mgr.14520) 714 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:37 vm01 bash[28152]: audit 2026-03-09T16:12:36.977958+0000 mgr.y (mgr.14520) 714 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:38.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:37 vm01 bash[20728]: cluster 2026-03-09T16:12:36.906826+0000 mgr.y (mgr.14520) 713 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:38.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:37 vm01 bash[20728]: cluster 2026-03-09T16:12:36.906826+0000 mgr.y (mgr.14520) 713 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:38.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:37 vm01 bash[20728]: audit 2026-03-09T16:12:36.977958+0000 mgr.y (mgr.14520) 714 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:38.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:37 vm01 bash[20728]: audit 2026-03-09T16:12:36.977958+0000 mgr.y (mgr.14520) 714 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:39 vm09 bash[22983]: cluster 2026-03-09T16:12:38.907276+0000 mgr.y (mgr.14520) 715 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:39 vm09 bash[22983]: cluster 2026-03-09T16:12:38.907276+0000 mgr.y (mgr.14520) 715 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:39.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:39 vm01 bash[28152]: cluster 2026-03-09T16:12:38.907276+0000 mgr.y (mgr.14520) 715 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:39.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:39 vm01 bash[28152]: cluster 2026-03-09T16:12:38.907276+0000 mgr.y (mgr.14520) 715 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:39.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:39 vm01 bash[20728]: cluster 2026-03-09T16:12:38.907276+0000 mgr.y (mgr.14520) 715 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:39.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:39 vm01 bash[20728]: cluster 2026-03-09T16:12:38.907276+0000 mgr.y (mgr.14520) 715 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:41 vm09 bash[22983]: cluster 2026-03-09T16:12:40.907742+0000 mgr.y (mgr.14520) 716 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:41 vm09 bash[22983]: cluster 2026-03-09T16:12:40.907742+0000 mgr.y (mgr.14520) 716 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:42.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:41 vm01 bash[28152]: cluster 2026-03-09T16:12:40.907742+0000 mgr.y (mgr.14520) 716 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:42.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:41 vm01 bash[28152]: cluster 2026-03-09T16:12:40.907742+0000 mgr.y (mgr.14520) 716 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:42.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:41 vm01 bash[20728]: cluster 2026-03-09T16:12:40.907742+0000 mgr.y (mgr.14520) 716 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:42.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:41 vm01 bash[20728]: cluster 2026-03-09T16:12:40.907742+0000 mgr.y (mgr.14520) 716 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:43.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:12:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:12:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:12:44.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:43 vm09 bash[22983]: cluster 2026-03-09T16:12:42.908105+0000 mgr.y (mgr.14520) 717 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:44.384 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:43 vm09 bash[22983]: cluster 2026-03-09T16:12:42.908105+0000 mgr.y (mgr.14520) 717 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:44.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:43 vm01 bash[20728]: cluster 2026-03-09T16:12:42.908105+0000 mgr.y 
(mgr.14520) 717 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:44.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:43 vm01 bash[20728]: cluster 2026-03-09T16:12:42.908105+0000 mgr.y (mgr.14520) 717 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:43 vm01 bash[28152]: cluster 2026-03-09T16:12:42.908105+0000 mgr.y (mgr.14520) 717 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:44.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:43 vm01 bash[28152]: cluster 2026-03-09T16:12:42.908105+0000 mgr.y (mgr.14520) 717 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:44 vm09 bash[22983]: audit 2026-03-09T16:12:44.372272+0000 mon.a (mon.0) 3787 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:12:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:44 vm09 bash[22983]: audit 2026-03-09T16:12:44.372272+0000 mon.a (mon.0) 3787 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:12:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:44 vm09 bash[22983]: audit 2026-03-09T16:12:44.618053+0000 mon.a (mon.0) 3788 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:44 vm09 bash[22983]: audit 2026-03-09T16:12:44.618053+0000 mon.a (mon.0) 3788 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:44 vm09 bash[22983]: audit 2026-03-09T16:12:44.715229+0000 mon.a (mon.0) 3789 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:12:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:44 vm09 bash[22983]: audit 2026-03-09T16:12:44.715229+0000 mon.a (mon.0) 3789 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:12:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:44 vm09 bash[22983]: audit 2026-03-09T16:12:44.715812+0000 mon.a (mon.0) 3790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:12:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:44 vm09 bash[22983]: audit 2026-03-09T16:12:44.715812+0000 mon.a (mon.0) 3790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:12:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:44 vm09 bash[22983]: audit 2026-03-09T16:12:44.828567+0000 mon.a (mon.0) 3791 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' 
entity='mgr.y' 2026-03-09T16:12:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:44 vm09 bash[22983]: audit 2026-03-09T16:12:44.828567+0000 mon.a (mon.0) 3791 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:12:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:44 vm01 bash[28152]: audit 2026-03-09T16:12:44.372272+0000 mon.a (mon.0) 3787 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:12:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:44 vm01 bash[28152]: audit 2026-03-09T16:12:44.372272+0000 mon.a (mon.0) 3787 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:12:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:44 vm01 bash[28152]: audit 2026-03-09T16:12:44.618053+0000 mon.a (mon.0) 3788 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:44 vm01 bash[28152]: audit 2026-03-09T16:12:44.618053+0000 mon.a (mon.0) 3788 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:44 vm01 bash[28152]: audit 2026-03-09T16:12:44.715229+0000 mon.a (mon.0) 3789 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:44 vm01 bash[28152]: audit 2026-03-09T16:12:44.715229+0000 mon.a (mon.0) 3789 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:44 vm01 bash[28152]: audit 2026-03-09T16:12:44.715812+0000 mon.a (mon.0) 3790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:44 vm01 bash[28152]: audit 2026-03-09T16:12:44.715812+0000 mon.a (mon.0) 3790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:44 vm01 bash[28152]: audit 2026-03-09T16:12:44.828567+0000 mon.a (mon.0) 3791 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:44 vm01 bash[28152]: audit 2026-03-09T16:12:44.828567+0000 mon.a (mon.0) 3791 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:44 vm01 bash[20728]: audit 2026-03-09T16:12:44.372272+0000 mon.a (mon.0) 3787 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:44 vm01 bash[20728]: audit 2026-03-09T16:12:44.372272+0000 mon.a (mon.0) 3787 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:44 vm01 bash[20728]: audit 2026-03-09T16:12:44.618053+0000 mon.a (mon.0) 3788 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:44 vm01 bash[20728]: audit 2026-03-09T16:12:44.618053+0000 mon.a (mon.0) 3788 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:44 vm01 bash[20728]: audit 2026-03-09T16:12:44.715229+0000 mon.a (mon.0) 3789 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:44 vm01 bash[20728]: audit 2026-03-09T16:12:44.715229+0000 mon.a (mon.0) 3789 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:44 vm01 bash[20728]: audit 2026-03-09T16:12:44.715812+0000 mon.a (mon.0) 3790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:44 vm01 bash[20728]: audit 2026-03-09T16:12:44.715812+0000 mon.a (mon.0) 3790 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:44 vm01 bash[20728]: audit 2026-03-09T16:12:44.828567+0000 mon.a (mon.0) 3791 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:12:45.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:44 vm01 bash[20728]: audit 2026-03-09T16:12:44.828567+0000 mon.a (mon.0) 3791 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:12:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:45 vm09 bash[22983]: cluster 2026-03-09T16:12:44.908831+0000 mgr.y (mgr.14520) 718 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:45 vm09 bash[22983]: cluster 2026-03-09T16:12:44.908831+0000 mgr.y (mgr.14520) 718 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:45 vm01 bash[28152]: cluster 2026-03-09T16:12:44.908831+0000 mgr.y (mgr.14520) 718 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:46.431 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:45 vm01 bash[28152]: cluster 2026-03-09T16:12:44.908831+0000 mgr.y (mgr.14520) 718 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:46.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:45 vm01 bash[20728]: cluster 
2026-03-09T16:12:44.908831+0000 mgr.y (mgr.14520) 718 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:46.432 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:45 vm01 bash[20728]: cluster 2026-03-09T16:12:44.908831+0000 mgr.y (mgr.14520) 718 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:47.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:12:46 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:12:48.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:47 vm09 bash[22983]: cluster 2026-03-09T16:12:46.909100+0000 mgr.y (mgr.14520) 719 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:48.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:47 vm09 bash[22983]: cluster 2026-03-09T16:12:46.909100+0000 mgr.y (mgr.14520) 719 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:48.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:47 vm09 bash[22983]: audit 2026-03-09T16:12:46.980905+0000 mgr.y (mgr.14520) 720 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:48.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:47 vm09 bash[22983]: audit 2026-03-09T16:12:46.980905+0000 mgr.y (mgr.14520) 720 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:47 vm01 bash[28152]: cluster 2026-03-09T16:12:46.909100+0000 mgr.y (mgr.14520) 719 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:47 vm01 bash[28152]: cluster 2026-03-09T16:12:46.909100+0000 mgr.y (mgr.14520) 719 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:47 vm01 bash[28152]: audit 2026-03-09T16:12:46.980905+0000 mgr.y (mgr.14520) 720 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:47 vm01 bash[28152]: audit 2026-03-09T16:12:46.980905+0000 mgr.y (mgr.14520) 720 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:48.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:47 vm01 bash[20728]: cluster 2026-03-09T16:12:46.909100+0000 mgr.y (mgr.14520) 719 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:48.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:47 vm01 bash[20728]: cluster 2026-03-09T16:12:46.909100+0000 mgr.y (mgr.14520) 719 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:48.425 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:47 vm01 bash[20728]: audit 2026-03-09T16:12:46.980905+0000 mgr.y (mgr.14520) 720 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:48.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:47 vm01 bash[20728]: audit 2026-03-09T16:12:46.980905+0000 mgr.y (mgr.14520) 720 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:50 vm09 bash[22983]: cluster 2026-03-09T16:12:48.909632+0000 mgr.y (mgr.14520) 721 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:50.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:50 vm09 bash[22983]: cluster 2026-03-09T16:12:48.909632+0000 mgr.y (mgr.14520) 721 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:50.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:50 vm01 bash[28152]: cluster 2026-03-09T16:12:48.909632+0000 mgr.y (mgr.14520) 721 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:50.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:50 vm01 bash[28152]: cluster 2026-03-09T16:12:48.909632+0000 mgr.y (mgr.14520) 721 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:50.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:50 vm01 bash[20728]: cluster 2026-03-09T16:12:48.909632+0000 mgr.y (mgr.14520) 721 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:50.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:50 vm01 bash[20728]: cluster 2026-03-09T16:12:48.909632+0000 mgr.y (mgr.14520) 721 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:52.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:52 vm09 bash[22983]: cluster 2026-03-09T16:12:50.910186+0000 mgr.y (mgr.14520) 722 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:52.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:52 vm09 bash[22983]: cluster 2026-03-09T16:12:50.910186+0000 mgr.y (mgr.14520) 722 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:52.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:52 vm01 bash[28152]: cluster 2026-03-09T16:12:50.910186+0000 mgr.y (mgr.14520) 722 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:52.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:52 vm01 bash[28152]: cluster 2026-03-09T16:12:50.910186+0000 mgr.y (mgr.14520) 722 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:52.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:52 vm01 bash[20728]: cluster 
2026-03-09T16:12:50.910186+0000 mgr.y (mgr.14520) 722 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:52.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:52 vm01 bash[20728]: cluster 2026-03-09T16:12:50.910186+0000 mgr.y (mgr.14520) 722 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:53.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:12:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:12:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:12:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:54 vm09 bash[22983]: cluster 2026-03-09T16:12:52.910498+0000 mgr.y (mgr.14520) 723 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:54 vm09 bash[22983]: cluster 2026-03-09T16:12:52.910498+0000 mgr.y (mgr.14520) 723 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:54.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:54 vm01 bash[28152]: cluster 2026-03-09T16:12:52.910498+0000 mgr.y (mgr.14520) 723 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:54.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:54 vm01 bash[28152]: cluster 2026-03-09T16:12:52.910498+0000 mgr.y (mgr.14520) 723 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:54.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:54 vm01 bash[20728]: cluster 2026-03-09T16:12:52.910498+0000 mgr.y (mgr.14520) 723 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:54.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:54 vm01 bash[20728]: cluster 2026-03-09T16:12:52.910498+0000 mgr.y (mgr.14520) 723 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:56 vm09 bash[22983]: cluster 2026-03-09T16:12:54.911325+0000 mgr.y (mgr.14520) 724 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:56 vm09 bash[22983]: cluster 2026-03-09T16:12:54.911325+0000 mgr.y (mgr.14520) 724 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:56.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:56 vm01 bash[28152]: cluster 2026-03-09T16:12:54.911325+0000 mgr.y (mgr.14520) 724 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:56.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:56 vm01 bash[28152]: cluster 2026-03-09T16:12:54.911325+0000 mgr.y (mgr.14520) 724 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T16:12:56.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:56 vm01 bash[20728]: cluster 2026-03-09T16:12:54.911325+0000 mgr.y (mgr.14520) 724 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:56.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:56 vm01 bash[20728]: cluster 2026-03-09T16:12:54.911325+0000 mgr.y (mgr.14520) 724 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:12:57.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:12:56 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:12:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:58 vm09 bash[22983]: cluster 2026-03-09T16:12:56.911709+0000 mgr.y (mgr.14520) 725 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:58 vm09 bash[22983]: cluster 2026-03-09T16:12:56.911709+0000 mgr.y (mgr.14520) 725 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:58 vm09 bash[22983]: audit 2026-03-09T16:12:56.991520+0000 mgr.y (mgr.14520) 726 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:58 vm09 bash[22983]: audit 2026-03-09T16:12:56.991520+0000 mgr.y (mgr.14520) 726 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:58 vm01 bash[28152]: cluster 2026-03-09T16:12:56.911709+0000 mgr.y (mgr.14520) 725 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:58 vm01 bash[28152]: cluster 2026-03-09T16:12:56.911709+0000 mgr.y (mgr.14520) 725 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:58 vm01 bash[28152]: audit 2026-03-09T16:12:56.991520+0000 mgr.y (mgr.14520) 726 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:58 vm01 bash[28152]: audit 2026-03-09T16:12:56.991520+0000 mgr.y (mgr.14520) 726 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:58 vm01 bash[20728]: cluster 2026-03-09T16:12:56.911709+0000 mgr.y (mgr.14520) 725 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:58 vm01 bash[20728]: cluster 2026-03-09T16:12:56.911709+0000 mgr.y (mgr.14520) 725 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:58 vm01 bash[20728]: audit 2026-03-09T16:12:56.991520+0000 mgr.y (mgr.14520) 726 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:58 vm01 bash[20728]: audit 2026-03-09T16:12:56.991520+0000 mgr.y (mgr.14520) 726 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:12:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:59 vm09 bash[22983]: cluster 2026-03-09T16:12:58.912219+0000 mgr.y (mgr.14520) 727 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:12:59 vm09 bash[22983]: cluster 2026-03-09T16:12:58.912219+0000 mgr.y (mgr.14520) 727 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:59.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:59 vm01 bash[28152]: cluster 2026-03-09T16:12:58.912219+0000 mgr.y (mgr.14520) 727 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:59.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:12:59 vm01 bash[28152]: cluster 2026-03-09T16:12:58.912219+0000 mgr.y (mgr.14520) 727 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:59.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:59 vm01 bash[20728]: cluster 2026-03-09T16:12:58.912219+0000 mgr.y (mgr.14520) 727 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:12:59.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:12:59 vm01 bash[20728]: cluster 2026-03-09T16:12:58.912219+0000 mgr.y (mgr.14520) 727 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:00 vm09 bash[22983]: audit 2026-03-09T16:12:59.624431+0000 mon.a (mon.0) 3792 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:00 vm09 bash[22983]: audit 2026-03-09T16:12:59.624431+0000 mon.a (mon.0) 3792 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:00.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:00 vm01 bash[28152]: audit 2026-03-09T16:12:59.624431+0000 mon.a (mon.0) 3792 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:00.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:00 vm01 bash[28152]: audit 2026-03-09T16:12:59.624431+0000 mon.a (mon.0) 3792 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:00.675 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:00 vm01 bash[20728]: audit 2026-03-09T16:12:59.624431+0000 mon.a (mon.0) 3792 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:00.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:00 vm01 bash[20728]: audit 2026-03-09T16:12:59.624431+0000 mon.a (mon.0) 3792 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:01.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:01 vm01 bash[28152]: cluster 2026-03-09T16:13:00.912807+0000 mgr.y (mgr.14520) 728 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:01.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:01 vm01 bash[28152]: cluster 2026-03-09T16:13:00.912807+0000 mgr.y (mgr.14520) 728 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:01.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:01 vm01 bash[20728]: cluster 2026-03-09T16:13:00.912807+0000 mgr.y (mgr.14520) 728 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:01.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:01 vm01 bash[20728]: cluster 2026-03-09T16:13:00.912807+0000 mgr.y (mgr.14520) 728 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:02.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:01 vm09 bash[22983]: cluster 2026-03-09T16:13:00.912807+0000 mgr.y (mgr.14520) 728 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:02.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:01 vm09 bash[22983]: cluster 2026-03-09T16:13:00.912807+0000 mgr.y (mgr.14520) 728 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:03.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:13:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:13:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:13:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:03 vm09 bash[22983]: cluster 2026-03-09T16:13:02.913071+0000 mgr.y (mgr.14520) 729 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:03 vm09 bash[22983]: cluster 2026-03-09T16:13:02.913071+0000 mgr.y (mgr.14520) 729 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:04.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:03 vm01 bash[20728]: cluster 2026-03-09T16:13:02.913071+0000 mgr.y (mgr.14520) 729 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:04.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:03 vm01 bash[20728]: cluster 2026-03-09T16:13:02.913071+0000 mgr.y (mgr.14520) 729 : cluster [DBG] pgmap v1226: 228 
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:04.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:03 vm01 bash[28152]: cluster 2026-03-09T16:13:02.913071+0000 mgr.y (mgr.14520) 729 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:04.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:03 vm01 bash[28152]: cluster 2026-03-09T16:13:02.913071+0000 mgr.y (mgr.14520) 729 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:05 vm09 bash[22983]: cluster 2026-03-09T16:13:04.914024+0000 mgr.y (mgr.14520) 730 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:05 vm09 bash[22983]: cluster 2026-03-09T16:13:04.914024+0000 mgr.y (mgr.14520) 730 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:06.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:05 vm01 bash[20728]: cluster 2026-03-09T16:13:04.914024+0000 mgr.y (mgr.14520) 730 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:06.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:05 vm01 bash[20728]: cluster 2026-03-09T16:13:04.914024+0000 mgr.y (mgr.14520) 730 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:06.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:05 vm01 bash[28152]: cluster 2026-03-09T16:13:04.914024+0000 mgr.y (mgr.14520) 730 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:06.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:05 vm01 bash[28152]: cluster 2026-03-09T16:13:04.914024+0000 mgr.y (mgr.14520) 730 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:07.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:13:06 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:13:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:07 vm09 bash[22983]: cluster 2026-03-09T16:13:06.914367+0000 mgr.y (mgr.14520) 731 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:07 vm09 bash[22983]: cluster 2026-03-09T16:13:06.914367+0000 mgr.y (mgr.14520) 731 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:07 vm09 bash[22983]: audit 2026-03-09T16:13:07.001554+0000 mgr.y (mgr.14520) 732 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:07 vm09 bash[22983]: audit 2026-03-09T16:13:07.001554+0000 mgr.y 
(mgr.14520) 732 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:07 vm01 bash[20728]: cluster 2026-03-09T16:13:06.914367+0000 mgr.y (mgr.14520) 731 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:07 vm01 bash[20728]: cluster 2026-03-09T16:13:06.914367+0000 mgr.y (mgr.14520) 731 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:07 vm01 bash[20728]: audit 2026-03-09T16:13:07.001554+0000 mgr.y (mgr.14520) 732 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:07 vm01 bash[20728]: audit 2026-03-09T16:13:07.001554+0000 mgr.y (mgr.14520) 732 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:08.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:07 vm01 bash[28152]: cluster 2026-03-09T16:13:06.914367+0000 mgr.y (mgr.14520) 731 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:08.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:07 vm01 bash[28152]: cluster 2026-03-09T16:13:06.914367+0000 mgr.y (mgr.14520) 731 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:08.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:07 vm01 bash[28152]: audit 2026-03-09T16:13:07.001554+0000 mgr.y (mgr.14520) 732 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:08.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:07 vm01 bash[28152]: audit 2026-03-09T16:13:07.001554+0000 mgr.y (mgr.14520) 732 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:09 vm09 bash[22983]: cluster 2026-03-09T16:13:08.914885+0000 mgr.y (mgr.14520) 733 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:09 vm09 bash[22983]: cluster 2026-03-09T16:13:08.914885+0000 mgr.y (mgr.14520) 733 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:10.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:09 vm01 bash[28152]: cluster 2026-03-09T16:13:08.914885+0000 mgr.y (mgr.14520) 733 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:10.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:09 vm01 bash[28152]: cluster 2026-03-09T16:13:08.914885+0000 mgr.y (mgr.14520) 733 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:10.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:09 vm01 bash[20728]: cluster 2026-03-09T16:13:08.914885+0000 mgr.y (mgr.14520) 733 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:10.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:09 vm01 bash[20728]: cluster 2026-03-09T16:13:08.914885+0000 mgr.y (mgr.14520) 733 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:12 vm09 bash[22983]: cluster 2026-03-09T16:13:10.915371+0000 mgr.y (mgr.14520) 734 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:12 vm09 bash[22983]: cluster 2026-03-09T16:13:10.915371+0000 mgr.y (mgr.14520) 734 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:12.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:12 vm01 bash[20728]: cluster 2026-03-09T16:13:10.915371+0000 mgr.y (mgr.14520) 734 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:12.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:12 vm01 bash[20728]: cluster 2026-03-09T16:13:10.915371+0000 mgr.y (mgr.14520) 734 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:12.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:12 vm01 bash[28152]: cluster 2026-03-09T16:13:10.915371+0000 mgr.y (mgr.14520) 734 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:12.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:12 vm01 bash[28152]: cluster 2026-03-09T16:13:10.915371+0000 mgr.y (mgr.14520) 734 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:13.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:13:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:13:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:13:14.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:14 vm01 bash[28152]: cluster 2026-03-09T16:13:12.915624+0000 mgr.y (mgr.14520) 735 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:14.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:14 vm01 bash[28152]: cluster 2026-03-09T16:13:12.915624+0000 mgr.y (mgr.14520) 735 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:14.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:14 vm01 bash[20728]: cluster 2026-03-09T16:13:12.915624+0000 mgr.y (mgr.14520) 735 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:14.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:14 vm01 bash[20728]: cluster 2026-03-09T16:13:12.915624+0000 mgr.y 
(mgr.14520) 735 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:14 vm09 bash[22983]: cluster 2026-03-09T16:13:12.915624+0000 mgr.y (mgr.14520) 735 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:14 vm09 bash[22983]: cluster 2026-03-09T16:13:12.915624+0000 mgr.y (mgr.14520) 735 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:15 vm09 bash[22983]: audit 2026-03-09T16:13:14.630791+0000 mon.a (mon.0) 3793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:15 vm09 bash[22983]: audit 2026-03-09T16:13:14.630791+0000 mon.a (mon.0) 3793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:15.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:15 vm01 bash[28152]: audit 2026-03-09T16:13:14.630791+0000 mon.a (mon.0) 3793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:15.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:15 vm01 bash[28152]: audit 2026-03-09T16:13:14.630791+0000 mon.a (mon.0) 3793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:15.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:15 vm01 bash[20728]: audit 2026-03-09T16:13:14.630791+0000 mon.a (mon.0) 3793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:15.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:15 vm01 bash[20728]: audit 2026-03-09T16:13:14.630791+0000 mon.a (mon.0) 3793 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:16 vm09 bash[22983]: cluster 2026-03-09T16:13:14.916484+0000 mgr.y (mgr.14520) 736 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:16 vm09 bash[22983]: cluster 2026-03-09T16:13:14.916484+0000 mgr.y (mgr.14520) 736 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:16.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:16 vm01 bash[28152]: cluster 2026-03-09T16:13:14.916484+0000 mgr.y (mgr.14520) 736 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:16.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:16 vm01 bash[28152]: cluster 2026-03-09T16:13:14.916484+0000 mgr.y (mgr.14520) 736 : cluster [DBG] pgmap v1232: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:16.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:16 vm01 bash[20728]: cluster 2026-03-09T16:13:14.916484+0000 mgr.y (mgr.14520) 736 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:16.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:16 vm01 bash[20728]: cluster 2026-03-09T16:13:14.916484+0000 mgr.y (mgr.14520) 736 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:17.248 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:13:16 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:13:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:17 vm09 bash[22983]: cluster 2026-03-09T16:13:16.916899+0000 mgr.y (mgr.14520) 737 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:17 vm09 bash[22983]: cluster 2026-03-09T16:13:16.916899+0000 mgr.y (mgr.14520) 737 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:17 vm09 bash[22983]: audit 2026-03-09T16:13:17.002623+0000 mgr.y (mgr.14520) 738 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:17 vm09 bash[22983]: audit 2026-03-09T16:13:17.002623+0000 mgr.y (mgr.14520) 738 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:17 vm01 bash[28152]: cluster 2026-03-09T16:13:16.916899+0000 mgr.y (mgr.14520) 737 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:17 vm01 bash[28152]: cluster 2026-03-09T16:13:16.916899+0000 mgr.y (mgr.14520) 737 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:17 vm01 bash[28152]: audit 2026-03-09T16:13:17.002623+0000 mgr.y (mgr.14520) 738 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:17 vm01 bash[28152]: audit 2026-03-09T16:13:17.002623+0000 mgr.y (mgr.14520) 738 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:17.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:17 vm01 bash[20728]: cluster 2026-03-09T16:13:16.916899+0000 mgr.y (mgr.14520) 737 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:17.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:17 vm01 bash[20728]: cluster 2026-03-09T16:13:16.916899+0000 mgr.y (mgr.14520) 737 
: cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:17.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:17 vm01 bash[20728]: audit 2026-03-09T16:13:17.002623+0000 mgr.y (mgr.14520) 738 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:17.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:17 vm01 bash[20728]: audit 2026-03-09T16:13:17.002623+0000 mgr.y (mgr.14520) 738 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:19.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:19 vm01 bash[28152]: cluster 2026-03-09T16:13:18.917506+0000 mgr.y (mgr.14520) 739 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:19.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:19 vm01 bash[28152]: cluster 2026-03-09T16:13:18.917506+0000 mgr.y (mgr.14520) 739 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:19.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:19 vm01 bash[20728]: cluster 2026-03-09T16:13:18.917506+0000 mgr.y (mgr.14520) 739 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:19.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:19 vm01 bash[20728]: cluster 2026-03-09T16:13:18.917506+0000 mgr.y (mgr.14520) 739 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:19.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:19 vm09 bash[22983]: cluster 2026-03-09T16:13:18.917506+0000 mgr.y (mgr.14520) 739 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:19.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:19 vm09 bash[22983]: cluster 2026-03-09T16:13:18.917506+0000 mgr.y (mgr.14520) 739 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:21 vm09 bash[22983]: cluster 2026-03-09T16:13:20.917986+0000 mgr.y (mgr.14520) 740 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:21 vm09 bash[22983]: cluster 2026-03-09T16:13:20.917986+0000 mgr.y (mgr.14520) 740 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:22.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:21 vm01 bash[28152]: cluster 2026-03-09T16:13:20.917986+0000 mgr.y (mgr.14520) 740 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:22.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:21 vm01 bash[28152]: cluster 2026-03-09T16:13:20.917986+0000 mgr.y (mgr.14520) 740 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:22.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:21 vm01 bash[20728]: cluster 2026-03-09T16:13:20.917986+0000 mgr.y (mgr.14520) 740 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:22.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:21 vm01 bash[20728]: cluster 2026-03-09T16:13:20.917986+0000 mgr.y (mgr.14520) 740 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:23.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:13:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:13:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:13:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:23 vm09 bash[22983]: cluster 2026-03-09T16:13:22.918258+0000 mgr.y (mgr.14520) 741 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:23 vm09 bash[22983]: cluster 2026-03-09T16:13:22.918258+0000 mgr.y (mgr.14520) 741 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:24.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:23 vm01 bash[28152]: cluster 2026-03-09T16:13:22.918258+0000 mgr.y (mgr.14520) 741 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:24.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:23 vm01 bash[28152]: cluster 2026-03-09T16:13:22.918258+0000 mgr.y (mgr.14520) 741 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:24.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:23 vm01 bash[20728]: cluster 2026-03-09T16:13:22.918258+0000 mgr.y (mgr.14520) 741 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:24.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:23 vm01 bash[20728]: cluster 2026-03-09T16:13:22.918258+0000 mgr.y (mgr.14520) 741 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:26 vm09 bash[22983]: cluster 2026-03-09T16:13:24.918891+0000 mgr.y (mgr.14520) 742 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:26 vm09 bash[22983]: cluster 2026-03-09T16:13:24.918891+0000 mgr.y (mgr.14520) 742 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:26.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:26 vm01 bash[28152]: cluster 2026-03-09T16:13:24.918891+0000 mgr.y (mgr.14520) 742 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:26.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:26 vm01 bash[28152]: cluster 2026-03-09T16:13:24.918891+0000 mgr.y (mgr.14520) 
742 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:26.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:26 vm01 bash[20728]: cluster 2026-03-09T16:13:24.918891+0000 mgr.y (mgr.14520) 742 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:26.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:26 vm01 bash[20728]: cluster 2026-03-09T16:13:24.918891+0000 mgr.y (mgr.14520) 742 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:27.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:13:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:13:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:28 vm09 bash[22983]: cluster 2026-03-09T16:13:26.919192+0000 mgr.y (mgr.14520) 743 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:28 vm09 bash[22983]: cluster 2026-03-09T16:13:26.919192+0000 mgr.y (mgr.14520) 743 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:28 vm09 bash[22983]: audit 2026-03-09T16:13:27.011916+0000 mgr.y (mgr.14520) 744 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:28 vm09 bash[22983]: audit 2026-03-09T16:13:27.011916+0000 mgr.y (mgr.14520) 744 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:28.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:28 vm01 bash[28152]: cluster 2026-03-09T16:13:26.919192+0000 mgr.y (mgr.14520) 743 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:28.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:28 vm01 bash[28152]: cluster 2026-03-09T16:13:26.919192+0000 mgr.y (mgr.14520) 743 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:28.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:28 vm01 bash[28152]: audit 2026-03-09T16:13:27.011916+0000 mgr.y (mgr.14520) 744 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:28.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:28 vm01 bash[28152]: audit 2026-03-09T16:13:27.011916+0000 mgr.y (mgr.14520) 744 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:28 vm01 bash[20728]: cluster 2026-03-09T16:13:26.919192+0000 mgr.y (mgr.14520) 743 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:28 vm01 bash[20728]: cluster 
2026-03-09T16:13:26.919192+0000 mgr.y (mgr.14520) 743 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:28 vm01 bash[20728]: audit 2026-03-09T16:13:27.011916+0000 mgr.y (mgr.14520) 744 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:28 vm01 bash[20728]: audit 2026-03-09T16:13:27.011916+0000 mgr.y (mgr.14520) 744 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:30 vm01 bash[28152]: cluster 2026-03-09T16:13:28.919726+0000 mgr.y (mgr.14520) 745 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:30 vm01 bash[28152]: cluster 2026-03-09T16:13:28.919726+0000 mgr.y (mgr.14520) 745 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:30 vm01 bash[28152]: audit 2026-03-09T16:13:29.638729+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:30 vm01 bash[28152]: audit 2026-03-09T16:13:29.638729+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:30.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:30 vm01 bash[20728]: cluster 2026-03-09T16:13:28.919726+0000 mgr.y (mgr.14520) 745 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:30.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:30 vm01 bash[20728]: cluster 2026-03-09T16:13:28.919726+0000 mgr.y (mgr.14520) 745 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:30 vm01 bash[20728]: audit 2026-03-09T16:13:29.638729+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:30.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:30 vm01 bash[20728]: audit 2026-03-09T16:13:29.638729+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:30 vm09 bash[22983]: cluster 2026-03-09T16:13:28.919726+0000 mgr.y (mgr.14520) 745 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:30 vm09 bash[22983]: cluster 2026-03-09T16:13:28.919726+0000 mgr.y (mgr.14520) 745 : cluster [DBG] pgmap v1239: 
228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:30 vm09 bash[22983]: audit 2026-03-09T16:13:29.638729+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:30 vm09 bash[22983]: audit 2026-03-09T16:13:29.638729+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:32.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:32 vm01 bash[28152]: cluster 2026-03-09T16:13:30.920282+0000 mgr.y (mgr.14520) 746 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:32.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:32 vm01 bash[28152]: cluster 2026-03-09T16:13:30.920282+0000 mgr.y (mgr.14520) 746 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:32.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:32 vm01 bash[20728]: cluster 2026-03-09T16:13:30.920282+0000 mgr.y (mgr.14520) 746 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:32.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:32 vm01 bash[20728]: cluster 2026-03-09T16:13:30.920282+0000 mgr.y (mgr.14520) 746 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:32 vm09 bash[22983]: cluster 2026-03-09T16:13:30.920282+0000 mgr.y (mgr.14520) 746 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:32 vm09 bash[22983]: cluster 2026-03-09T16:13:30.920282+0000 mgr.y (mgr.14520) 746 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:33.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:13:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:13:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:13:34.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:34 vm01 bash[28152]: cluster 2026-03-09T16:13:32.920556+0000 mgr.y (mgr.14520) 747 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:34.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:34 vm01 bash[28152]: cluster 2026-03-09T16:13:32.920556+0000 mgr.y (mgr.14520) 747 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:34.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:34 vm01 bash[20728]: cluster 2026-03-09T16:13:32.920556+0000 mgr.y (mgr.14520) 747 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:34.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
16:13:34 vm01 bash[20728]: cluster 2026-03-09T16:13:32.920556+0000 mgr.y (mgr.14520) 747 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:34 vm09 bash[22983]: cluster 2026-03-09T16:13:32.920556+0000 mgr.y (mgr.14520) 747 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:34 vm09 bash[22983]: cluster 2026-03-09T16:13:32.920556+0000 mgr.y (mgr.14520) 747 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:36.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:36 vm01 bash[28152]: cluster 2026-03-09T16:13:34.921271+0000 mgr.y (mgr.14520) 748 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:36.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:36 vm01 bash[28152]: cluster 2026-03-09T16:13:34.921271+0000 mgr.y (mgr.14520) 748 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:36.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:36 vm01 bash[20728]: cluster 2026-03-09T16:13:34.921271+0000 mgr.y (mgr.14520) 748 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:36.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:36 vm01 bash[20728]: cluster 2026-03-09T16:13:34.921271+0000 mgr.y (mgr.14520) 748 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:36 vm09 bash[22983]: cluster 2026-03-09T16:13:34.921271+0000 mgr.y (mgr.14520) 748 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:36 vm09 bash[22983]: cluster 2026-03-09T16:13:34.921271+0000 mgr.y (mgr.14520) 748 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:37.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:13:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:13:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:38 vm09 bash[22983]: cluster 2026-03-09T16:13:36.921607+0000 mgr.y (mgr.14520) 749 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:38 vm09 bash[22983]: cluster 2026-03-09T16:13:36.921607+0000 mgr.y (mgr.14520) 749 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:38 vm09 bash[22983]: audit 2026-03-09T16:13:37.017373+0000 mgr.y (mgr.14520) 750 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T16:13:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:38 vm09 bash[22983]: audit 2026-03-09T16:13:37.017373+0000 mgr.y (mgr.14520) 750 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:38.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:38 vm01 bash[28152]: cluster 2026-03-09T16:13:36.921607+0000 mgr.y (mgr.14520) 749 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:38.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:38 vm01 bash[28152]: cluster 2026-03-09T16:13:36.921607+0000 mgr.y (mgr.14520) 749 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:38.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:38 vm01 bash[28152]: audit 2026-03-09T16:13:37.017373+0000 mgr.y (mgr.14520) 750 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:38.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:38 vm01 bash[28152]: audit 2026-03-09T16:13:37.017373+0000 mgr.y (mgr.14520) 750 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:38.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:38 vm01 bash[20728]: cluster 2026-03-09T16:13:36.921607+0000 mgr.y (mgr.14520) 749 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:38.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:38 vm01 bash[20728]: cluster 2026-03-09T16:13:36.921607+0000 mgr.y (mgr.14520) 749 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:38.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:38 vm01 bash[20728]: audit 2026-03-09T16:13:37.017373+0000 mgr.y (mgr.14520) 750 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:38.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:38 vm01 bash[20728]: audit 2026-03-09T16:13:37.017373+0000 mgr.y (mgr.14520) 750 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:39 vm09 bash[22983]: cluster 2026-03-09T16:13:38.922113+0000 mgr.y (mgr.14520) 751 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:39 vm09 bash[22983]: cluster 2026-03-09T16:13:38.922113+0000 mgr.y (mgr.14520) 751 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:39.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:39 vm01 bash[28152]: cluster 2026-03-09T16:13:38.922113+0000 mgr.y (mgr.14520) 751 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:39.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:39 vm01 bash[28152]: cluster 
2026-03-09T16:13:38.922113+0000 mgr.y (mgr.14520) 751 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:39.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:39 vm01 bash[20728]: cluster 2026-03-09T16:13:38.922113+0000 mgr.y (mgr.14520) 751 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:39.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:39 vm01 bash[20728]: cluster 2026-03-09T16:13:38.922113+0000 mgr.y (mgr.14520) 751 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:41 vm09 bash[22983]: cluster 2026-03-09T16:13:40.922695+0000 mgr.y (mgr.14520) 752 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:41 vm09 bash[22983]: cluster 2026-03-09T16:13:40.922695+0000 mgr.y (mgr.14520) 752 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:42.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:41 vm01 bash[28152]: cluster 2026-03-09T16:13:40.922695+0000 mgr.y (mgr.14520) 752 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:42.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:41 vm01 bash[28152]: cluster 2026-03-09T16:13:40.922695+0000 mgr.y (mgr.14520) 752 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:42.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:41 vm01 bash[20728]: cluster 2026-03-09T16:13:40.922695+0000 mgr.y (mgr.14520) 752 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:42.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:41 vm01 bash[20728]: cluster 2026-03-09T16:13:40.922695+0000 mgr.y (mgr.14520) 752 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:43.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:13:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:13:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:13:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:43 vm09 bash[22983]: cluster 2026-03-09T16:13:42.923100+0000 mgr.y (mgr.14520) 753 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:43 vm09 bash[22983]: cluster 2026-03-09T16:13:42.923100+0000 mgr.y (mgr.14520) 753 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:44.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:43 vm01 bash[28152]: cluster 2026-03-09T16:13:42.923100+0000 mgr.y (mgr.14520) 753 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T16:13:44.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:43 vm01 bash[28152]: cluster 2026-03-09T16:13:42.923100+0000 mgr.y (mgr.14520) 753 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:44.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:43 vm01 bash[20728]: cluster 2026-03-09T16:13:42.923100+0000 mgr.y (mgr.14520) 753 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:44.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:43 vm01 bash[20728]: cluster 2026-03-09T16:13:42.923100+0000 mgr.y (mgr.14520) 753 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:45 vm09 bash[22983]: audit 2026-03-09T16:13:44.644977+0000 mon.a (mon.0) 3795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:45 vm09 bash[22983]: audit 2026-03-09T16:13:44.644977+0000 mon.a (mon.0) 3795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:45 vm09 bash[22983]: audit 2026-03-09T16:13:44.871171+0000 mon.a (mon.0) 3796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:13:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:45 vm09 bash[22983]: audit 2026-03-09T16:13:44.871171+0000 mon.a (mon.0) 3796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:13:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:45 vm01 bash[28152]: audit 2026-03-09T16:13:44.644977+0000 mon.a (mon.0) 3795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:45 vm01 bash[28152]: audit 2026-03-09T16:13:44.644977+0000 mon.a (mon.0) 3795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:45 vm01 bash[28152]: audit 2026-03-09T16:13:44.871171+0000 mon.a (mon.0) 3796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:13:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:45 vm01 bash[28152]: audit 2026-03-09T16:13:44.871171+0000 mon.a (mon.0) 3796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:13:45.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:45 vm01 bash[20728]: audit 2026-03-09T16:13:44.644977+0000 mon.a (mon.0) 3795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:45.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
09 16:13:45 vm01 bash[20728]: audit 2026-03-09T16:13:44.644977+0000 mon.a (mon.0) 3795 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:13:45.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:45 vm01 bash[20728]: audit 2026-03-09T16:13:44.871171+0000 mon.a (mon.0) 3796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:13:45.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:45 vm01 bash[20728]: audit 2026-03-09T16:13:44.871171+0000 mon.a (mon.0) 3796 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:13:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:46 vm09 bash[22983]: cluster 2026-03-09T16:13:44.923878+0000 mgr.y (mgr.14520) 754 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:46 vm09 bash[22983]: cluster 2026-03-09T16:13:44.923878+0000 mgr.y (mgr.14520) 754 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:46 vm09 bash[22983]: audit 2026-03-09T16:13:45.221059+0000 mon.a (mon.0) 3797 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:13:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:46 vm09 bash[22983]: audit 2026-03-09T16:13:45.221059+0000 mon.a (mon.0) 3797 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:13:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:46 vm09 bash[22983]: audit 2026-03-09T16:13:45.221675+0000 mon.a (mon.0) 3798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:13:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:46 vm09 bash[22983]: audit 2026-03-09T16:13:45.221675+0000 mon.a (mon.0) 3798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:13:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:46 vm09 bash[22983]: audit 2026-03-09T16:13:45.226664+0000 mon.a (mon.0) 3799 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:13:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:46 vm09 bash[22983]: audit 2026-03-09T16:13:45.226664+0000 mon.a (mon.0) 3799 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:46 vm01 bash[28152]: cluster 2026-03-09T16:13:44.923878+0000 mgr.y (mgr.14520) 754 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:46 vm01 bash[28152]: cluster 2026-03-09T16:13:44.923878+0000 mgr.y (mgr.14520) 754 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:46 vm01 bash[28152]: audit 2026-03-09T16:13:45.221059+0000 mon.a (mon.0) 3797 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:46 vm01 bash[28152]: audit 2026-03-09T16:13:45.221059+0000 mon.a (mon.0) 3797 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:46 vm01 bash[28152]: audit 2026-03-09T16:13:45.221675+0000 mon.a (mon.0) 3798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:46 vm01 bash[28152]: audit 2026-03-09T16:13:45.221675+0000 mon.a (mon.0) 3798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:46 vm01 bash[28152]: audit 2026-03-09T16:13:45.226664+0000 mon.a (mon.0) 3799 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:46 vm01 bash[28152]: audit 2026-03-09T16:13:45.226664+0000 mon.a (mon.0) 3799 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:46 vm01 bash[20728]: cluster 2026-03-09T16:13:44.923878+0000 mgr.y (mgr.14520) 754 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:46 vm01 bash[20728]: cluster 2026-03-09T16:13:44.923878+0000 mgr.y (mgr.14520) 754 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:46 vm01 bash[20728]: audit 2026-03-09T16:13:45.221059+0000 mon.a (mon.0) 3797 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:46 vm01 bash[20728]: audit 2026-03-09T16:13:45.221059+0000 mon.a (mon.0) 3797 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:46 vm01 bash[20728]: audit 2026-03-09T16:13:45.221675+0000 mon.a (mon.0) 3798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:46 vm01 bash[20728]: audit 2026-03-09T16:13:45.221675+0000 mon.a (mon.0) 3798 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:46 vm01 bash[20728]: audit 2026-03-09T16:13:45.226664+0000 mon.a (mon.0) 3799 : audit 
[INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:13:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:46 vm01 bash[20728]: audit 2026-03-09T16:13:45.226664+0000 mon.a (mon.0) 3799 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:13:47.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:13:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:13:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:48 vm09 bash[22983]: cluster 2026-03-09T16:13:46.924390+0000 mgr.y (mgr.14520) 755 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:48 vm09 bash[22983]: cluster 2026-03-09T16:13:46.924390+0000 mgr.y (mgr.14520) 755 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:48 vm09 bash[22983]: audit 2026-03-09T16:13:47.027840+0000 mgr.y (mgr.14520) 756 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:48 vm09 bash[22983]: audit 2026-03-09T16:13:47.027840+0000 mgr.y (mgr.14520) 756 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:48.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:48 vm01 bash[28152]: cluster 2026-03-09T16:13:46.924390+0000 mgr.y (mgr.14520) 755 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:48.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:48 vm01 bash[28152]: cluster 2026-03-09T16:13:46.924390+0000 mgr.y (mgr.14520) 755 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:48.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:48 vm01 bash[28152]: audit 2026-03-09T16:13:47.027840+0000 mgr.y (mgr.14520) 756 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:48.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:48 vm01 bash[28152]: audit 2026-03-09T16:13:47.027840+0000 mgr.y (mgr.14520) 756 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:48.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:48 vm01 bash[20728]: cluster 2026-03-09T16:13:46.924390+0000 mgr.y (mgr.14520) 755 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:48.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:48 vm01 bash[20728]: cluster 2026-03-09T16:13:46.924390+0000 mgr.y (mgr.14520) 755 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:48.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:48 vm01 bash[20728]: audit 2026-03-09T16:13:47.027840+0000 mgr.y (mgr.14520) 756 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-09T16:13:48.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:48 vm01 bash[20728]: audit 2026-03-09T16:13:47.027840+0000 mgr.y (mgr.14520) 756 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:49 vm09 bash[22983]: cluster 2026-03-09T16:13:48.924904+0000 mgr.y (mgr.14520) 757 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:49 vm09 bash[22983]: cluster 2026-03-09T16:13:48.924904+0000 mgr.y (mgr.14520) 757 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:49.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:49 vm01 bash[28152]: cluster 2026-03-09T16:13:48.924904+0000 mgr.y (mgr.14520) 757 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:49.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:49 vm01 bash[28152]: cluster 2026-03-09T16:13:48.924904+0000 mgr.y (mgr.14520) 757 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:49.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:49 vm01 bash[20728]: cluster 2026-03-09T16:13:48.924904+0000 mgr.y (mgr.14520) 757 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:49.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:49 vm01 bash[20728]: cluster 2026-03-09T16:13:48.924904+0000 mgr.y (mgr.14520) 757 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:52.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:51 vm09 bash[22983]: cluster 2026-03-09T16:13:50.925379+0000 mgr.y (mgr.14520) 758 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:52.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:51 vm09 bash[22983]: cluster 2026-03-09T16:13:50.925379+0000 mgr.y (mgr.14520) 758 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:52.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:51 vm01 bash[28152]: cluster 2026-03-09T16:13:50.925379+0000 mgr.y (mgr.14520) 758 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:52.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:51 vm01 bash[28152]: cluster 2026-03-09T16:13:50.925379+0000 mgr.y (mgr.14520) 758 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:52.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:51 vm01 bash[20728]: cluster 2026-03-09T16:13:50.925379+0000 mgr.y (mgr.14520) 758 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:52.425 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:51 vm01 bash[20728]: cluster 2026-03-09T16:13:50.925379+0000 mgr.y (mgr.14520) 758 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:53.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:13:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:13:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:13:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:54 vm09 bash[22983]: cluster 2026-03-09T16:13:52.925696+0000 mgr.y (mgr.14520) 759 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:54 vm09 bash[22983]: cluster 2026-03-09T16:13:52.925696+0000 mgr.y (mgr.14520) 759 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:54.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:54 vm01 bash[28152]: cluster 2026-03-09T16:13:52.925696+0000 mgr.y (mgr.14520) 759 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:54.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:54 vm01 bash[28152]: cluster 2026-03-09T16:13:52.925696+0000 mgr.y (mgr.14520) 759 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:54.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:54 vm01 bash[20728]: cluster 2026-03-09T16:13:52.925696+0000 mgr.y (mgr.14520) 759 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:54.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:54 vm01 bash[20728]: cluster 2026-03-09T16:13:52.925696+0000 mgr.y (mgr.14520) 759 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:56 vm09 bash[22983]: cluster 2026-03-09T16:13:54.926369+0000 mgr.y (mgr.14520) 760 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:56 vm09 bash[22983]: cluster 2026-03-09T16:13:54.926369+0000 mgr.y (mgr.14520) 760 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:56.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:56 vm01 bash[28152]: cluster 2026-03-09T16:13:54.926369+0000 mgr.y (mgr.14520) 760 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:56.427 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:56 vm01 bash[28152]: cluster 2026-03-09T16:13:54.926369+0000 mgr.y (mgr.14520) 760 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:56.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:56 vm01 bash[20728]: cluster 2026-03-09T16:13:54.926369+0000 mgr.y (mgr.14520) 760 : cluster [DBG] pgmap v1252: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:56.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:56 vm01 bash[20728]: cluster 2026-03-09T16:13:54.926369+0000 mgr.y (mgr.14520) 760 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:13:57.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:13:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:13:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:58 vm09 bash[22983]: cluster 2026-03-09T16:13:56.926705+0000 mgr.y (mgr.14520) 761 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:58 vm09 bash[22983]: cluster 2026-03-09T16:13:56.926705+0000 mgr.y (mgr.14520) 761 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:58 vm09 bash[22983]: audit 2026-03-09T16:13:57.038154+0000 mgr.y (mgr.14520) 762 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:13:58 vm09 bash[22983]: audit 2026-03-09T16:13:57.038154+0000 mgr.y (mgr.14520) 762 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:58 vm01 bash[28152]: cluster 2026-03-09T16:13:56.926705+0000 mgr.y (mgr.14520) 761 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:58 vm01 bash[28152]: cluster 2026-03-09T16:13:56.926705+0000 mgr.y (mgr.14520) 761 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:58 vm01 bash[28152]: audit 2026-03-09T16:13:57.038154+0000 mgr.y (mgr.14520) 762 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:13:58 vm01 bash[28152]: audit 2026-03-09T16:13:57.038154+0000 mgr.y (mgr.14520) 762 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:58 vm01 bash[20728]: cluster 2026-03-09T16:13:56.926705+0000 mgr.y (mgr.14520) 761 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:58 vm01 bash[20728]: cluster 2026-03-09T16:13:56.926705+0000 mgr.y (mgr.14520) 761 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:13:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:58 vm01 bash[20728]: audit 2026-03-09T16:13:57.038154+0000 mgr.y (mgr.14520) 762 : 
audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:13:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:13:58 vm01 bash[20728]: audit 2026-03-09T16:13:57.038154+0000 mgr.y (mgr.14520) 762 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:00 vm09 bash[22983]: cluster 2026-03-09T16:13:58.927129+0000 mgr.y (mgr.14520) 763 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:00 vm09 bash[22983]: cluster 2026-03-09T16:13:58.927129+0000 mgr.y (mgr.14520) 763 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:00 vm09 bash[22983]: audit 2026-03-09T16:13:59.652125+0000 mon.a (mon.0) 3800 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:00 vm09 bash[22983]: audit 2026-03-09T16:13:59.652125+0000 mon.a (mon.0) 3800 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:00 vm01 bash[28152]: cluster 2026-03-09T16:13:58.927129+0000 mgr.y (mgr.14520) 763 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:00 vm01 bash[28152]: cluster 2026-03-09T16:13:58.927129+0000 mgr.y (mgr.14520) 763 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:00 vm01 bash[28152]: audit 2026-03-09T16:13:59.652125+0000 mon.a (mon.0) 3800 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:00 vm01 bash[28152]: audit 2026-03-09T16:13:59.652125+0000 mon.a (mon.0) 3800 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:00.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:00 vm01 bash[20728]: cluster 2026-03-09T16:13:58.927129+0000 mgr.y (mgr.14520) 763 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:00.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:00 vm01 bash[20728]: cluster 2026-03-09T16:13:58.927129+0000 mgr.y (mgr.14520) 763 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:00.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:00 vm01 bash[20728]: audit 2026-03-09T16:13:59.652125+0000 mon.a (mon.0) 3800 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:00.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:00 vm01 bash[20728]: audit 2026-03-09T16:13:59.652125+0000 mon.a (mon.0) 3800 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:02 vm09 bash[22983]: cluster 2026-03-09T16:14:00.927654+0000 mgr.y (mgr.14520) 764 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:02 vm09 bash[22983]: cluster 2026-03-09T16:14:00.927654+0000 mgr.y (mgr.14520) 764 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:02 vm01 bash[28152]: cluster 2026-03-09T16:14:00.927654+0000 mgr.y (mgr.14520) 764 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:02.426 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:02 vm01 bash[28152]: cluster 2026-03-09T16:14:00.927654+0000 mgr.y (mgr.14520) 764 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:02 vm01 bash[20728]: cluster 2026-03-09T16:14:00.927654+0000 mgr.y (mgr.14520) 764 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:02.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:02 vm01 bash[20728]: cluster 2026-03-09T16:14:00.927654+0000 mgr.y (mgr.14520) 764 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:03.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:14:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:14:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:14:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:04 vm09 bash[22983]: cluster 2026-03-09T16:14:02.927911+0000 mgr.y (mgr.14520) 765 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:04 vm09 bash[22983]: cluster 2026-03-09T16:14:02.927911+0000 mgr.y (mgr.14520) 765 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:04.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:04 vm01 bash[28152]: cluster 2026-03-09T16:14:02.927911+0000 mgr.y (mgr.14520) 765 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:04.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:04 vm01 bash[28152]: cluster 2026-03-09T16:14:02.927911+0000 mgr.y (mgr.14520) 765 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:04.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:04 vm01 bash[20728]: cluster 
2026-03-09T16:14:02.927911+0000 mgr.y (mgr.14520) 765 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:04.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:04 vm01 bash[20728]: cluster 2026-03-09T16:14:02.927911+0000 mgr.y (mgr.14520) 765 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:06 vm09 bash[22983]: cluster 2026-03-09T16:14:04.928669+0000 mgr.y (mgr.14520) 766 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:06 vm09 bash[22983]: cluster 2026-03-09T16:14:04.928669+0000 mgr.y (mgr.14520) 766 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:06.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:06 vm01 bash[28152]: cluster 2026-03-09T16:14:04.928669+0000 mgr.y (mgr.14520) 766 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:06.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:06 vm01 bash[28152]: cluster 2026-03-09T16:14:04.928669+0000 mgr.y (mgr.14520) 766 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:06.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:06 vm01 bash[20728]: cluster 2026-03-09T16:14:04.928669+0000 mgr.y (mgr.14520) 766 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:06.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:06 vm01 bash[20728]: cluster 2026-03-09T16:14:04.928669+0000 mgr.y (mgr.14520) 766 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:07.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:14:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:14:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:08 vm09 bash[22983]: cluster 2026-03-09T16:14:06.929054+0000 mgr.y (mgr.14520) 767 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:08 vm09 bash[22983]: cluster 2026-03-09T16:14:06.929054+0000 mgr.y (mgr.14520) 767 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:08 vm09 bash[22983]: audit 2026-03-09T16:14:07.048951+0000 mgr.y (mgr.14520) 768 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:08 vm09 bash[22983]: audit 2026-03-09T16:14:07.048951+0000 mgr.y (mgr.14520) 768 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:08.425 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:08 vm01 bash[28152]: cluster 2026-03-09T16:14:06.929054+0000 mgr.y (mgr.14520) 767 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:08.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:08 vm01 bash[28152]: cluster 2026-03-09T16:14:06.929054+0000 mgr.y (mgr.14520) 767 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:08.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:08 vm01 bash[28152]: audit 2026-03-09T16:14:07.048951+0000 mgr.y (mgr.14520) 768 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:08.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:08 vm01 bash[28152]: audit 2026-03-09T16:14:07.048951+0000 mgr.y (mgr.14520) 768 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:08 vm01 bash[20728]: cluster 2026-03-09T16:14:06.929054+0000 mgr.y (mgr.14520) 767 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:08 vm01 bash[20728]: cluster 2026-03-09T16:14:06.929054+0000 mgr.y (mgr.14520) 767 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:08 vm01 bash[20728]: audit 2026-03-09T16:14:07.048951+0000 mgr.y (mgr.14520) 768 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:08 vm01 bash[20728]: audit 2026-03-09T16:14:07.048951+0000 mgr.y (mgr.14520) 768 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:09 vm09 bash[22983]: cluster 2026-03-09T16:14:08.929512+0000 mgr.y (mgr.14520) 769 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:09 vm09 bash[22983]: cluster 2026-03-09T16:14:08.929512+0000 mgr.y (mgr.14520) 769 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:09.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:09 vm01 bash[28152]: cluster 2026-03-09T16:14:08.929512+0000 mgr.y (mgr.14520) 769 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:09.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:09 vm01 bash[28152]: cluster 2026-03-09T16:14:08.929512+0000 mgr.y (mgr.14520) 769 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:09.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:09 vm01 bash[20728]: cluster 2026-03-09T16:14:08.929512+0000 
mgr.y (mgr.14520) 769 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:09.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:09 vm01 bash[20728]: cluster 2026-03-09T16:14:08.929512+0000 mgr.y (mgr.14520) 769 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:11 vm09 bash[22983]: cluster 2026-03-09T16:14:10.930194+0000 mgr.y (mgr.14520) 770 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:11 vm09 bash[22983]: cluster 2026-03-09T16:14:10.930194+0000 mgr.y (mgr.14520) 770 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:12.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:11 vm01 bash[28152]: cluster 2026-03-09T16:14:10.930194+0000 mgr.y (mgr.14520) 770 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:12.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:11 vm01 bash[28152]: cluster 2026-03-09T16:14:10.930194+0000 mgr.y (mgr.14520) 770 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:12.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:11 vm01 bash[20728]: cluster 2026-03-09T16:14:10.930194+0000 mgr.y (mgr.14520) 770 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:12.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:11 vm01 bash[20728]: cluster 2026-03-09T16:14:10.930194+0000 mgr.y (mgr.14520) 770 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:13.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:14:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:14:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:14:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:14 vm09 bash[22983]: cluster 2026-03-09T16:14:12.930606+0000 mgr.y (mgr.14520) 771 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:14 vm09 bash[22983]: cluster 2026-03-09T16:14:12.930606+0000 mgr.y (mgr.14520) 771 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:14.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:14 vm01 bash[28152]: cluster 2026-03-09T16:14:12.930606+0000 mgr.y (mgr.14520) 771 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:14.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:14 vm01 bash[28152]: cluster 2026-03-09T16:14:12.930606+0000 mgr.y (mgr.14520) 771 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:14.425 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:14 vm01 bash[20728]: cluster 2026-03-09T16:14:12.930606+0000 mgr.y (mgr.14520) 771 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:14.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:14 vm01 bash[20728]: cluster 2026-03-09T16:14:12.930606+0000 mgr.y (mgr.14520) 771 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:15 vm09 bash[22983]: audit 2026-03-09T16:14:14.658380+0000 mon.a (mon.0) 3801 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:15 vm09 bash[22983]: audit 2026-03-09T16:14:14.658380+0000 mon.a (mon.0) 3801 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:15.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:15 vm01 bash[28152]: audit 2026-03-09T16:14:14.658380+0000 mon.a (mon.0) 3801 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:15.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:15 vm01 bash[28152]: audit 2026-03-09T16:14:14.658380+0000 mon.a (mon.0) 3801 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:15.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:15 vm01 bash[20728]: audit 2026-03-09T16:14:14.658380+0000 mon.a (mon.0) 3801 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:15.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:15 vm01 bash[20728]: audit 2026-03-09T16:14:14.658380+0000 mon.a (mon.0) 3801 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:16 vm09 bash[22983]: cluster 2026-03-09T16:14:14.931252+0000 mgr.y (mgr.14520) 772 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:16 vm09 bash[22983]: cluster 2026-03-09T16:14:14.931252+0000 mgr.y (mgr.14520) 772 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:16.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:16 vm01 bash[28152]: cluster 2026-03-09T16:14:14.931252+0000 mgr.y (mgr.14520) 772 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:16.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:16 vm01 bash[28152]: cluster 2026-03-09T16:14:14.931252+0000 mgr.y (mgr.14520) 772 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:16.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:16 vm01 
bash[20728]: cluster 2026-03-09T16:14:14.931252+0000 mgr.y (mgr.14520) 772 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:16.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:16 vm01 bash[20728]: cluster 2026-03-09T16:14:14.931252+0000 mgr.y (mgr.14520) 772 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:17.327 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:14:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:14:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:17 vm09 bash[22983]: cluster 2026-03-09T16:14:16.931547+0000 mgr.y (mgr.14520) 773 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:17 vm09 bash[22983]: cluster 2026-03-09T16:14:16.931547+0000 mgr.y (mgr.14520) 773 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:17 vm09 bash[22983]: audit 2026-03-09T16:14:17.059675+0000 mgr.y (mgr.14520) 774 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:17 vm09 bash[22983]: audit 2026-03-09T16:14:17.059675+0000 mgr.y (mgr.14520) 774 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:17 vm01 bash[28152]: cluster 2026-03-09T16:14:16.931547+0000 mgr.y (mgr.14520) 773 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:17 vm01 bash[28152]: cluster 2026-03-09T16:14:16.931547+0000 mgr.y (mgr.14520) 773 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:17 vm01 bash[28152]: audit 2026-03-09T16:14:17.059675+0000 mgr.y (mgr.14520) 774 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:17 vm01 bash[28152]: audit 2026-03-09T16:14:17.059675+0000 mgr.y (mgr.14520) 774 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:17.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:17 vm01 bash[20728]: cluster 2026-03-09T16:14:16.931547+0000 mgr.y (mgr.14520) 773 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:17.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:17 vm01 bash[20728]: cluster 2026-03-09T16:14:16.931547+0000 mgr.y (mgr.14520) 773 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:17.675 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:17 vm01 bash[20728]: audit 2026-03-09T16:14:17.059675+0000 mgr.y (mgr.14520) 774 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:17.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:17 vm01 bash[20728]: audit 2026-03-09T16:14:17.059675+0000 mgr.y (mgr.14520) 774 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:19 vm09 bash[22983]: cluster 2026-03-09T16:14:18.932017+0000 mgr.y (mgr.14520) 775 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:14:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:19 vm09 bash[22983]: cluster 2026-03-09T16:14:18.932017+0000 mgr.y (mgr.14520) 775 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:14:20.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:19 vm01 bash[28152]: cluster 2026-03-09T16:14:18.932017+0000 mgr.y (mgr.14520) 775 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:14:20.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:19 vm01 bash[28152]: cluster 2026-03-09T16:14:18.932017+0000 mgr.y (mgr.14520) 775 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:14:20.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:19 vm01 bash[20728]: cluster 2026-03-09T16:14:18.932017+0000 mgr.y (mgr.14520) 775 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:14:20.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:19 vm01 bash[20728]: cluster 2026-03-09T16:14:18.932017+0000 mgr.y (mgr.14520) 775 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T16:14:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:22 vm09 bash[22983]: cluster 2026-03-09T16:14:20.932633+0000 mgr.y (mgr.14520) 776 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T16:14:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:22 vm09 bash[22983]: cluster 2026-03-09T16:14:20.932633+0000 mgr.y (mgr.14520) 776 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T16:14:22.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:22 vm01 bash[28152]: cluster 2026-03-09T16:14:20.932633+0000 mgr.y (mgr.14520) 776 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T16:14:22.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:22 vm01 bash[28152]: cluster 2026-03-09T16:14:20.932633+0000 mgr.y (mgr.14520) 776 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 10 op/s 
2026-03-09T16:14:22.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:22 vm01 bash[20728]: cluster 2026-03-09T16:14:20.932633+0000 mgr.y (mgr.14520) 776 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T16:14:22.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:22 vm01 bash[20728]: cluster 2026-03-09T16:14:20.932633+0000 mgr.y (mgr.14520) 776 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T16:14:23.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:14:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:14:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:14:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:24 vm09 bash[22983]: cluster 2026-03-09T16:14:22.933006+0000 mgr.y (mgr.14520) 777 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 6.6 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T16:14:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:24 vm09 bash[22983]: cluster 2026-03-09T16:14:22.933006+0000 mgr.y (mgr.14520) 777 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 6.6 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T16:14:24.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:24 vm01 bash[28152]: cluster 2026-03-09T16:14:22.933006+0000 mgr.y (mgr.14520) 777 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 6.6 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T16:14:24.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:24 vm01 bash[28152]: cluster 2026-03-09T16:14:22.933006+0000 mgr.y (mgr.14520) 777 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 6.6 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T16:14:24.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:24 vm01 bash[20728]: cluster 2026-03-09T16:14:22.933006+0000 mgr.y (mgr.14520) 777 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 6.6 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T16:14:24.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:24 vm01 bash[20728]: cluster 2026-03-09T16:14:22.933006+0000 mgr.y (mgr.14520) 777 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 6.6 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T16:14:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:26 vm09 bash[22983]: cluster 2026-03-09T16:14:24.933911+0000 mgr.y (mgr.14520) 778 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:26.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:26 vm09 bash[22983]: cluster 2026-03-09T16:14:24.933911+0000 mgr.y (mgr.14520) 778 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:26.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:26 vm01 bash[28152]: cluster 2026-03-09T16:14:24.933911+0000 mgr.y (mgr.14520) 778 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:26.425 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:26 vm01 bash[28152]: cluster 2026-03-09T16:14:24.933911+0000 mgr.y (mgr.14520) 778 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:26.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:26 vm01 bash[20728]: cluster 2026-03-09T16:14:24.933911+0000 mgr.y (mgr.14520) 778 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:26.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:26 vm01 bash[20728]: cluster 2026-03-09T16:14:24.933911+0000 mgr.y (mgr.14520) 778 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:27.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:14:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:14:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:28 vm09 bash[22983]: cluster 2026-03-09T16:14:26.934251+0000 mgr.y (mgr.14520) 779 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:28 vm09 bash[22983]: cluster 2026-03-09T16:14:26.934251+0000 mgr.y (mgr.14520) 779 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:28 vm09 bash[22983]: audit 2026-03-09T16:14:27.063373+0000 mgr.y (mgr.14520) 780 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:28 vm09 bash[22983]: audit 2026-03-09T16:14:27.063373+0000 mgr.y (mgr.14520) 780 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:28.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:28 vm01 bash[28152]: cluster 2026-03-09T16:14:26.934251+0000 mgr.y (mgr.14520) 779 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:28.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:28 vm01 bash[28152]: cluster 2026-03-09T16:14:26.934251+0000 mgr.y (mgr.14520) 779 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:28.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:28 vm01 bash[28152]: audit 2026-03-09T16:14:27.063373+0000 mgr.y (mgr.14520) 780 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:28.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:28 vm01 bash[28152]: audit 2026-03-09T16:14:27.063373+0000 mgr.y (mgr.14520) 780 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:28 vm01 bash[20728]: cluster 2026-03-09T16:14:26.934251+0000 mgr.y (mgr.14520) 779 : cluster [DBG] pgmap v1268: 228 
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:28 vm01 bash[20728]: cluster 2026-03-09T16:14:26.934251+0000 mgr.y (mgr.14520) 779 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:28 vm01 bash[20728]: audit 2026-03-09T16:14:27.063373+0000 mgr.y (mgr.14520) 780 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:28 vm01 bash[20728]: audit 2026-03-09T16:14:27.063373+0000 mgr.y (mgr.14520) 780 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:30 vm09 bash[22983]: cluster 2026-03-09T16:14:28.934686+0000 mgr.y (mgr.14520) 781 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:30 vm09 bash[22983]: cluster 2026-03-09T16:14:28.934686+0000 mgr.y (mgr.14520) 781 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:30 vm09 bash[22983]: audit 2026-03-09T16:14:29.664570+0000 mon.a (mon.0) 3802 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:30 vm09 bash[22983]: audit 2026-03-09T16:14:29.664570+0000 mon.a (mon.0) 3802 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:30 vm01 bash[28152]: cluster 2026-03-09T16:14:28.934686+0000 mgr.y (mgr.14520) 781 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:30 vm01 bash[28152]: cluster 2026-03-09T16:14:28.934686+0000 mgr.y (mgr.14520) 781 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:30 vm01 bash[28152]: audit 2026-03-09T16:14:29.664570+0000 mon.a (mon.0) 3802 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:30.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:30 vm01 bash[28152]: audit 2026-03-09T16:14:29.664570+0000 mon.a (mon.0) 3802 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:30.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:30 vm01 bash[20728]: cluster 2026-03-09T16:14:28.934686+0000 mgr.y (mgr.14520) 781 : cluster [DBG] pgmap v1269: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:30.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:30 vm01 bash[20728]: cluster 2026-03-09T16:14:28.934686+0000 mgr.y (mgr.14520) 781 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:14:30.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:30 vm01 bash[20728]: audit 2026-03-09T16:14:29.664570+0000 mon.a (mon.0) 3802 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:30.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:30 vm01 bash[20728]: audit 2026-03-09T16:14:29.664570+0000 mon.a (mon.0) 3802 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:32 vm09 bash[22983]: cluster 2026-03-09T16:14:30.935411+0000 mgr.y (mgr.14520) 782 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s 2026-03-09T16:14:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:32 vm09 bash[22983]: cluster 2026-03-09T16:14:30.935411+0000 mgr.y (mgr.14520) 782 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s 2026-03-09T16:14:32.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:32 vm01 bash[28152]: cluster 2026-03-09T16:14:30.935411+0000 mgr.y (mgr.14520) 782 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s 2026-03-09T16:14:32.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:32 vm01 bash[28152]: cluster 2026-03-09T16:14:30.935411+0000 mgr.y (mgr.14520) 782 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s 2026-03-09T16:14:32.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:32 vm01 bash[20728]: cluster 2026-03-09T16:14:30.935411+0000 mgr.y (mgr.14520) 782 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s 2026-03-09T16:14:32.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:32 vm01 bash[20728]: cluster 2026-03-09T16:14:30.935411+0000 mgr.y (mgr.14520) 782 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s 2026-03-09T16:14:33.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:14:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:14:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:14:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:34 vm09 bash[22983]: cluster 2026-03-09T16:14:32.935951+0000 mgr.y (mgr.14520) 783 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:14:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:34 vm09 bash[22983]: cluster 2026-03-09T16:14:32.935951+0000 mgr.y (mgr.14520) 783 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 
KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:14:34.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:34 vm01 bash[28152]: cluster 2026-03-09T16:14:32.935951+0000 mgr.y (mgr.14520) 783 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:14:34.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:34 vm01 bash[28152]: cluster 2026-03-09T16:14:32.935951+0000 mgr.y (mgr.14520) 783 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:14:34.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:34 vm01 bash[20728]: cluster 2026-03-09T16:14:32.935951+0000 mgr.y (mgr.14520) 783 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:14:34.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:34 vm01 bash[20728]: cluster 2026-03-09T16:14:32.935951+0000 mgr.y (mgr.14520) 783 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 0 B/s wr, 50 op/s 2026-03-09T16:14:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:35 vm09 bash[22983]: cluster 2026-03-09T16:14:34.937154+0000 mgr.y (mgr.14520) 784 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s 2026-03-09T16:14:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:35 vm09 bash[22983]: cluster 2026-03-09T16:14:34.937154+0000 mgr.y (mgr.14520) 784 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s 2026-03-09T16:14:35.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:35 vm01 bash[28152]: cluster 2026-03-09T16:14:34.937154+0000 mgr.y (mgr.14520) 784 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s 2026-03-09T16:14:35.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:35 vm01 bash[28152]: cluster 2026-03-09T16:14:34.937154+0000 mgr.y (mgr.14520) 784 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s 2026-03-09T16:14:35.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:35 vm01 bash[20728]: cluster 2026-03-09T16:14:34.937154+0000 mgr.y (mgr.14520) 784 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s 2026-03-09T16:14:35.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:35 vm01 bash[20728]: cluster 2026-03-09T16:14:34.937154+0000 mgr.y (mgr.14520) 784 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s 2026-03-09T16:14:37.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:14:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:14:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:38 vm09 bash[22983]: cluster 2026-03-09T16:14:36.937674+0000 mgr.y (mgr.14520) 785 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:38 vm09 bash[22983]: 
cluster 2026-03-09T16:14:36.937674+0000 mgr.y (mgr.14520) 785 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:38 vm09 bash[22983]: audit 2026-03-09T16:14:37.072182+0000 mgr.y (mgr.14520) 786 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:38 vm09 bash[22983]: audit 2026-03-09T16:14:37.072182+0000 mgr.y (mgr.14520) 786 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:38 vm01 bash[28152]: cluster 2026-03-09T16:14:36.937674+0000 mgr.y (mgr.14520) 785 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:38 vm01 bash[28152]: cluster 2026-03-09T16:14:36.937674+0000 mgr.y (mgr.14520) 785 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:38 vm01 bash[28152]: audit 2026-03-09T16:14:37.072182+0000 mgr.y (mgr.14520) 786 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:38.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:38 vm01 bash[28152]: audit 2026-03-09T16:14:37.072182+0000 mgr.y (mgr.14520) 786 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:38.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:38 vm01 bash[20728]: cluster 2026-03-09T16:14:36.937674+0000 mgr.y (mgr.14520) 785 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:38.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:38 vm01 bash[20728]: cluster 2026-03-09T16:14:36.937674+0000 mgr.y (mgr.14520) 785 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:38.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:38 vm01 bash[20728]: audit 2026-03-09T16:14:37.072182+0000 mgr.y (mgr.14520) 786 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:38.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:38 vm01 bash[20728]: audit 2026-03-09T16:14:37.072182+0000 mgr.y (mgr.14520) 786 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:40 vm09 bash[22983]: cluster 2026-03-09T16:14:38.938219+0000 mgr.y (mgr.14520) 787 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:40 vm09 bash[22983]: cluster 2026-03-09T16:14:38.938219+0000 mgr.y (mgr.14520) 787 : cluster [DBG] pgmap v1274: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:40.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:40 vm01 bash[28152]: cluster 2026-03-09T16:14:38.938219+0000 mgr.y (mgr.14520) 787 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:40.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:40 vm01 bash[28152]: cluster 2026-03-09T16:14:38.938219+0000 mgr.y (mgr.14520) 787 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:40.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:40 vm01 bash[20728]: cluster 2026-03-09T16:14:38.938219+0000 mgr.y (mgr.14520) 787 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:40.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:40 vm01 bash[20728]: cluster 2026-03-09T16:14:38.938219+0000 mgr.y (mgr.14520) 787 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:42 vm09 bash[22983]: cluster 2026-03-09T16:14:40.938813+0000 mgr.y (mgr.14520) 788 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:42 vm09 bash[22983]: cluster 2026-03-09T16:14:40.938813+0000 mgr.y (mgr.14520) 788 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:42.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:42 vm01 bash[28152]: cluster 2026-03-09T16:14:40.938813+0000 mgr.y (mgr.14520) 788 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:42.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:42 vm01 bash[28152]: cluster 2026-03-09T16:14:40.938813+0000 mgr.y (mgr.14520) 788 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:42.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:42 vm01 bash[20728]: cluster 2026-03-09T16:14:40.938813+0000 mgr.y (mgr.14520) 788 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:42.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:42 vm01 bash[20728]: cluster 2026-03-09T16:14:40.938813+0000 mgr.y (mgr.14520) 788 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:43.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:14:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:14:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:14:44.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:44 vm01 bash[28152]: cluster 2026-03-09T16:14:42.939085+0000 mgr.y (mgr.14520) 789 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:44.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:44 vm01 bash[28152]: 
cluster 2026-03-09T16:14:42.939085+0000 mgr.y (mgr.14520) 789 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:44.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:44 vm01 bash[20728]: cluster 2026-03-09T16:14:42.939085+0000 mgr.y (mgr.14520) 789 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:44.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:44 vm01 bash[20728]: cluster 2026-03-09T16:14:42.939085+0000 mgr.y (mgr.14520) 789 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:44 vm09 bash[22983]: cluster 2026-03-09T16:14:42.939085+0000 mgr.y (mgr.14520) 789 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:44.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:44 vm09 bash[22983]: cluster 2026-03-09T16:14:42.939085+0000 mgr.y (mgr.14520) 789 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:45 vm01 bash[28152]: audit 2026-03-09T16:14:44.672337+0000 mon.a (mon.0) 3803 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:45.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:45 vm01 bash[28152]: audit 2026-03-09T16:14:44.672337+0000 mon.a (mon.0) 3803 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:45.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:45 vm01 bash[20728]: audit 2026-03-09T16:14:44.672337+0000 mon.a (mon.0) 3803 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:45.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:45 vm01 bash[20728]: audit 2026-03-09T16:14:44.672337+0000 mon.a (mon.0) 3803 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:45 vm09 bash[22983]: audit 2026-03-09T16:14:44.672337+0000 mon.a (mon.0) 3803 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:45 vm09 bash[22983]: audit 2026-03-09T16:14:44.672337+0000 mon.a (mon.0) 3803 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:46 vm01 bash[28152]: cluster 2026-03-09T16:14:44.940197+0000 mgr.y (mgr.14520) 790 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:46 vm01 bash[28152]: cluster 2026-03-09T16:14:44.940197+0000 mgr.y (mgr.14520) 790 : 
cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:46 vm01 bash[28152]: audit 2026-03-09T16:14:45.266661+0000 mon.a (mon.0) 3804 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:46 vm01 bash[28152]: audit 2026-03-09T16:14:45.266661+0000 mon.a (mon.0) 3804 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:46 vm01 bash[28152]: audit 2026-03-09T16:14:45.638949+0000 mon.a (mon.0) 3805 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:46 vm01 bash[28152]: audit 2026-03-09T16:14:45.638949+0000 mon.a (mon.0) 3805 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:46 vm01 bash[28152]: audit 2026-03-09T16:14:45.639744+0000 mon.a (mon.0) 3806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:46 vm01 bash[28152]: audit 2026-03-09T16:14:45.639744+0000 mon.a (mon.0) 3806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:46 vm01 bash[28152]: audit 2026-03-09T16:14:45.674819+0000 mon.a (mon.0) 3807 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:46 vm01 bash[28152]: audit 2026-03-09T16:14:45.674819+0000 mon.a (mon.0) 3807 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:46 vm01 bash[20728]: cluster 2026-03-09T16:14:44.940197+0000 mgr.y (mgr.14520) 790 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:46 vm01 bash[20728]: cluster 2026-03-09T16:14:44.940197+0000 mgr.y (mgr.14520) 790 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:46 vm01 bash[20728]: audit 2026-03-09T16:14:45.266661+0000 mon.a (mon.0) 3804 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:46 vm01 bash[20728]: audit 2026-03-09T16:14:45.266661+0000 mon.a (mon.0) 3804 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
16:14:46 vm01 bash[20728]: audit 2026-03-09T16:14:45.638949+0000 mon.a (mon.0) 3805 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:46 vm01 bash[20728]: audit 2026-03-09T16:14:45.638949+0000 mon.a (mon.0) 3805 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:46 vm01 bash[20728]: audit 2026-03-09T16:14:45.639744+0000 mon.a (mon.0) 3806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:46 vm01 bash[20728]: audit 2026-03-09T16:14:45.639744+0000 mon.a (mon.0) 3806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:46 vm01 bash[20728]: audit 2026-03-09T16:14:45.674819+0000 mon.a (mon.0) 3807 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:14:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:46 vm01 bash[20728]: audit 2026-03-09T16:14:45.674819+0000 mon.a (mon.0) 3807 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:14:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:46 vm09 bash[22983]: cluster 2026-03-09T16:14:44.940197+0000 mgr.y (mgr.14520) 790 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:46 vm09 bash[22983]: cluster 2026-03-09T16:14:44.940197+0000 mgr.y (mgr.14520) 790 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:46 vm09 bash[22983]: audit 2026-03-09T16:14:45.266661+0000 mon.a (mon.0) 3804 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:14:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:46 vm09 bash[22983]: audit 2026-03-09T16:14:45.266661+0000 mon.a (mon.0) 3804 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:14:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:46 vm09 bash[22983]: audit 2026-03-09T16:14:45.638949+0000 mon.a (mon.0) 3805 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:14:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:46 vm09 bash[22983]: audit 2026-03-09T16:14:45.638949+0000 mon.a (mon.0) 3805 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:14:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:46 vm09 bash[22983]: audit 2026-03-09T16:14:45.639744+0000 mon.a (mon.0) 3806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-09T16:14:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:46 vm09 bash[22983]: audit 2026-03-09T16:14:45.639744+0000 mon.a (mon.0) 3806 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:14:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:46 vm09 bash[22983]: audit 2026-03-09T16:14:45.674819+0000 mon.a (mon.0) 3807 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:14:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:46 vm09 bash[22983]: audit 2026-03-09T16:14:45.674819+0000 mon.a (mon.0) 3807 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:14:47.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:14:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:14:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:48 vm01 bash[28152]: cluster 2026-03-09T16:14:46.940597+0000 mgr.y (mgr.14520) 791 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:48 vm01 bash[28152]: cluster 2026-03-09T16:14:46.940597+0000 mgr.y (mgr.14520) 791 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:48 vm01 bash[28152]: audit 2026-03-09T16:14:47.076676+0000 mgr.y (mgr.14520) 792 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:48.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:48 vm01 bash[28152]: audit 2026-03-09T16:14:47.076676+0000 mgr.y (mgr.14520) 792 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:48.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:48 vm01 bash[20728]: cluster 2026-03-09T16:14:46.940597+0000 mgr.y (mgr.14520) 791 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:48.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:48 vm01 bash[20728]: cluster 2026-03-09T16:14:46.940597+0000 mgr.y (mgr.14520) 791 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:48.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:48 vm01 bash[20728]: audit 2026-03-09T16:14:47.076676+0000 mgr.y (mgr.14520) 792 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:48.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:48 vm01 bash[20728]: audit 2026-03-09T16:14:47.076676+0000 mgr.y (mgr.14520) 792 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:48 vm09 bash[22983]: cluster 2026-03-09T16:14:46.940597+0000 mgr.y (mgr.14520) 791 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:48.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:48 vm09 bash[22983]: cluster 2026-03-09T16:14:46.940597+0000 mgr.y (mgr.14520) 791 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:48 vm09 bash[22983]: audit 2026-03-09T16:14:47.076676+0000 mgr.y (mgr.14520) 792 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:48 vm09 bash[22983]: audit 2026-03-09T16:14:47.076676+0000 mgr.y (mgr.14520) 792 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:50 vm09 bash[22983]: cluster 2026-03-09T16:14:48.941307+0000 mgr.y (mgr.14520) 793 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:50 vm09 bash[22983]: cluster 2026-03-09T16:14:48.941307+0000 mgr.y (mgr.14520) 793 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:50.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:50 vm01 bash[28152]: cluster 2026-03-09T16:14:48.941307+0000 mgr.y (mgr.14520) 793 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:50.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:50 vm01 bash[28152]: cluster 2026-03-09T16:14:48.941307+0000 mgr.y (mgr.14520) 793 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:50.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:50 vm01 bash[20728]: cluster 2026-03-09T16:14:48.941307+0000 mgr.y (mgr.14520) 793 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:50.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:50 vm01 bash[20728]: cluster 2026-03-09T16:14:48.941307+0000 mgr.y (mgr.14520) 793 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:51 vm09 bash[22983]: cluster 2026-03-09T16:14:50.942086+0000 mgr.y (mgr.14520) 794 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:51 vm09 bash[22983]: cluster 2026-03-09T16:14:50.942086+0000 mgr.y (mgr.14520) 794 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:51.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:51 vm01 bash[28152]: cluster 2026-03-09T16:14:50.942086+0000 mgr.y (mgr.14520) 794 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:51.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:51 vm01 bash[28152]: cluster 
2026-03-09T16:14:50.942086+0000 mgr.y (mgr.14520) 794 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:51.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:51 vm01 bash[20728]: cluster 2026-03-09T16:14:50.942086+0000 mgr.y (mgr.14520) 794 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:51.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:51 vm01 bash[20728]: cluster 2026-03-09T16:14:50.942086+0000 mgr.y (mgr.14520) 794 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:53.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:14:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:14:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:14:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:53 vm09 bash[22983]: cluster 2026-03-09T16:14:52.942450+0000 mgr.y (mgr.14520) 795 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:53 vm09 bash[22983]: cluster 2026-03-09T16:14:52.942450+0000 mgr.y (mgr.14520) 795 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:54.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:54 vm01 bash[28152]: cluster 2026-03-09T16:14:52.942450+0000 mgr.y (mgr.14520) 795 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:54.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:54 vm01 bash[28152]: cluster 2026-03-09T16:14:52.942450+0000 mgr.y (mgr.14520) 795 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:54.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:54 vm01 bash[20728]: cluster 2026-03-09T16:14:52.942450+0000 mgr.y (mgr.14520) 795 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:54.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:54 vm01 bash[20728]: cluster 2026-03-09T16:14:52.942450+0000 mgr.y (mgr.14520) 795 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:56 vm09 bash[22983]: cluster 2026-03-09T16:14:54.943284+0000 mgr.y (mgr.14520) 796 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:56.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:56 vm09 bash[22983]: cluster 2026-03-09T16:14:54.943284+0000 mgr.y (mgr.14520) 796 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:56.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:56 vm01 bash[28152]: cluster 2026-03-09T16:14:54.943284+0000 mgr.y (mgr.14520) 796 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T16:14:56.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:56 vm01 bash[28152]: cluster 2026-03-09T16:14:54.943284+0000 mgr.y (mgr.14520) 796 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:56.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:56 vm01 bash[20728]: cluster 2026-03-09T16:14:54.943284+0000 mgr.y (mgr.14520) 796 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:56.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:56 vm01 bash[20728]: cluster 2026-03-09T16:14:54.943284+0000 mgr.y (mgr.14520) 796 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:14:57.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:14:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:14:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:58 vm09 bash[22983]: cluster 2026-03-09T16:14:56.943621+0000 mgr.y (mgr.14520) 797 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:58 vm09 bash[22983]: cluster 2026-03-09T16:14:56.943621+0000 mgr.y (mgr.14520) 797 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:58 vm09 bash[22983]: audit 2026-03-09T16:14:57.085663+0000 mgr.y (mgr.14520) 798 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:14:58 vm09 bash[22983]: audit 2026-03-09T16:14:57.085663+0000 mgr.y (mgr.14520) 798 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:58.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:58 vm01 bash[28152]: cluster 2026-03-09T16:14:56.943621+0000 mgr.y (mgr.14520) 797 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:58 vm01 bash[28152]: cluster 2026-03-09T16:14:56.943621+0000 mgr.y (mgr.14520) 797 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:58 vm01 bash[28152]: audit 2026-03-09T16:14:57.085663+0000 mgr.y (mgr.14520) 798 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:58.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:14:58 vm01 bash[28152]: audit 2026-03-09T16:14:57.085663+0000 mgr.y (mgr.14520) 798 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:58 vm01 bash[20728]: cluster 2026-03-09T16:14:56.943621+0000 mgr.y (mgr.14520) 797 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:58 vm01 bash[20728]: cluster 2026-03-09T16:14:56.943621+0000 mgr.y (mgr.14520) 797 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:14:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:58 vm01 bash[20728]: audit 2026-03-09T16:14:57.085663+0000 mgr.y (mgr.14520) 798 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:14:58.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:14:58 vm01 bash[20728]: audit 2026-03-09T16:14:57.085663+0000 mgr.y (mgr.14520) 798 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:00 vm09 bash[22983]: cluster 2026-03-09T16:14:58.944195+0000 mgr.y (mgr.14520) 799 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:00 vm09 bash[22983]: cluster 2026-03-09T16:14:58.944195+0000 mgr.y (mgr.14520) 799 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:00 vm09 bash[22983]: audit 2026-03-09T16:14:59.679008+0000 mon.a (mon.0) 3808 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:00 vm09 bash[22983]: audit 2026-03-09T16:14:59.679008+0000 mon.a (mon.0) 3808 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:00 vm01 bash[28152]: cluster 2026-03-09T16:14:58.944195+0000 mgr.y (mgr.14520) 799 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:00 vm01 bash[28152]: cluster 2026-03-09T16:14:58.944195+0000 mgr.y (mgr.14520) 799 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:00 vm01 bash[28152]: audit 2026-03-09T16:14:59.679008+0000 mon.a (mon.0) 3808 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:00.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:00 vm01 bash[28152]: audit 2026-03-09T16:14:59.679008+0000 mon.a (mon.0) 3808 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:00.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:00 vm01 bash[20728]: cluster 2026-03-09T16:14:58.944195+0000 mgr.y (mgr.14520) 799 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:00.425 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:00 vm01 bash[20728]: cluster 2026-03-09T16:14:58.944195+0000 mgr.y (mgr.14520) 799 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:00.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:00 vm01 bash[20728]: audit 2026-03-09T16:14:59.679008+0000 mon.a (mon.0) 3808 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:00.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:00 vm01 bash[20728]: audit 2026-03-09T16:14:59.679008+0000 mon.a (mon.0) 3808 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:02 vm09 bash[22983]: cluster 2026-03-09T16:15:00.944740+0000 mgr.y (mgr.14520) 800 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:02 vm09 bash[22983]: cluster 2026-03-09T16:15:00.944740+0000 mgr.y (mgr.14520) 800 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:02.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:02 vm01 bash[28152]: cluster 2026-03-09T16:15:00.944740+0000 mgr.y (mgr.14520) 800 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:02.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:02 vm01 bash[28152]: cluster 2026-03-09T16:15:00.944740+0000 mgr.y (mgr.14520) 800 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:02.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:02 vm01 bash[20728]: cluster 2026-03-09T16:15:00.944740+0000 mgr.y (mgr.14520) 800 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:02.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:02 vm01 bash[20728]: cluster 2026-03-09T16:15:00.944740+0000 mgr.y (mgr.14520) 800 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:03.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:15:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:15:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:15:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:04 vm09 bash[22983]: cluster 2026-03-09T16:15:02.945014+0000 mgr.y (mgr.14520) 801 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:04 vm09 bash[22983]: cluster 2026-03-09T16:15:02.945014+0000 mgr.y (mgr.14520) 801 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:04.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:04 vm01 bash[28152]: cluster 2026-03-09T16:15:02.945014+0000 mgr.y (mgr.14520) 801 : cluster [DBG] pgmap v1286: 228 
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:04.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:04 vm01 bash[28152]: cluster 2026-03-09T16:15:02.945014+0000 mgr.y (mgr.14520) 801 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:04.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:04 vm01 bash[20728]: cluster 2026-03-09T16:15:02.945014+0000 mgr.y (mgr.14520) 801 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:04.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:04 vm01 bash[20728]: cluster 2026-03-09T16:15:02.945014+0000 mgr.y (mgr.14520) 801 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:06 vm09 bash[22983]: cluster 2026-03-09T16:15:04.945934+0000 mgr.y (mgr.14520) 802 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:06.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:06 vm09 bash[22983]: cluster 2026-03-09T16:15:04.945934+0000 mgr.y (mgr.14520) 802 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:06.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:06 vm01 bash[28152]: cluster 2026-03-09T16:15:04.945934+0000 mgr.y (mgr.14520) 802 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:06.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:06 vm01 bash[28152]: cluster 2026-03-09T16:15:04.945934+0000 mgr.y (mgr.14520) 802 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:06.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:06 vm01 bash[20728]: cluster 2026-03-09T16:15:04.945934+0000 mgr.y (mgr.14520) 802 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:06.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:06 vm01 bash[20728]: cluster 2026-03-09T16:15:04.945934+0000 mgr.y (mgr.14520) 802 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:07.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:15:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:15:08.130 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:15:07 vm09 bash[50619]: logger=cleanup t=2026-03-09T16:15:07.746329582Z level=info msg="Completed cleanup jobs" duration=1.597942ms 2026-03-09T16:15:08.130 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:15:07 vm09 bash[50619]: logger=plugins.update.checker t=2026-03-09T16:15:07.912542107Z level=info msg="Update check succeeded" duration=61.351684ms 2026-03-09T16:15:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:08 vm09 bash[22983]: cluster 2026-03-09T16:15:06.946384+0000 mgr.y (mgr.14520) 803 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 
0 op/s 2026-03-09T16:15:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:08 vm09 bash[22983]: cluster 2026-03-09T16:15:06.946384+0000 mgr.y (mgr.14520) 803 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:08 vm09 bash[22983]: audit 2026-03-09T16:15:07.096470+0000 mgr.y (mgr.14520) 804 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:08 vm09 bash[22983]: audit 2026-03-09T16:15:07.096470+0000 mgr.y (mgr.14520) 804 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:08.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:08 vm01 bash[28152]: cluster 2026-03-09T16:15:06.946384+0000 mgr.y (mgr.14520) 803 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:08.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:08 vm01 bash[28152]: cluster 2026-03-09T16:15:06.946384+0000 mgr.y (mgr.14520) 803 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:08.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:08 vm01 bash[28152]: audit 2026-03-09T16:15:07.096470+0000 mgr.y (mgr.14520) 804 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:08.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:08 vm01 bash[28152]: audit 2026-03-09T16:15:07.096470+0000 mgr.y (mgr.14520) 804 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:08 vm01 bash[20728]: cluster 2026-03-09T16:15:06.946384+0000 mgr.y (mgr.14520) 803 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:08 vm01 bash[20728]: cluster 2026-03-09T16:15:06.946384+0000 mgr.y (mgr.14520) 803 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:08 vm01 bash[20728]: audit 2026-03-09T16:15:07.096470+0000 mgr.y (mgr.14520) 804 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:08.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:08 vm01 bash[20728]: audit 2026-03-09T16:15:07.096470+0000 mgr.y (mgr.14520) 804 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:10.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:10 vm01 bash[28152]: cluster 2026-03-09T16:15:08.947004+0000 mgr.y (mgr.14520) 805 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:10.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:10 vm01 bash[28152]: cluster 
2026-03-09T16:15:08.947004+0000 mgr.y (mgr.14520) 805 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:10.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:10 vm01 bash[20728]: cluster 2026-03-09T16:15:08.947004+0000 mgr.y (mgr.14520) 805 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:10.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:10 vm01 bash[20728]: cluster 2026-03-09T16:15:08.947004+0000 mgr.y (mgr.14520) 805 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:10 vm09 bash[22983]: cluster 2026-03-09T16:15:08.947004+0000 mgr.y (mgr.14520) 805 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:10 vm09 bash[22983]: cluster 2026-03-09T16:15:08.947004+0000 mgr.y (mgr.14520) 805 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:11 vm09 bash[22983]: cluster 2026-03-09T16:15:10.947935+0000 mgr.y (mgr.14520) 806 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:11 vm09 bash[22983]: cluster 2026-03-09T16:15:10.947935+0000 mgr.y (mgr.14520) 806 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:11.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:11 vm01 bash[28152]: cluster 2026-03-09T16:15:10.947935+0000 mgr.y (mgr.14520) 806 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:11.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:11 vm01 bash[28152]: cluster 2026-03-09T16:15:10.947935+0000 mgr.y (mgr.14520) 806 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:11.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:11 vm01 bash[20728]: cluster 2026-03-09T16:15:10.947935+0000 mgr.y (mgr.14520) 806 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:11.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:11 vm01 bash[20728]: cluster 2026-03-09T16:15:10.947935+0000 mgr.y (mgr.14520) 806 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:13.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:15:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:15:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:15:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:14 vm09 bash[22983]: cluster 2026-03-09T16:15:12.948284+0000 mgr.y (mgr.14520) 807 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T16:15:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:14 vm09 bash[22983]: cluster 2026-03-09T16:15:12.948284+0000 mgr.y (mgr.14520) 807 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:14.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:14 vm01 bash[28152]: cluster 2026-03-09T16:15:12.948284+0000 mgr.y (mgr.14520) 807 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:14.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:14 vm01 bash[28152]: cluster 2026-03-09T16:15:12.948284+0000 mgr.y (mgr.14520) 807 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:14.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:14 vm01 bash[20728]: cluster 2026-03-09T16:15:12.948284+0000 mgr.y (mgr.14520) 807 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:14.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:14 vm01 bash[20728]: cluster 2026-03-09T16:15:12.948284+0000 mgr.y (mgr.14520) 807 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:15.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:15 vm01 bash[28152]: audit 2026-03-09T16:15:14.686081+0000 mon.a (mon.0) 3809 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:15.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:15 vm01 bash[28152]: audit 2026-03-09T16:15:14.686081+0000 mon.a (mon.0) 3809 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:15.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:15 vm01 bash[20728]: audit 2026-03-09T16:15:14.686081+0000 mon.a (mon.0) 3809 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:15.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:15 vm01 bash[20728]: audit 2026-03-09T16:15:14.686081+0000 mon.a (mon.0) 3809 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:15 vm09 bash[22983]: audit 2026-03-09T16:15:14.686081+0000 mon.a (mon.0) 3809 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:15 vm09 bash[22983]: audit 2026-03-09T16:15:14.686081+0000 mon.a (mon.0) 3809 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:16.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:16 vm01 bash[28152]: cluster 2026-03-09T16:15:14.949102+0000 mgr.y (mgr.14520) 808 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:16.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 
09 16:15:16 vm01 bash[28152]: cluster 2026-03-09T16:15:14.949102+0000 mgr.y (mgr.14520) 808 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:16.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:16 vm01 bash[20728]: cluster 2026-03-09T16:15:14.949102+0000 mgr.y (mgr.14520) 808 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:16.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:16 vm01 bash[20728]: cluster 2026-03-09T16:15:14.949102+0000 mgr.y (mgr.14520) 808 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:16 vm09 bash[22983]: cluster 2026-03-09T16:15:14.949102+0000 mgr.y (mgr.14520) 808 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:16 vm09 bash[22983]: cluster 2026-03-09T16:15:14.949102+0000 mgr.y (mgr.14520) 808 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:17.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:15:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:15:17.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:17 vm09 bash[22983]: cluster 2026-03-09T16:15:16.949578+0000 mgr.y (mgr.14520) 809 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:17.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:17 vm09 bash[22983]: cluster 2026-03-09T16:15:16.949578+0000 mgr.y (mgr.14520) 809 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:17.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:17 vm09 bash[22983]: audit 2026-03-09T16:15:17.107407+0000 mgr.y (mgr.14520) 810 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:17.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:17 vm09 bash[22983]: audit 2026-03-09T16:15:17.107407+0000 mgr.y (mgr.14520) 810 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:17 vm01 bash[28152]: cluster 2026-03-09T16:15:16.949578+0000 mgr.y (mgr.14520) 809 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:17 vm01 bash[28152]: cluster 2026-03-09T16:15:16.949578+0000 mgr.y (mgr.14520) 809 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:17 vm01 bash[28152]: audit 2026-03-09T16:15:17.107407+0000 mgr.y (mgr.14520) 810 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T16:15:17.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:17 vm01 bash[28152]: audit 2026-03-09T16:15:17.107407+0000 mgr.y (mgr.14520) 810 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:17.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:17 vm01 bash[20728]: cluster 2026-03-09T16:15:16.949578+0000 mgr.y (mgr.14520) 809 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:17.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:17 vm01 bash[20728]: cluster 2026-03-09T16:15:16.949578+0000 mgr.y (mgr.14520) 809 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:17.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:17 vm01 bash[20728]: audit 2026-03-09T16:15:17.107407+0000 mgr.y (mgr.14520) 810 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:17.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:17 vm01 bash[20728]: audit 2026-03-09T16:15:17.107407+0000 mgr.y (mgr.14520) 810 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:20 vm09 bash[22983]: cluster 2026-03-09T16:15:18.950079+0000 mgr.y (mgr.14520) 811 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:20 vm09 bash[22983]: cluster 2026-03-09T16:15:18.950079+0000 mgr.y (mgr.14520) 811 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:20.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:20 vm01 bash[20728]: cluster 2026-03-09T16:15:18.950079+0000 mgr.y (mgr.14520) 811 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:20.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:20 vm01 bash[20728]: cluster 2026-03-09T16:15:18.950079+0000 mgr.y (mgr.14520) 811 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:20.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:20 vm01 bash[28152]: cluster 2026-03-09T16:15:18.950079+0000 mgr.y (mgr.14520) 811 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:20.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:20 vm01 bash[28152]: cluster 2026-03-09T16:15:18.950079+0000 mgr.y (mgr.14520) 811 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:22 vm09 bash[22983]: cluster 2026-03-09T16:15:20.950597+0000 mgr.y (mgr.14520) 812 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:22.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:22 vm09 bash[22983]: cluster 
2026-03-09T16:15:20.950597+0000 mgr.y (mgr.14520) 812 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:22.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:22 vm01 bash[28152]: cluster 2026-03-09T16:15:20.950597+0000 mgr.y (mgr.14520) 812 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:22.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:22 vm01 bash[28152]: cluster 2026-03-09T16:15:20.950597+0000 mgr.y (mgr.14520) 812 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:22.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:22 vm01 bash[20728]: cluster 2026-03-09T16:15:20.950597+0000 mgr.y (mgr.14520) 812 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:22.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:22 vm01 bash[20728]: cluster 2026-03-09T16:15:20.950597+0000 mgr.y (mgr.14520) 812 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:23.175 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:15:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:15:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:15:24.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:24 vm01 bash[28152]: cluster 2026-03-09T16:15:22.950902+0000 mgr.y (mgr.14520) 813 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:24.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:24 vm01 bash[28152]: cluster 2026-03-09T16:15:22.950902+0000 mgr.y (mgr.14520) 813 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:24.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:24 vm01 bash[20728]: cluster 2026-03-09T16:15:22.950902+0000 mgr.y (mgr.14520) 813 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:24.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:24 vm01 bash[20728]: cluster 2026-03-09T16:15:22.950902+0000 mgr.y (mgr.14520) 813 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:24 vm09 bash[22983]: cluster 2026-03-09T16:15:22.950902+0000 mgr.y (mgr.14520) 813 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:24 vm09 bash[22983]: cluster 2026-03-09T16:15:22.950902+0000 mgr.y (mgr.14520) 813 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:26.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:26 vm01 bash[28152]: cluster 2026-03-09T16:15:24.951611+0000 mgr.y (mgr.14520) 814 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T16:15:26.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:26 vm01 bash[28152]: cluster 2026-03-09T16:15:24.951611+0000 mgr.y (mgr.14520) 814 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:26.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:26 vm01 bash[20728]: cluster 2026-03-09T16:15:24.951611+0000 mgr.y (mgr.14520) 814 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:26.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:26 vm01 bash[20728]: cluster 2026-03-09T16:15:24.951611+0000 mgr.y (mgr.14520) 814 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:26 vm09 bash[22983]: cluster 2026-03-09T16:15:24.951611+0000 mgr.y (mgr.14520) 814 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:26 vm09 bash[22983]: cluster 2026-03-09T16:15:24.951611+0000 mgr.y (mgr.14520) 814 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:27.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:15:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:15:28.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:28 vm01 bash[28152]: cluster 2026-03-09T16:15:26.951879+0000 mgr.y (mgr.14520) 815 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:28.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:28 vm01 bash[28152]: cluster 2026-03-09T16:15:26.951879+0000 mgr.y (mgr.14520) 815 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:28.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:28 vm01 bash[28152]: audit 2026-03-09T16:15:27.118149+0000 mgr.y (mgr.14520) 816 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:28.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:28 vm01 bash[28152]: audit 2026-03-09T16:15:27.118149+0000 mgr.y (mgr.14520) 816 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:28 vm01 bash[20728]: cluster 2026-03-09T16:15:26.951879+0000 mgr.y (mgr.14520) 815 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:28 vm01 bash[20728]: cluster 2026-03-09T16:15:26.951879+0000 mgr.y (mgr.14520) 815 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:28 vm01 bash[20728]: audit 2026-03-09T16:15:27.118149+0000 mgr.y (mgr.14520) 816 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-09T16:15:28.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:28 vm01 bash[20728]: audit 2026-03-09T16:15:27.118149+0000 mgr.y (mgr.14520) 816 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:28 vm09 bash[22983]: cluster 2026-03-09T16:15:26.951879+0000 mgr.y (mgr.14520) 815 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:28 vm09 bash[22983]: cluster 2026-03-09T16:15:26.951879+0000 mgr.y (mgr.14520) 815 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:28 vm09 bash[22983]: audit 2026-03-09T16:15:27.118149+0000 mgr.y (mgr.14520) 816 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:28 vm09 bash[22983]: audit 2026-03-09T16:15:27.118149+0000 mgr.y (mgr.14520) 816 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:30 vm09 bash[22983]: cluster 2026-03-09T16:15:28.952414+0000 mgr.y (mgr.14520) 817 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:30 vm09 bash[22983]: cluster 2026-03-09T16:15:28.952414+0000 mgr.y (mgr.14520) 817 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:30 vm09 bash[22983]: audit 2026-03-09T16:15:29.695333+0000 mon.a (mon.0) 3810 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:30.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:30 vm09 bash[22983]: audit 2026-03-09T16:15:29.695333+0000 mon.a (mon.0) 3810 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:30.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:30 vm01 bash[28152]: cluster 2026-03-09T16:15:28.952414+0000 mgr.y (mgr.14520) 817 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:30.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:30 vm01 bash[28152]: cluster 2026-03-09T16:15:28.952414+0000 mgr.y (mgr.14520) 817 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:30.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:30 vm01 bash[28152]: audit 2026-03-09T16:15:29.695333+0000 mon.a (mon.0) 3810 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:30.675 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:30 vm01 bash[28152]: audit 2026-03-09T16:15:29.695333+0000 mon.a (mon.0) 3810 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:30.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:30 vm01 bash[20728]: cluster 2026-03-09T16:15:28.952414+0000 mgr.y (mgr.14520) 817 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:30.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:30 vm01 bash[20728]: cluster 2026-03-09T16:15:28.952414+0000 mgr.y (mgr.14520) 817 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:30.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:30 vm01 bash[20728]: audit 2026-03-09T16:15:29.695333+0000 mon.a (mon.0) 3810 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:30.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:30 vm01 bash[20728]: audit 2026-03-09T16:15:29.695333+0000 mon.a (mon.0) 3810 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:31 vm09 bash[22983]: cluster 2026-03-09T16:15:30.953072+0000 mgr.y (mgr.14520) 818 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:31 vm09 bash[22983]: cluster 2026-03-09T16:15:30.953072+0000 mgr.y (mgr.14520) 818 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:31.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:31 vm01 bash[28152]: cluster 2026-03-09T16:15:30.953072+0000 mgr.y (mgr.14520) 818 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:31.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:31 vm01 bash[28152]: cluster 2026-03-09T16:15:30.953072+0000 mgr.y (mgr.14520) 818 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:31.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:31 vm01 bash[20728]: cluster 2026-03-09T16:15:30.953072+0000 mgr.y (mgr.14520) 818 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:31.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:31 vm01 bash[20728]: cluster 2026-03-09T16:15:30.953072+0000 mgr.y (mgr.14520) 818 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:33.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:15:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:15:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:15:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:34 vm09 bash[22983]: cluster 2026-03-09T16:15:32.953463+0000 mgr.y (mgr.14520) 819 : cluster [DBG] pgmap 
v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:34 vm09 bash[22983]: cluster 2026-03-09T16:15:32.953463+0000 mgr.y (mgr.14520) 819 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:34.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:34 vm01 bash[28152]: cluster 2026-03-09T16:15:32.953463+0000 mgr.y (mgr.14520) 819 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:34.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:34 vm01 bash[28152]: cluster 2026-03-09T16:15:32.953463+0000 mgr.y (mgr.14520) 819 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:34.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:34 vm01 bash[20728]: cluster 2026-03-09T16:15:32.953463+0000 mgr.y (mgr.14520) 819 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:34.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:34 vm01 bash[20728]: cluster 2026-03-09T16:15:32.953463+0000 mgr.y (mgr.14520) 819 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:36 vm09 bash[22983]: cluster 2026-03-09T16:15:34.954077+0000 mgr.y (mgr.14520) 820 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:36 vm09 bash[22983]: cluster 2026-03-09T16:15:34.954077+0000 mgr.y (mgr.14520) 820 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:36.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:36 vm01 bash[28152]: cluster 2026-03-09T16:15:34.954077+0000 mgr.y (mgr.14520) 820 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:36.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:36 vm01 bash[28152]: cluster 2026-03-09T16:15:34.954077+0000 mgr.y (mgr.14520) 820 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:36.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:36 vm01 bash[20728]: cluster 2026-03-09T16:15:34.954077+0000 mgr.y (mgr.14520) 820 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:36.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:36 vm01 bash[20728]: cluster 2026-03-09T16:15:34.954077+0000 mgr.y (mgr.14520) 820 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:37.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:15:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:15:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:38 vm09 bash[22983]: cluster 
2026-03-09T16:15:36.954356+0000 mgr.y (mgr.14520) 821 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:38 vm09 bash[22983]: cluster 2026-03-09T16:15:36.954356+0000 mgr.y (mgr.14520) 821 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:38 vm09 bash[22983]: audit 2026-03-09T16:15:37.127771+0000 mgr.y (mgr.14520) 822 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:38 vm09 bash[22983]: audit 2026-03-09T16:15:37.127771+0000 mgr.y (mgr.14520) 822 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:38.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:38 vm01 bash[20728]: cluster 2026-03-09T16:15:36.954356+0000 mgr.y (mgr.14520) 821 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:38.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:38 vm01 bash[20728]: cluster 2026-03-09T16:15:36.954356+0000 mgr.y (mgr.14520) 821 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:38.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:38 vm01 bash[20728]: audit 2026-03-09T16:15:37.127771+0000 mgr.y (mgr.14520) 822 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:38.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:38 vm01 bash[20728]: audit 2026-03-09T16:15:37.127771+0000 mgr.y (mgr.14520) 822 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:38 vm01 bash[28152]: cluster 2026-03-09T16:15:36.954356+0000 mgr.y (mgr.14520) 821 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:38 vm01 bash[28152]: cluster 2026-03-09T16:15:36.954356+0000 mgr.y (mgr.14520) 821 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:38 vm01 bash[28152]: audit 2026-03-09T16:15:37.127771+0000 mgr.y (mgr.14520) 822 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:38 vm01 bash[28152]: audit 2026-03-09T16:15:37.127771+0000 mgr.y (mgr.14520) 822 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:40 vm09 bash[22983]: cluster 2026-03-09T16:15:38.954835+0000 mgr.y (mgr.14520) 823 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 
KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:40 vm09 bash[22983]: cluster 2026-03-09T16:15:38.954835+0000 mgr.y (mgr.14520) 823 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:40.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:40 vm01 bash[28152]: cluster 2026-03-09T16:15:38.954835+0000 mgr.y (mgr.14520) 823 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:40.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:40 vm01 bash[28152]: cluster 2026-03-09T16:15:38.954835+0000 mgr.y (mgr.14520) 823 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:40.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:40 vm01 bash[20728]: cluster 2026-03-09T16:15:38.954835+0000 mgr.y (mgr.14520) 823 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:40.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:40 vm01 bash[20728]: cluster 2026-03-09T16:15:38.954835+0000 mgr.y (mgr.14520) 823 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:42 vm09 bash[22983]: cluster 2026-03-09T16:15:40.955327+0000 mgr.y (mgr.14520) 824 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:42 vm09 bash[22983]: cluster 2026-03-09T16:15:40.955327+0000 mgr.y (mgr.14520) 824 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:42.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:42 vm01 bash[28152]: cluster 2026-03-09T16:15:40.955327+0000 mgr.y (mgr.14520) 824 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:42.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:42 vm01 bash[28152]: cluster 2026-03-09T16:15:40.955327+0000 mgr.y (mgr.14520) 824 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:42.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:42 vm01 bash[20728]: cluster 2026-03-09T16:15:40.955327+0000 mgr.y (mgr.14520) 824 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:42.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:42 vm01 bash[20728]: cluster 2026-03-09T16:15:40.955327+0000 mgr.y (mgr.14520) 824 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:43.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:15:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:15:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:15:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:44 vm09 bash[22983]: cluster 
2026-03-09T16:15:42.955584+0000 mgr.y (mgr.14520) 825 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:44 vm09 bash[22983]: cluster 2026-03-09T16:15:42.955584+0000 mgr.y (mgr.14520) 825 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:44.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:44 vm01 bash[28152]: cluster 2026-03-09T16:15:42.955584+0000 mgr.y (mgr.14520) 825 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:44.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:44 vm01 bash[28152]: cluster 2026-03-09T16:15:42.955584+0000 mgr.y (mgr.14520) 825 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:44.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:44 vm01 bash[20728]: cluster 2026-03-09T16:15:42.955584+0000 mgr.y (mgr.14520) 825 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:44.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:44 vm01 bash[20728]: cluster 2026-03-09T16:15:42.955584+0000 mgr.y (mgr.14520) 825 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:45.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:45 vm01 bash[28152]: audit 2026-03-09T16:15:44.702505+0000 mon.a (mon.0) 3811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:45.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:45 vm01 bash[28152]: audit 2026-03-09T16:15:44.702505+0000 mon.a (mon.0) 3811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:45.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:45 vm01 bash[20728]: audit 2026-03-09T16:15:44.702505+0000 mon.a (mon.0) 3811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:45.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:45 vm01 bash[20728]: audit 2026-03-09T16:15:44.702505+0000 mon.a (mon.0) 3811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:45 vm09 bash[22983]: audit 2026-03-09T16:15:44.702505+0000 mon.a (mon.0) 3811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:45 vm09 bash[22983]: audit 2026-03-09T16:15:44.702505+0000 mon.a (mon.0) 3811 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: cluster 2026-03-09T16:15:44.956278+0000 mgr.y (mgr.14520) 826 : cluster 
[DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: cluster 2026-03-09T16:15:44.956278+0000 mgr.y (mgr.14520) 826 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 2026-03-09T16:15:45.718571+0000 mon.a (mon.0) 3812 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 2026-03-09T16:15:45.718571+0000 mon.a (mon.0) 3812 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 2026-03-09T16:15:46.075712+0000 mon.a (mon.0) 3813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 2026-03-09T16:15:46.075712+0000 mon.a (mon.0) 3813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 2026-03-09T16:15:46.075800+0000 mon.a (mon.0) 3814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 2026-03-09T16:15:46.075800+0000 mon.a (mon.0) 3814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 2026-03-09T16:15:46.076884+0000 mon.a (mon.0) 3815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 2026-03-09T16:15:46.076884+0000 mon.a (mon.0) 3815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 2026-03-09T16:15:46.077473+0000 mon.a (mon.0) 3816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 2026-03-09T16:15:46.077473+0000 mon.a (mon.0) 3816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 
2026-03-09T16:15:46.083071+0000 mon.a (mon.0) 3817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:46 vm01 bash[28152]: audit 2026-03-09T16:15:46.083071+0000 mon.a (mon.0) 3817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: cluster 2026-03-09T16:15:44.956278+0000 mgr.y (mgr.14520) 826 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: cluster 2026-03-09T16:15:44.956278+0000 mgr.y (mgr.14520) 826 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:45.718571+0000 mon.a (mon.0) 3812 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:45.718571+0000 mon.a (mon.0) 3812 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:46.075712+0000 mon.a (mon.0) 3813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:46.075712+0000 mon.a (mon.0) 3813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:46.075800+0000 mon.a (mon.0) 3814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:46.075800+0000 mon.a (mon.0) 3814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:46.076884+0000 mon.a (mon.0) 3815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:46.076884+0000 mon.a (mon.0) 3815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:46.077473+0000 mon.a (mon.0) 3816 : audit [INF] 
from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:46.077473+0000 mon.a (mon.0) 3816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:46.083071+0000 mon.a (mon.0) 3817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:15:46.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:46 vm01 bash[20728]: audit 2026-03-09T16:15:46.083071+0000 mon.a (mon.0) 3817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:15:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: cluster 2026-03-09T16:15:44.956278+0000 mgr.y (mgr.14520) 826 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: cluster 2026-03-09T16:15:44.956278+0000 mgr.y (mgr.14520) 826 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:45.718571+0000 mon.a (mon.0) 3812 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:15:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:45.718571+0000 mon.a (mon.0) 3812 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:15:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:46.075712+0000 mon.a (mon.0) 3813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:46.075712+0000 mon.a (mon.0) 3813 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:46.075800+0000 mon.a (mon.0) 3814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:46.075800+0000 mon.a (mon.0) 3814 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:15:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:46.076884+0000 mon.a (mon.0) 3815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:15:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:46.076884+0000 mon.a (mon.0) 3815 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:15:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:46.077473+0000 mon.a (mon.0) 3816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:15:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:46.077473+0000 mon.a (mon.0) 3816 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:15:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:46.083071+0000 mon.a (mon.0) 3817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:15:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:46 vm09 bash[22983]: audit 2026-03-09T16:15:46.083071+0000 mon.a (mon.0) 3817 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:15:47.382 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:15:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:15:48.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:48 vm01 bash[28152]: cluster 2026-03-09T16:15:46.956711+0000 mgr.y (mgr.14520) 827 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:48.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:48 vm01 bash[28152]: cluster 2026-03-09T16:15:46.956711+0000 mgr.y (mgr.14520) 827 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:48.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:48 vm01 bash[28152]: audit 2026-03-09T16:15:47.136596+0000 mgr.y (mgr.14520) 828 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:48.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:48 vm01 bash[28152]: audit 2026-03-09T16:15:47.136596+0000 mgr.y (mgr.14520) 828 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:48.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:48 vm01 bash[20728]: cluster 2026-03-09T16:15:46.956711+0000 mgr.y (mgr.14520) 827 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:48.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:48 vm01 bash[20728]: cluster 2026-03-09T16:15:46.956711+0000 mgr.y (mgr.14520) 827 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:48.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:48 vm01 bash[20728]: audit 2026-03-09T16:15:47.136596+0000 mgr.y (mgr.14520) 828 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T16:15:48.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:48 vm01 bash[20728]: audit 2026-03-09T16:15:47.136596+0000 mgr.y (mgr.14520) 828 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:48 vm09 bash[22983]: cluster 2026-03-09T16:15:46.956711+0000 mgr.y (mgr.14520) 827 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:48 vm09 bash[22983]: cluster 2026-03-09T16:15:46.956711+0000 mgr.y (mgr.14520) 827 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:48 vm09 bash[22983]: audit 2026-03-09T16:15:47.136596+0000 mgr.y (mgr.14520) 828 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:48 vm09 bash[22983]: audit 2026-03-09T16:15:47.136596+0000 mgr.y (mgr.14520) 828 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:50.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:50 vm01 bash[28152]: cluster 2026-03-09T16:15:48.957249+0000 mgr.y (mgr.14520) 829 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:50.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:50 vm01 bash[28152]: cluster 2026-03-09T16:15:48.957249+0000 mgr.y (mgr.14520) 829 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:50.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:50 vm01 bash[20728]: cluster 2026-03-09T16:15:48.957249+0000 mgr.y (mgr.14520) 829 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:50.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:50 vm01 bash[20728]: cluster 2026-03-09T16:15:48.957249+0000 mgr.y (mgr.14520) 829 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:50 vm09 bash[22983]: cluster 2026-03-09T16:15:48.957249+0000 mgr.y (mgr.14520) 829 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:50 vm09 bash[22983]: cluster 2026-03-09T16:15:48.957249+0000 mgr.y (mgr.14520) 829 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:52.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:52 vm01 bash[28152]: cluster 2026-03-09T16:15:50.957887+0000 mgr.y (mgr.14520) 830 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:52.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:52 vm01 bash[28152]: cluster 
2026-03-09T16:15:50.957887+0000 mgr.y (mgr.14520) 830 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:52.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:52 vm01 bash[20728]: cluster 2026-03-09T16:15:50.957887+0000 mgr.y (mgr.14520) 830 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:52.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:52 vm01 bash[20728]: cluster 2026-03-09T16:15:50.957887+0000 mgr.y (mgr.14520) 830 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:52 vm09 bash[22983]: cluster 2026-03-09T16:15:50.957887+0000 mgr.y (mgr.14520) 830 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:52 vm09 bash[22983]: cluster 2026-03-09T16:15:50.957887+0000 mgr.y (mgr.14520) 830 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:53.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:15:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:15:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:15:54.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:54 vm01 bash[28152]: cluster 2026-03-09T16:15:52.958179+0000 mgr.y (mgr.14520) 831 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:54.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:54 vm01 bash[28152]: cluster 2026-03-09T16:15:52.958179+0000 mgr.y (mgr.14520) 831 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:54.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:54 vm01 bash[20728]: cluster 2026-03-09T16:15:52.958179+0000 mgr.y (mgr.14520) 831 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:54.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:54 vm01 bash[20728]: cluster 2026-03-09T16:15:52.958179+0000 mgr.y (mgr.14520) 831 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:54 vm09 bash[22983]: cluster 2026-03-09T16:15:52.958179+0000 mgr.y (mgr.14520) 831 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:54 vm09 bash[22983]: cluster 2026-03-09T16:15:52.958179+0000 mgr.y (mgr.14520) 831 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:55.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:55 vm09 bash[22983]: cluster 2026-03-09T16:15:54.958784+0000 mgr.y (mgr.14520) 832 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T16:15:55.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:55 vm09 bash[22983]: cluster 2026-03-09T16:15:54.958784+0000 mgr.y (mgr.14520) 832 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:55.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:55 vm01 bash[28152]: cluster 2026-03-09T16:15:54.958784+0000 mgr.y (mgr.14520) 832 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:55.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:55 vm01 bash[28152]: cluster 2026-03-09T16:15:54.958784+0000 mgr.y (mgr.14520) 832 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:55.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:55 vm01 bash[20728]: cluster 2026-03-09T16:15:54.958784+0000 mgr.y (mgr.14520) 832 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:55.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:55 vm01 bash[20728]: cluster 2026-03-09T16:15:54.958784+0000 mgr.y (mgr.14520) 832 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:15:57.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:15:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:15:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:58 vm09 bash[22983]: cluster 2026-03-09T16:15:56.959058+0000 mgr.y (mgr.14520) 833 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:58 vm09 bash[22983]: cluster 2026-03-09T16:15:56.959058+0000 mgr.y (mgr.14520) 833 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:58 vm09 bash[22983]: audit 2026-03-09T16:15:57.144516+0000 mgr.y (mgr.14520) 834 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:58.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:15:58 vm09 bash[22983]: audit 2026-03-09T16:15:57.144516+0000 mgr.y (mgr.14520) 834 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:58.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:58 vm01 bash[28152]: cluster 2026-03-09T16:15:56.959058+0000 mgr.y (mgr.14520) 833 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:58.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:58 vm01 bash[28152]: cluster 2026-03-09T16:15:56.959058+0000 mgr.y (mgr.14520) 833 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:58.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:58 vm01 bash[28152]: audit 2026-03-09T16:15:57.144516+0000 mgr.y (mgr.14520) 834 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-09T16:15:58.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:15:58 vm01 bash[28152]: audit 2026-03-09T16:15:57.144516+0000 mgr.y (mgr.14520) 834 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:58.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:58 vm01 bash[20728]: cluster 2026-03-09T16:15:56.959058+0000 mgr.y (mgr.14520) 833 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:58.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:58 vm01 bash[20728]: cluster 2026-03-09T16:15:56.959058+0000 mgr.y (mgr.14520) 833 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:15:58.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:58 vm01 bash[20728]: audit 2026-03-09T16:15:57.144516+0000 mgr.y (mgr.14520) 834 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:15:58.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:15:58 vm01 bash[20728]: audit 2026-03-09T16:15:57.144516+0000 mgr.y (mgr.14520) 834 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:00 vm09 bash[22983]: cluster 2026-03-09T16:15:58.959500+0000 mgr.y (mgr.14520) 835 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:00 vm09 bash[22983]: cluster 2026-03-09T16:15:58.959500+0000 mgr.y (mgr.14520) 835 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:00 vm09 bash[22983]: audit 2026-03-09T16:15:59.709009+0000 mon.a (mon.0) 3818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:00.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:00 vm09 bash[22983]: audit 2026-03-09T16:15:59.709009+0000 mon.a (mon.0) 3818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:00.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:00 vm01 bash[28152]: cluster 2026-03-09T16:15:58.959500+0000 mgr.y (mgr.14520) 835 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:00.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:00 vm01 bash[28152]: cluster 2026-03-09T16:15:58.959500+0000 mgr.y (mgr.14520) 835 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:00.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:00 vm01 bash[28152]: audit 2026-03-09T16:15:59.709009+0000 mon.a (mon.0) 3818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:00.424 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:00 vm01 bash[28152]: audit 2026-03-09T16:15:59.709009+0000 mon.a (mon.0) 3818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:00.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:00 vm01 bash[20728]: cluster 2026-03-09T16:15:58.959500+0000 mgr.y (mgr.14520) 835 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:00.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:00 vm01 bash[20728]: cluster 2026-03-09T16:15:58.959500+0000 mgr.y (mgr.14520) 835 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:00.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:00 vm01 bash[20728]: audit 2026-03-09T16:15:59.709009+0000 mon.a (mon.0) 3818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:00.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:00 vm01 bash[20728]: audit 2026-03-09T16:15:59.709009+0000 mon.a (mon.0) 3818 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:02 vm09 bash[22983]: cluster 2026-03-09T16:16:00.960021+0000 mgr.y (mgr.14520) 836 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:02 vm09 bash[22983]: cluster 2026-03-09T16:16:00.960021+0000 mgr.y (mgr.14520) 836 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:02.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:02 vm01 bash[28152]: cluster 2026-03-09T16:16:00.960021+0000 mgr.y (mgr.14520) 836 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:02.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:02 vm01 bash[28152]: cluster 2026-03-09T16:16:00.960021+0000 mgr.y (mgr.14520) 836 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:02.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:02 vm01 bash[20728]: cluster 2026-03-09T16:16:00.960021+0000 mgr.y (mgr.14520) 836 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:02.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:02 vm01 bash[20728]: cluster 2026-03-09T16:16:00.960021+0000 mgr.y (mgr.14520) 836 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:03.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:16:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:16:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:16:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:04 vm09 bash[22983]: cluster 2026-03-09T16:16:02.960354+0000 mgr.y (mgr.14520) 837 : cluster [DBG] pgmap 
v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:04 vm09 bash[22983]: cluster 2026-03-09T16:16:02.960354+0000 mgr.y (mgr.14520) 837 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:04.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:04 vm01 bash[28152]: cluster 2026-03-09T16:16:02.960354+0000 mgr.y (mgr.14520) 837 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:04.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:04 vm01 bash[28152]: cluster 2026-03-09T16:16:02.960354+0000 mgr.y (mgr.14520) 837 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:04.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:04 vm01 bash[20728]: cluster 2026-03-09T16:16:02.960354+0000 mgr.y (mgr.14520) 837 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:04.425 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:04 vm01 bash[20728]: cluster 2026-03-09T16:16:02.960354+0000 mgr.y (mgr.14520) 837 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:05 vm09 bash[22983]: cluster 2026-03-09T16:16:04.961062+0000 mgr.y (mgr.14520) 838 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:05 vm09 bash[22983]: cluster 2026-03-09T16:16:04.961062+0000 mgr.y (mgr.14520) 838 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:05.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:05 vm01 bash[28152]: cluster 2026-03-09T16:16:04.961062+0000 mgr.y (mgr.14520) 838 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:05.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:05 vm01 bash[28152]: cluster 2026-03-09T16:16:04.961062+0000 mgr.y (mgr.14520) 838 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:05.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:05 vm01 bash[20728]: cluster 2026-03-09T16:16:04.961062+0000 mgr.y (mgr.14520) 838 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:05.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:05 vm01 bash[20728]: cluster 2026-03-09T16:16:04.961062+0000 mgr.y (mgr.14520) 838 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:07.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:16:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:16:08.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:08 vm01 bash[28152]: cluster 
2026-03-09T16:16:06.961610+0000 mgr.y (mgr.14520) 839 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:08.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:08 vm01 bash[28152]: cluster 2026-03-09T16:16:06.961610+0000 mgr.y (mgr.14520) 839 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:08.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:08 vm01 bash[28152]: audit 2026-03-09T16:16:07.152644+0000 mgr.y (mgr.14520) 840 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:08.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:08 vm01 bash[28152]: audit 2026-03-09T16:16:07.152644+0000 mgr.y (mgr.14520) 840 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:08.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:08 vm01 bash[20728]: cluster 2026-03-09T16:16:06.961610+0000 mgr.y (mgr.14520) 839 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:08.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:08 vm01 bash[20728]: cluster 2026-03-09T16:16:06.961610+0000 mgr.y (mgr.14520) 839 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:08.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:08 vm01 bash[20728]: audit 2026-03-09T16:16:07.152644+0000 mgr.y (mgr.14520) 840 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:08.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:08 vm01 bash[20728]: audit 2026-03-09T16:16:07.152644+0000 mgr.y (mgr.14520) 840 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:08 vm09 bash[22983]: cluster 2026-03-09T16:16:06.961610+0000 mgr.y (mgr.14520) 839 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:08 vm09 bash[22983]: cluster 2026-03-09T16:16:06.961610+0000 mgr.y (mgr.14520) 839 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:08 vm09 bash[22983]: audit 2026-03-09T16:16:07.152644+0000 mgr.y (mgr.14520) 840 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:08 vm09 bash[22983]: audit 2026-03-09T16:16:07.152644+0000 mgr.y (mgr.14520) 840 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:10 vm09 bash[22983]: cluster 2026-03-09T16:16:08.962125+0000 mgr.y (mgr.14520) 841 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 
KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:10 vm09 bash[22983]: cluster 2026-03-09T16:16:08.962125+0000 mgr.y (mgr.14520) 841 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:10.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:10 vm01 bash[28152]: cluster 2026-03-09T16:16:08.962125+0000 mgr.y (mgr.14520) 841 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:10.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:10 vm01 bash[28152]: cluster 2026-03-09T16:16:08.962125+0000 mgr.y (mgr.14520) 841 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:10.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:10 vm01 bash[20728]: cluster 2026-03-09T16:16:08.962125+0000 mgr.y (mgr.14520) 841 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:10.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:10 vm01 bash[20728]: cluster 2026-03-09T16:16:08.962125+0000 mgr.y (mgr.14520) 841 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:11 vm09 bash[22983]: cluster 2026-03-09T16:16:10.962818+0000 mgr.y (mgr.14520) 842 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:11 vm09 bash[22983]: cluster 2026-03-09T16:16:10.962818+0000 mgr.y (mgr.14520) 842 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:11.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:11 vm01 bash[28152]: cluster 2026-03-09T16:16:10.962818+0000 mgr.y (mgr.14520) 842 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:11.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:11 vm01 bash[28152]: cluster 2026-03-09T16:16:10.962818+0000 mgr.y (mgr.14520) 842 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:11.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:11 vm01 bash[20728]: cluster 2026-03-09T16:16:10.962818+0000 mgr.y (mgr.14520) 842 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:11.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:11 vm01 bash[20728]: cluster 2026-03-09T16:16:10.962818+0000 mgr.y (mgr.14520) 842 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:13.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:16:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:16:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:16:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:14 vm09 bash[22983]: cluster 
2026-03-09T16:16:12.963088+0000 mgr.y (mgr.14520) 843 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:14 vm09 bash[22983]: cluster 2026-03-09T16:16:12.963088+0000 mgr.y (mgr.14520) 843 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:14.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:14 vm01 bash[28152]: cluster 2026-03-09T16:16:12.963088+0000 mgr.y (mgr.14520) 843 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:14.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:14 vm01 bash[28152]: cluster 2026-03-09T16:16:12.963088+0000 mgr.y (mgr.14520) 843 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:14.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:14 vm01 bash[20728]: cluster 2026-03-09T16:16:12.963088+0000 mgr.y (mgr.14520) 843 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:14.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:14 vm01 bash[20728]: cluster 2026-03-09T16:16:12.963088+0000 mgr.y (mgr.14520) 843 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:15 vm09 bash[22983]: audit 2026-03-09T16:16:14.715413+0000 mon.a (mon.0) 3819 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:15 vm09 bash[22983]: audit 2026-03-09T16:16:14.715413+0000 mon.a (mon.0) 3819 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:15.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:15 vm01 bash[28152]: audit 2026-03-09T16:16:14.715413+0000 mon.a (mon.0) 3819 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:15.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:15 vm01 bash[28152]: audit 2026-03-09T16:16:14.715413+0000 mon.a (mon.0) 3819 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:15.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:15 vm01 bash[20728]: audit 2026-03-09T16:16:14.715413+0000 mon.a (mon.0) 3819 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:15.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:15 vm01 bash[20728]: audit 2026-03-09T16:16:14.715413+0000 mon.a (mon.0) 3819 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:16 vm09 bash[22983]: cluster 2026-03-09T16:16:14.963683+0000 mgr.y (mgr.14520) 844 : cluster 
[DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:16 vm09 bash[22983]: cluster 2026-03-09T16:16:14.963683+0000 mgr.y (mgr.14520) 844 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:16.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:16 vm01 bash[28152]: cluster 2026-03-09T16:16:14.963683+0000 mgr.y (mgr.14520) 844 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:16.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:16 vm01 bash[28152]: cluster 2026-03-09T16:16:14.963683+0000 mgr.y (mgr.14520) 844 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:16.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:16 vm01 bash[20728]: cluster 2026-03-09T16:16:14.963683+0000 mgr.y (mgr.14520) 844 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:16.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:16 vm01 bash[20728]: cluster 2026-03-09T16:16:14.963683+0000 mgr.y (mgr.14520) 844 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:17.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:16:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:16:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:18 vm09 bash[22983]: cluster 2026-03-09T16:16:16.963988+0000 mgr.y (mgr.14520) 845 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:18 vm09 bash[22983]: cluster 2026-03-09T16:16:16.963988+0000 mgr.y (mgr.14520) 845 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:18 vm09 bash[22983]: audit 2026-03-09T16:16:17.160611+0000 mgr.y (mgr.14520) 846 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:18 vm09 bash[22983]: audit 2026-03-09T16:16:17.160611+0000 mgr.y (mgr.14520) 846 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:18.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:18 vm01 bash[28152]: cluster 2026-03-09T16:16:16.963988+0000 mgr.y (mgr.14520) 845 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:18.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:18 vm01 bash[28152]: cluster 2026-03-09T16:16:16.963988+0000 mgr.y (mgr.14520) 845 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:18.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:18 vm01 bash[28152]: audit 
2026-03-09T16:16:17.160611+0000 mgr.y (mgr.14520) 846 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:18.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:18 vm01 bash[28152]: audit 2026-03-09T16:16:17.160611+0000 mgr.y (mgr.14520) 846 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:18.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:18 vm01 bash[20728]: cluster 2026-03-09T16:16:16.963988+0000 mgr.y (mgr.14520) 845 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:18.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:18 vm01 bash[20728]: cluster 2026-03-09T16:16:16.963988+0000 mgr.y (mgr.14520) 845 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:18.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:18 vm01 bash[20728]: audit 2026-03-09T16:16:17.160611+0000 mgr.y (mgr.14520) 846 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:18.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:18 vm01 bash[20728]: audit 2026-03-09T16:16:17.160611+0000 mgr.y (mgr.14520) 846 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:20 vm09 bash[22983]: cluster 2026-03-09T16:16:18.964491+0000 mgr.y (mgr.14520) 847 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:20 vm09 bash[22983]: cluster 2026-03-09T16:16:18.964491+0000 mgr.y (mgr.14520) 847 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:20.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:20 vm01 bash[28152]: cluster 2026-03-09T16:16:18.964491+0000 mgr.y (mgr.14520) 847 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:20.684 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:20 vm01 bash[28152]: cluster 2026-03-09T16:16:18.964491+0000 mgr.y (mgr.14520) 847 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:20.685 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:20 vm01 bash[20728]: cluster 2026-03-09T16:16:18.964491+0000 mgr.y (mgr.14520) 847 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:20.685 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:20 vm01 bash[20728]: cluster 2026-03-09T16:16:18.964491+0000 mgr.y (mgr.14520) 847 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:21.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:21 vm09 bash[22983]: cluster 2026-03-09T16:16:20.965061+0000 mgr.y (mgr.14520) 848 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 
455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:21.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:21 vm09 bash[22983]: cluster 2026-03-09T16:16:20.965061+0000 mgr.y (mgr.14520) 848 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:21.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:21 vm01 bash[28152]: cluster 2026-03-09T16:16:20.965061+0000 mgr.y (mgr.14520) 848 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:21.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:21 vm01 bash[28152]: cluster 2026-03-09T16:16:20.965061+0000 mgr.y (mgr.14520) 848 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:21.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:21 vm01 bash[20728]: cluster 2026-03-09T16:16:20.965061+0000 mgr.y (mgr.14520) 848 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:21.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:21 vm01 bash[20728]: cluster 2026-03-09T16:16:20.965061+0000 mgr.y (mgr.14520) 848 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:23.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:16:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:16:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:16:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:24 vm09 bash[22983]: cluster 2026-03-09T16:16:22.965362+0000 mgr.y (mgr.14520) 849 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:24 vm09 bash[22983]: cluster 2026-03-09T16:16:22.965362+0000 mgr.y (mgr.14520) 849 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:24.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:24 vm01 bash[28152]: cluster 2026-03-09T16:16:22.965362+0000 mgr.y (mgr.14520) 849 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:24.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:24 vm01 bash[28152]: cluster 2026-03-09T16:16:22.965362+0000 mgr.y (mgr.14520) 849 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:24.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:24 vm01 bash[20728]: cluster 2026-03-09T16:16:22.965362+0000 mgr.y (mgr.14520) 849 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:24.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:24 vm01 bash[20728]: cluster 2026-03-09T16:16:22.965362+0000 mgr.y (mgr.14520) 849 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:26 vm09 bash[22983]: cluster 
2026-03-09T16:16:24.965994+0000 mgr.y (mgr.14520) 850 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:26 vm09 bash[22983]: cluster 2026-03-09T16:16:24.965994+0000 mgr.y (mgr.14520) 850 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:26.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:26 vm01 bash[28152]: cluster 2026-03-09T16:16:24.965994+0000 mgr.y (mgr.14520) 850 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:26.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:26 vm01 bash[28152]: cluster 2026-03-09T16:16:24.965994+0000 mgr.y (mgr.14520) 850 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:26.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:26 vm01 bash[20728]: cluster 2026-03-09T16:16:24.965994+0000 mgr.y (mgr.14520) 850 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:26.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:26 vm01 bash[20728]: cluster 2026-03-09T16:16:24.965994+0000 mgr.y (mgr.14520) 850 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:27.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:16:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:16:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:28 vm09 bash[22983]: cluster 2026-03-09T16:16:26.966335+0000 mgr.y (mgr.14520) 851 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:28 vm09 bash[22983]: cluster 2026-03-09T16:16:26.966335+0000 mgr.y (mgr.14520) 851 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:28 vm09 bash[22983]: audit 2026-03-09T16:16:27.171139+0000 mgr.y (mgr.14520) 852 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:28 vm09 bash[22983]: audit 2026-03-09T16:16:27.171139+0000 mgr.y (mgr.14520) 852 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:28.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:28 vm01 bash[28152]: cluster 2026-03-09T16:16:26.966335+0000 mgr.y (mgr.14520) 851 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:28.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:28 vm01 bash[28152]: cluster 2026-03-09T16:16:26.966335+0000 mgr.y (mgr.14520) 851 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:28.424 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:28 vm01 bash[28152]: audit 2026-03-09T16:16:27.171139+0000 mgr.y (mgr.14520) 852 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:28.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:28 vm01 bash[28152]: audit 2026-03-09T16:16:27.171139+0000 mgr.y (mgr.14520) 852 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:28.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:28 vm01 bash[20728]: cluster 2026-03-09T16:16:26.966335+0000 mgr.y (mgr.14520) 851 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:28.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:28 vm01 bash[20728]: cluster 2026-03-09T16:16:26.966335+0000 mgr.y (mgr.14520) 851 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:28.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:28 vm01 bash[20728]: audit 2026-03-09T16:16:27.171139+0000 mgr.y (mgr.14520) 852 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:28.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:28 vm01 bash[20728]: audit 2026-03-09T16:16:27.171139+0000 mgr.y (mgr.14520) 852 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:30 vm09 bash[22983]: cluster 2026-03-09T16:16:28.966926+0000 mgr.y (mgr.14520) 853 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:30 vm09 bash[22983]: cluster 2026-03-09T16:16:28.966926+0000 mgr.y (mgr.14520) 853 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:30 vm09 bash[22983]: audit 2026-03-09T16:16:29.721888+0000 mon.a (mon.0) 3820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:30.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:30 vm09 bash[22983]: audit 2026-03-09T16:16:29.721888+0000 mon.a (mon.0) 3820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:30.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:30 vm01 bash[28152]: cluster 2026-03-09T16:16:28.966926+0000 mgr.y (mgr.14520) 853 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:30.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:30 vm01 bash[28152]: cluster 2026-03-09T16:16:28.966926+0000 mgr.y (mgr.14520) 853 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:30.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:30 vm01 bash[28152]: audit 
2026-03-09T16:16:29.721888+0000 mon.a (mon.0) 3820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:30.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:30 vm01 bash[28152]: audit 2026-03-09T16:16:29.721888+0000 mon.a (mon.0) 3820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:30.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:30 vm01 bash[20728]: cluster 2026-03-09T16:16:28.966926+0000 mgr.y (mgr.14520) 853 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:30.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:30 vm01 bash[20728]: cluster 2026-03-09T16:16:28.966926+0000 mgr.y (mgr.14520) 853 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:30.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:30 vm01 bash[20728]: audit 2026-03-09T16:16:29.721888+0000 mon.a (mon.0) 3820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:30.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:30 vm01 bash[20728]: audit 2026-03-09T16:16:29.721888+0000 mon.a (mon.0) 3820 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:32 vm09 bash[22983]: cluster 2026-03-09T16:16:30.967609+0000 mgr.y (mgr.14520) 854 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:32 vm09 bash[22983]: cluster 2026-03-09T16:16:30.967609+0000 mgr.y (mgr.14520) 854 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:32.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:32 vm01 bash[28152]: cluster 2026-03-09T16:16:30.967609+0000 mgr.y (mgr.14520) 854 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:32.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:32 vm01 bash[28152]: cluster 2026-03-09T16:16:30.967609+0000 mgr.y (mgr.14520) 854 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:32.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:32 vm01 bash[20728]: cluster 2026-03-09T16:16:30.967609+0000 mgr.y (mgr.14520) 854 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:32.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:32 vm01 bash[20728]: cluster 2026-03-09T16:16:30.967609+0000 mgr.y (mgr.14520) 854 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:33.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:16:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:16:32] "GET /metrics HTTP/1.1" 
503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:16:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:34 vm09 bash[22983]: cluster 2026-03-09T16:16:32.967952+0000 mgr.y (mgr.14520) 855 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:34 vm09 bash[22983]: cluster 2026-03-09T16:16:32.967952+0000 mgr.y (mgr.14520) 855 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:34.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:34 vm01 bash[28152]: cluster 2026-03-09T16:16:32.967952+0000 mgr.y (mgr.14520) 855 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:34.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:34 vm01 bash[28152]: cluster 2026-03-09T16:16:32.967952+0000 mgr.y (mgr.14520) 855 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:34.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:34 vm01 bash[20728]: cluster 2026-03-09T16:16:32.967952+0000 mgr.y (mgr.14520) 855 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:34.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:34 vm01 bash[20728]: cluster 2026-03-09T16:16:32.967952+0000 mgr.y (mgr.14520) 855 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:36 vm09 bash[22983]: cluster 2026-03-09T16:16:34.968714+0000 mgr.y (mgr.14520) 856 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:36 vm09 bash[22983]: cluster 2026-03-09T16:16:34.968714+0000 mgr.y (mgr.14520) 856 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:36.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:36 vm01 bash[28152]: cluster 2026-03-09T16:16:34.968714+0000 mgr.y (mgr.14520) 856 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:36.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:36 vm01 bash[28152]: cluster 2026-03-09T16:16:34.968714+0000 mgr.y (mgr.14520) 856 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:36.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:36 vm01 bash[20728]: cluster 2026-03-09T16:16:34.968714+0000 mgr.y (mgr.14520) 856 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:36.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:36 vm01 bash[20728]: cluster 2026-03-09T16:16:34.968714+0000 mgr.y (mgr.14520) 856 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:37.632 
INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:16:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:16:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:38 vm09 bash[22983]: cluster 2026-03-09T16:16:36.969183+0000 mgr.y (mgr.14520) 857 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:38 vm09 bash[22983]: cluster 2026-03-09T16:16:36.969183+0000 mgr.y (mgr.14520) 857 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:38 vm09 bash[22983]: audit 2026-03-09T16:16:37.181749+0000 mgr.y (mgr.14520) 858 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:38 vm09 bash[22983]: audit 2026-03-09T16:16:37.181749+0000 mgr.y (mgr.14520) 858 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:38 vm01 bash[28152]: cluster 2026-03-09T16:16:36.969183+0000 mgr.y (mgr.14520) 857 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:38 vm01 bash[28152]: cluster 2026-03-09T16:16:36.969183+0000 mgr.y (mgr.14520) 857 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:38 vm01 bash[28152]: audit 2026-03-09T16:16:37.181749+0000 mgr.y (mgr.14520) 858 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:38 vm01 bash[28152]: audit 2026-03-09T16:16:37.181749+0000 mgr.y (mgr.14520) 858 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:38.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:38 vm01 bash[20728]: cluster 2026-03-09T16:16:36.969183+0000 mgr.y (mgr.14520) 857 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:38.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:38 vm01 bash[20728]: cluster 2026-03-09T16:16:36.969183+0000 mgr.y (mgr.14520) 857 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:38.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:38 vm01 bash[20728]: audit 2026-03-09T16:16:37.181749+0000 mgr.y (mgr.14520) 858 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:38.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:38 vm01 bash[20728]: audit 2026-03-09T16:16:37.181749+0000 mgr.y (mgr.14520) 858 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: 
dispatch 2026-03-09T16:16:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:40 vm09 bash[22983]: cluster 2026-03-09T16:16:38.969746+0000 mgr.y (mgr.14520) 859 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:40 vm09 bash[22983]: cluster 2026-03-09T16:16:38.969746+0000 mgr.y (mgr.14520) 859 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:40.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:40 vm01 bash[28152]: cluster 2026-03-09T16:16:38.969746+0000 mgr.y (mgr.14520) 859 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:40.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:40 vm01 bash[28152]: cluster 2026-03-09T16:16:38.969746+0000 mgr.y (mgr.14520) 859 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:40.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:40 vm01 bash[20728]: cluster 2026-03-09T16:16:38.969746+0000 mgr.y (mgr.14520) 859 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:40.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:40 vm01 bash[20728]: cluster 2026-03-09T16:16:38.969746+0000 mgr.y (mgr.14520) 859 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:42 vm09 bash[22983]: cluster 2026-03-09T16:16:40.970468+0000 mgr.y (mgr.14520) 860 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:42 vm09 bash[22983]: cluster 2026-03-09T16:16:40.970468+0000 mgr.y (mgr.14520) 860 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:42.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:42 vm01 bash[28152]: cluster 2026-03-09T16:16:40.970468+0000 mgr.y (mgr.14520) 860 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:42.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:42 vm01 bash[28152]: cluster 2026-03-09T16:16:40.970468+0000 mgr.y (mgr.14520) 860 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:42.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:42 vm01 bash[20728]: cluster 2026-03-09T16:16:40.970468+0000 mgr.y (mgr.14520) 860 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:42.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:42 vm01 bash[20728]: cluster 2026-03-09T16:16:40.970468+0000 mgr.y (mgr.14520) 860 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:43.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:16:42 vm01 
bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:16:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:16:44.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:44 vm01 bash[28152]: cluster 2026-03-09T16:16:42.970769+0000 mgr.y (mgr.14520) 861 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:44.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:44 vm01 bash[28152]: cluster 2026-03-09T16:16:42.970769+0000 mgr.y (mgr.14520) 861 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:44.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:44 vm01 bash[20728]: cluster 2026-03-09T16:16:42.970769+0000 mgr.y (mgr.14520) 861 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:44.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:44 vm01 bash[20728]: cluster 2026-03-09T16:16:42.970769+0000 mgr.y (mgr.14520) 861 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:44 vm09 bash[22983]: cluster 2026-03-09T16:16:42.970769+0000 mgr.y (mgr.14520) 861 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:44 vm09 bash[22983]: cluster 2026-03-09T16:16:42.970769+0000 mgr.y (mgr.14520) 861 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:45.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:45 vm01 bash[28152]: audit 2026-03-09T16:16:44.727649+0000 mon.a (mon.0) 3821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:45.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:45 vm01 bash[28152]: audit 2026-03-09T16:16:44.727649+0000 mon.a (mon.0) 3821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:45.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:45 vm01 bash[20728]: audit 2026-03-09T16:16:44.727649+0000 mon.a (mon.0) 3821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:45.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:45 vm01 bash[20728]: audit 2026-03-09T16:16:44.727649+0000 mon.a (mon.0) 3821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:45 vm09 bash[22983]: audit 2026-03-09T16:16:44.727649+0000 mon.a (mon.0) 3821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:45 vm09 bash[22983]: audit 2026-03-09T16:16:44.727649+0000 mon.a (mon.0) 3821 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:16:46.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:46 vm01 bash[28152]: cluster 2026-03-09T16:16:44.971424+0000 mgr.y (mgr.14520) 862 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:46.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:46 vm01 bash[28152]: cluster 2026-03-09T16:16:44.971424+0000 mgr.y (mgr.14520) 862 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:46.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:46 vm01 bash[28152]: audit 2026-03-09T16:16:46.127488+0000 mon.a (mon.0) 3822 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:16:46.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:46 vm01 bash[28152]: audit 2026-03-09T16:16:46.127488+0000 mon.a (mon.0) 3822 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:16:46.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:46 vm01 bash[20728]: cluster 2026-03-09T16:16:44.971424+0000 mgr.y (mgr.14520) 862 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:46.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:46 vm01 bash[20728]: cluster 2026-03-09T16:16:44.971424+0000 mgr.y (mgr.14520) 862 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:46.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:46 vm01 bash[20728]: audit 2026-03-09T16:16:46.127488+0000 mon.a (mon.0) 3822 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:16:46.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:46 vm01 bash[20728]: audit 2026-03-09T16:16:46.127488+0000 mon.a (mon.0) 3822 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:16:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:46 vm09 bash[22983]: cluster 2026-03-09T16:16:44.971424+0000 mgr.y (mgr.14520) 862 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:46 vm09 bash[22983]: cluster 2026-03-09T16:16:44.971424+0000 mgr.y (mgr.14520) 862 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:46 vm09 bash[22983]: audit 2026-03-09T16:16:46.127488+0000 mon.a (mon.0) 3822 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:16:46.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:46 vm09 bash[22983]: audit 2026-03-09T16:16:46.127488+0000 mon.a (mon.0) 3822 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T16:16:47.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:16:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:16:48.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:48 vm01 bash[28152]: cluster 2026-03-09T16:16:46.971799+0000 mgr.y (mgr.14520) 863 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:48.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:48 vm01 bash[28152]: cluster 2026-03-09T16:16:46.971799+0000 mgr.y (mgr.14520) 863 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:48.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:48 vm01 bash[28152]: audit 2026-03-09T16:16:47.188773+0000 mgr.y (mgr.14520) 864 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:48.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:48 vm01 bash[28152]: audit 2026-03-09T16:16:47.188773+0000 mgr.y (mgr.14520) 864 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:48.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:48 vm01 bash[20728]: cluster 2026-03-09T16:16:46.971799+0000 mgr.y (mgr.14520) 863 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:48.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:48 vm01 bash[20728]: cluster 2026-03-09T16:16:46.971799+0000 mgr.y (mgr.14520) 863 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:48.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:48 vm01 bash[20728]: audit 2026-03-09T16:16:47.188773+0000 mgr.y (mgr.14520) 864 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:48.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:48 vm01 bash[20728]: audit 2026-03-09T16:16:47.188773+0000 mgr.y (mgr.14520) 864 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:48 vm09 bash[22983]: cluster 2026-03-09T16:16:46.971799+0000 mgr.y (mgr.14520) 863 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:48 vm09 bash[22983]: cluster 2026-03-09T16:16:46.971799+0000 mgr.y (mgr.14520) 863 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:48 vm09 bash[22983]: audit 2026-03-09T16:16:47.188773+0000 mgr.y (mgr.14520) 864 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:48 vm09 bash[22983]: audit 2026-03-09T16:16:47.188773+0000 mgr.y (mgr.14520) 864 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-09T16:16:50.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:50 vm01 bash[28152]: cluster 2026-03-09T16:16:48.972338+0000 mgr.y (mgr.14520) 865 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:50.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:50 vm01 bash[28152]: cluster 2026-03-09T16:16:48.972338+0000 mgr.y (mgr.14520) 865 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:50.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:50 vm01 bash[20728]: cluster 2026-03-09T16:16:48.972338+0000 mgr.y (mgr.14520) 865 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:50.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:50 vm01 bash[20728]: cluster 2026-03-09T16:16:48.972338+0000 mgr.y (mgr.14520) 865 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:50 vm09 bash[22983]: cluster 2026-03-09T16:16:48.972338+0000 mgr.y (mgr.14520) 865 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:50 vm09 bash[22983]: cluster 2026-03-09T16:16:48.972338+0000 mgr.y (mgr.14520) 865 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: cluster 2026-03-09T16:16:50.972962+0000 mgr.y (mgr.14520) 866 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: cluster 2026-03-09T16:16:50.972962+0000 mgr.y (mgr.14520) 866 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.362862+0000 mon.a (mon.0) 3823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.362862+0000 mon.a (mon.0) 3823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.371002+0000 mon.a (mon.0) 3824 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.371002+0000 mon.a (mon.0) 3824 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.525157+0000 mon.a (mon.0) 3825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.525157+0000 mon.a (mon.0) 3825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.533480+0000 mon.a (mon.0) 3826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.533480+0000 mon.a (mon.0) 3826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.870097+0000 mon.a (mon.0) 3827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.870097+0000 mon.a (mon.0) 3827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.870670+0000 mon.a (mon.0) 3828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.870670+0000 mon.a (mon.0) 3828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.875866+0000 mon.a (mon.0) 3829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:52 vm09 bash[22983]: audit 2026-03-09T16:16:51.875866+0000 mon.a (mon.0) 3829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: cluster 2026-03-09T16:16:50.972962+0000 mgr.y (mgr.14520) 866 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: cluster 2026-03-09T16:16:50.972962+0000 mgr.y (mgr.14520) 866 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.362862+0000 mon.a (mon.0) 3823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.362862+0000 mon.a (mon.0) 3823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.371002+0000 mon.a (mon.0) 3824 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.674 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.371002+0000 mon.a (mon.0) 3824 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.525157+0000 mon.a (mon.0) 3825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.525157+0000 mon.a (mon.0) 3825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.533480+0000 mon.a (mon.0) 3826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.533480+0000 mon.a (mon.0) 3826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.870097+0000 mon.a (mon.0) 3827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.870097+0000 mon.a (mon.0) 3827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.870670+0000 mon.a (mon.0) 3828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.870670+0000 mon.a (mon.0) 3828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.875866+0000 mon.a (mon.0) 3829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:52 vm01 bash[28152]: audit 2026-03-09T16:16:51.875866+0000 mon.a (mon.0) 3829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: cluster 2026-03-09T16:16:50.972962+0000 mgr.y (mgr.14520) 866 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: cluster 2026-03-09T16:16:50.972962+0000 mgr.y (mgr.14520) 866 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.362862+0000 mon.a (mon.0) 3823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.362862+0000 mon.a (mon.0) 3823 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.371002+0000 mon.a (mon.0) 3824 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.371002+0000 mon.a (mon.0) 3824 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.525157+0000 mon.a (mon.0) 3825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.525157+0000 mon.a (mon.0) 3825 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.533480+0000 mon.a (mon.0) 3826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.533480+0000 mon.a (mon.0) 3826 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.870097+0000 mon.a (mon.0) 3827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.870097+0000 mon.a (mon.0) 3827 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.870670+0000 mon.a (mon.0) 3828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.870670+0000 mon.a (mon.0) 3828 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.875866+0000 mon.a (mon.0) 3829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:52.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:52 vm01 bash[20728]: audit 2026-03-09T16:16:51.875866+0000 mon.a (mon.0) 3829 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:16:53.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:16:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:16:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:16:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:54 vm09 bash[22983]: cluster 
2026-03-09T16:16:52.973372+0000 mgr.y (mgr.14520) 867 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:54 vm09 bash[22983]: cluster 2026-03-09T16:16:52.973372+0000 mgr.y (mgr.14520) 867 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:54.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:54 vm01 bash[28152]: cluster 2026-03-09T16:16:52.973372+0000 mgr.y (mgr.14520) 867 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:54.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:54 vm01 bash[28152]: cluster 2026-03-09T16:16:52.973372+0000 mgr.y (mgr.14520) 867 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:54.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:54 vm01 bash[20728]: cluster 2026-03-09T16:16:52.973372+0000 mgr.y (mgr.14520) 867 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:54.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:54 vm01 bash[20728]: cluster 2026-03-09T16:16:52.973372+0000 mgr.y (mgr.14520) 867 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:56 vm09 bash[22983]: cluster 2026-03-09T16:16:54.974002+0000 mgr.y (mgr.14520) 868 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:56 vm09 bash[22983]: cluster 2026-03-09T16:16:54.974002+0000 mgr.y (mgr.14520) 868 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:56.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:56 vm01 bash[28152]: cluster 2026-03-09T16:16:54.974002+0000 mgr.y (mgr.14520) 868 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:56.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:56 vm01 bash[28152]: cluster 2026-03-09T16:16:54.974002+0000 mgr.y (mgr.14520) 868 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:56.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:56 vm01 bash[20728]: cluster 2026-03-09T16:16:54.974002+0000 mgr.y (mgr.14520) 868 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:56.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:56 vm01 bash[20728]: cluster 2026-03-09T16:16:54.974002+0000 mgr.y (mgr.14520) 868 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:16:57.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:16:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:16:58.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:58 vm09 bash[22983]: cluster 2026-03-09T16:16:56.974332+0000 mgr.y (mgr.14520) 869 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:58 vm09 bash[22983]: cluster 2026-03-09T16:16:56.974332+0000 mgr.y (mgr.14520) 869 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:58 vm09 bash[22983]: audit 2026-03-09T16:16:57.199437+0000 mgr.y (mgr.14520) 870 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:58 vm09 bash[22983]: audit 2026-03-09T16:16:57.199437+0000 mgr.y (mgr.14520) 870 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:58.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:58 vm01 bash[28152]: cluster 2026-03-09T16:16:56.974332+0000 mgr.y (mgr.14520) 869 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:58.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:58 vm01 bash[28152]: cluster 2026-03-09T16:16:56.974332+0000 mgr.y (mgr.14520) 869 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:58.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:58 vm01 bash[28152]: audit 2026-03-09T16:16:57.199437+0000 mgr.y (mgr.14520) 870 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:58.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:58 vm01 bash[28152]: audit 2026-03-09T16:16:57.199437+0000 mgr.y (mgr.14520) 870 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:58.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:58 vm01 bash[20728]: cluster 2026-03-09T16:16:56.974332+0000 mgr.y (mgr.14520) 869 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:58.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:58 vm01 bash[20728]: cluster 2026-03-09T16:16:56.974332+0000 mgr.y (mgr.14520) 869 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:58.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:58 vm01 bash[20728]: audit 2026-03-09T16:16:57.199437+0000 mgr.y (mgr.14520) 870 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:58.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:58 vm01 bash[20728]: audit 2026-03-09T16:16:57.199437+0000 mgr.y (mgr.14520) 870 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:16:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:59 vm09 bash[22983]: cluster 2026-03-09T16:16:58.974861+0000 
mgr.y (mgr.14520) 871 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:16:59 vm09 bash[22983]: cluster 2026-03-09T16:16:58.974861+0000 mgr.y (mgr.14520) 871 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:59.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:59 vm01 bash[28152]: cluster 2026-03-09T16:16:58.974861+0000 mgr.y (mgr.14520) 871 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:59.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:16:59 vm01 bash[28152]: cluster 2026-03-09T16:16:58.974861+0000 mgr.y (mgr.14520) 871 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:59.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:59 vm01 bash[20728]: cluster 2026-03-09T16:16:58.974861+0000 mgr.y (mgr.14520) 871 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:16:59.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:16:59 vm01 bash[20728]: cluster 2026-03-09T16:16:58.974861+0000 mgr.y (mgr.14520) 871 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:00.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:00 vm09 bash[22983]: audit 2026-03-09T16:16:59.734425+0000 mon.a (mon.0) 3830 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:00.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:00 vm09 bash[22983]: audit 2026-03-09T16:16:59.734425+0000 mon.a (mon.0) 3830 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:00.924 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:00 vm01 bash[28152]: audit 2026-03-09T16:16:59.734425+0000 mon.a (mon.0) 3830 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:00.924 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:00 vm01 bash[28152]: audit 2026-03-09T16:16:59.734425+0000 mon.a (mon.0) 3830 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:00.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:00 vm01 bash[20728]: audit 2026-03-09T16:16:59.734425+0000 mon.a (mon.0) 3830 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:00.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:00 vm01 bash[20728]: audit 2026-03-09T16:16:59.734425+0000 mon.a (mon.0) 3830 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:01.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:01 vm09 bash[22983]: cluster 2026-03-09T16:17:00.975432+0000 mgr.y (mgr.14520) 872 : cluster [DBG] pgmap v1345: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:01.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:01 vm09 bash[22983]: cluster 2026-03-09T16:17:00.975432+0000 mgr.y (mgr.14520) 872 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:01.924 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:01 vm01 bash[28152]: cluster 2026-03-09T16:17:00.975432+0000 mgr.y (mgr.14520) 872 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:01.924 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:01 vm01 bash[28152]: cluster 2026-03-09T16:17:00.975432+0000 mgr.y (mgr.14520) 872 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:01.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:01 vm01 bash[20728]: cluster 2026-03-09T16:17:00.975432+0000 mgr.y (mgr.14520) 872 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:01.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:01 vm01 bash[20728]: cluster 2026-03-09T16:17:00.975432+0000 mgr.y (mgr.14520) 872 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:03.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:17:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:17:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:17:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:04 vm09 bash[22983]: cluster 2026-03-09T16:17:02.975737+0000 mgr.y (mgr.14520) 873 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:04 vm09 bash[22983]: cluster 2026-03-09T16:17:02.975737+0000 mgr.y (mgr.14520) 873 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:04.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:04 vm01 bash[28152]: cluster 2026-03-09T16:17:02.975737+0000 mgr.y (mgr.14520) 873 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:04.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:04 vm01 bash[28152]: cluster 2026-03-09T16:17:02.975737+0000 mgr.y (mgr.14520) 873 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:04.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:04 vm01 bash[20728]: cluster 2026-03-09T16:17:02.975737+0000 mgr.y (mgr.14520) 873 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:04.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:04 vm01 bash[20728]: cluster 2026-03-09T16:17:02.975737+0000 mgr.y (mgr.14520) 873 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:06.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:06 vm01 bash[28152]: 
cluster 2026-03-09T16:17:04.976459+0000 mgr.y (mgr.14520) 874 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:06.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:06 vm01 bash[28152]: cluster 2026-03-09T16:17:04.976459+0000 mgr.y (mgr.14520) 874 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:06.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:06 vm01 bash[20728]: cluster 2026-03-09T16:17:04.976459+0000 mgr.y (mgr.14520) 874 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:06.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:06 vm01 bash[20728]: cluster 2026-03-09T16:17:04.976459+0000 mgr.y (mgr.14520) 874 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:06 vm09 bash[22983]: cluster 2026-03-09T16:17:04.976459+0000 mgr.y (mgr.14520) 874 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:06 vm09 bash[22983]: cluster 2026-03-09T16:17:04.976459+0000 mgr.y (mgr.14520) 874 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:07.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:17:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:17:08.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:08 vm01 bash[28152]: cluster 2026-03-09T16:17:06.976767+0000 mgr.y (mgr.14520) 875 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:08.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:08 vm01 bash[28152]: cluster 2026-03-09T16:17:06.976767+0000 mgr.y (mgr.14520) 875 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:08.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:08 vm01 bash[20728]: cluster 2026-03-09T16:17:06.976767+0000 mgr.y (mgr.14520) 875 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:08.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:08 vm01 bash[20728]: cluster 2026-03-09T16:17:06.976767+0000 mgr.y (mgr.14520) 875 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:08 vm09 bash[22983]: cluster 2026-03-09T16:17:06.976767+0000 mgr.y (mgr.14520) 875 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:08 vm09 bash[22983]: cluster 2026-03-09T16:17:06.976767+0000 mgr.y (mgr.14520) 875 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:09.424 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:09 vm01 bash[28152]: audit 2026-03-09T16:17:07.207460+0000 mgr.y (mgr.14520) 876 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:09.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:09 vm01 bash[28152]: audit 2026-03-09T16:17:07.207460+0000 mgr.y (mgr.14520) 876 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:09.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:09 vm01 bash[20728]: audit 2026-03-09T16:17:07.207460+0000 mgr.y (mgr.14520) 876 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:09.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:09 vm01 bash[20728]: audit 2026-03-09T16:17:07.207460+0000 mgr.y (mgr.14520) 876 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:09 vm09 bash[22983]: audit 2026-03-09T16:17:07.207460+0000 mgr.y (mgr.14520) 876 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:09 vm09 bash[22983]: audit 2026-03-09T16:17:07.207460+0000 mgr.y (mgr.14520) 876 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:10.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:10 vm01 bash[28152]: cluster 2026-03-09T16:17:08.977296+0000 mgr.y (mgr.14520) 877 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:10.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:10 vm01 bash[28152]: cluster 2026-03-09T16:17:08.977296+0000 mgr.y (mgr.14520) 877 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:10.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:10 vm01 bash[20728]: cluster 2026-03-09T16:17:08.977296+0000 mgr.y (mgr.14520) 877 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:10.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:10 vm01 bash[20728]: cluster 2026-03-09T16:17:08.977296+0000 mgr.y (mgr.14520) 877 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:10 vm09 bash[22983]: cluster 2026-03-09T16:17:08.977296+0000 mgr.y (mgr.14520) 877 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:10 vm09 bash[22983]: cluster 2026-03-09T16:17:08.977296+0000 mgr.y (mgr.14520) 877 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:12.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:12 vm01 bash[28152]: cluster 2026-03-09T16:17:10.977929+0000 
mgr.y (mgr.14520) 878 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:12.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:12 vm01 bash[28152]: cluster 2026-03-09T16:17:10.977929+0000 mgr.y (mgr.14520) 878 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:12.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:12 vm01 bash[20728]: cluster 2026-03-09T16:17:10.977929+0000 mgr.y (mgr.14520) 878 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:12.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:12 vm01 bash[20728]: cluster 2026-03-09T16:17:10.977929+0000 mgr.y (mgr.14520) 878 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:12 vm09 bash[22983]: cluster 2026-03-09T16:17:10.977929+0000 mgr.y (mgr.14520) 878 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:12 vm09 bash[22983]: cluster 2026-03-09T16:17:10.977929+0000 mgr.y (mgr.14520) 878 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:13.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:17:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:17:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:17:14.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:14 vm01 bash[28152]: cluster 2026-03-09T16:17:12.978208+0000 mgr.y (mgr.14520) 879 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:14.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:14 vm01 bash[28152]: cluster 2026-03-09T16:17:12.978208+0000 mgr.y (mgr.14520) 879 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:14.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:14 vm01 bash[20728]: cluster 2026-03-09T16:17:12.978208+0000 mgr.y (mgr.14520) 879 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:14.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:14 vm01 bash[20728]: cluster 2026-03-09T16:17:12.978208+0000 mgr.y (mgr.14520) 879 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:14 vm09 bash[22983]: cluster 2026-03-09T16:17:12.978208+0000 mgr.y (mgr.14520) 879 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:14 vm09 bash[22983]: cluster 2026-03-09T16:17:12.978208+0000 mgr.y (mgr.14520) 879 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:15.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:15 vm09 bash[22983]: audit 2026-03-09T16:17:14.741659+0000 mon.a (mon.0) 3831 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:15 vm09 bash[22983]: audit 2026-03-09T16:17:14.741659+0000 mon.a (mon.0) 3831 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:15.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:15 vm01 bash[28152]: audit 2026-03-09T16:17:14.741659+0000 mon.a (mon.0) 3831 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:15.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:15 vm01 bash[28152]: audit 2026-03-09T16:17:14.741659+0000 mon.a (mon.0) 3831 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:15.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:15 vm01 bash[20728]: audit 2026-03-09T16:17:14.741659+0000 mon.a (mon.0) 3831 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:15.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:15 vm01 bash[20728]: audit 2026-03-09T16:17:14.741659+0000 mon.a (mon.0) 3831 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:16 vm09 bash[22983]: cluster 2026-03-09T16:17:14.978905+0000 mgr.y (mgr.14520) 880 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:16 vm09 bash[22983]: cluster 2026-03-09T16:17:14.978905+0000 mgr.y (mgr.14520) 880 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:16 vm01 bash[28152]: cluster 2026-03-09T16:17:14.978905+0000 mgr.y (mgr.14520) 880 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:16 vm01 bash[28152]: cluster 2026-03-09T16:17:14.978905+0000 mgr.y (mgr.14520) 880 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:16.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:16 vm01 bash[20728]: cluster 2026-03-09T16:17:14.978905+0000 mgr.y (mgr.14520) 880 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:16.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:16 vm01 bash[20728]: cluster 2026-03-09T16:17:14.978905+0000 mgr.y (mgr.14520) 880 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:17.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 
16:17:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:17:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:18 vm09 bash[22983]: cluster 2026-03-09T16:17:16.979200+0000 mgr.y (mgr.14520) 881 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:18 vm09 bash[22983]: cluster 2026-03-09T16:17:16.979200+0000 mgr.y (mgr.14520) 881 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:18.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:18 vm01 bash[28152]: cluster 2026-03-09T16:17:16.979200+0000 mgr.y (mgr.14520) 881 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:18.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:18 vm01 bash[28152]: cluster 2026-03-09T16:17:16.979200+0000 mgr.y (mgr.14520) 881 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:18.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:18 vm01 bash[20728]: cluster 2026-03-09T16:17:16.979200+0000 mgr.y (mgr.14520) 881 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:18.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:18 vm01 bash[20728]: cluster 2026-03-09T16:17:16.979200+0000 mgr.y (mgr.14520) 881 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:19 vm09 bash[22983]: audit 2026-03-09T16:17:17.217047+0000 mgr.y (mgr.14520) 882 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:19 vm09 bash[22983]: audit 2026-03-09T16:17:17.217047+0000 mgr.y (mgr.14520) 882 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:19.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:19 vm01 bash[28152]: audit 2026-03-09T16:17:17.217047+0000 mgr.y (mgr.14520) 882 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:19.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:19 vm01 bash[28152]: audit 2026-03-09T16:17:17.217047+0000 mgr.y (mgr.14520) 882 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:19.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:19 vm01 bash[20728]: audit 2026-03-09T16:17:17.217047+0000 mgr.y (mgr.14520) 882 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:19.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:19 vm01 bash[20728]: audit 2026-03-09T16:17:17.217047+0000 mgr.y (mgr.14520) 882 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:20.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:20 vm09 bash[22983]: cluster 2026-03-09T16:17:18.979780+0000 mgr.y (mgr.14520) 883 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:20 vm09 bash[22983]: cluster 2026-03-09T16:17:18.979780+0000 mgr.y (mgr.14520) 883 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:20.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:20 vm01 bash[28152]: cluster 2026-03-09T16:17:18.979780+0000 mgr.y (mgr.14520) 883 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:20.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:20 vm01 bash[28152]: cluster 2026-03-09T16:17:18.979780+0000 mgr.y (mgr.14520) 883 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:20.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:20 vm01 bash[20728]: cluster 2026-03-09T16:17:18.979780+0000 mgr.y (mgr.14520) 883 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:20.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:20 vm01 bash[20728]: cluster 2026-03-09T16:17:18.979780+0000 mgr.y (mgr.14520) 883 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:21.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:21 vm09 bash[22983]: cluster 2026-03-09T16:17:20.980304+0000 mgr.y (mgr.14520) 884 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:21.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:21 vm09 bash[22983]: cluster 2026-03-09T16:17:20.980304+0000 mgr.y (mgr.14520) 884 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:21.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:21 vm01 bash[28152]: cluster 2026-03-09T16:17:20.980304+0000 mgr.y (mgr.14520) 884 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:21.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:21 vm01 bash[28152]: cluster 2026-03-09T16:17:20.980304+0000 mgr.y (mgr.14520) 884 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:21.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:21 vm01 bash[20728]: cluster 2026-03-09T16:17:20.980304+0000 mgr.y (mgr.14520) 884 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:21.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:21 vm01 bash[20728]: cluster 2026-03-09T16:17:20.980304+0000 mgr.y (mgr.14520) 884 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:23.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:17:22 vm01 bash[21002]: ::ffff:192.168.123.109 - 
- [09/Mar/2026:16:17:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:17:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:24 vm09 bash[22983]: cluster 2026-03-09T16:17:22.980738+0000 mgr.y (mgr.14520) 885 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:24 vm09 bash[22983]: cluster 2026-03-09T16:17:22.980738+0000 mgr.y (mgr.14520) 885 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:24.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:24 vm01 bash[28152]: cluster 2026-03-09T16:17:22.980738+0000 mgr.y (mgr.14520) 885 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:24.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:24 vm01 bash[28152]: cluster 2026-03-09T16:17:22.980738+0000 mgr.y (mgr.14520) 885 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:24.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:24 vm01 bash[20728]: cluster 2026-03-09T16:17:22.980738+0000 mgr.y (mgr.14520) 885 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:24.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:24 vm01 bash[20728]: cluster 2026-03-09T16:17:22.980738+0000 mgr.y (mgr.14520) 885 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:26 vm09 bash[22983]: cluster 2026-03-09T16:17:24.981384+0000 mgr.y (mgr.14520) 886 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:26 vm09 bash[22983]: cluster 2026-03-09T16:17:24.981384+0000 mgr.y (mgr.14520) 886 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:26.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:26 vm01 bash[28152]: cluster 2026-03-09T16:17:24.981384+0000 mgr.y (mgr.14520) 886 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:26.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:26 vm01 bash[28152]: cluster 2026-03-09T16:17:24.981384+0000 mgr.y (mgr.14520) 886 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:26.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:26 vm01 bash[20728]: cluster 2026-03-09T16:17:24.981384+0000 mgr.y (mgr.14520) 886 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:26.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:26 vm01 bash[20728]: cluster 2026-03-09T16:17:24.981384+0000 mgr.y (mgr.14520) 886 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T16:17:27.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:17:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:17:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:28 vm09 bash[22983]: cluster 2026-03-09T16:17:26.981665+0000 mgr.y (mgr.14520) 887 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:28 vm09 bash[22983]: cluster 2026-03-09T16:17:26.981665+0000 mgr.y (mgr.14520) 887 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:28.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:28 vm01 bash[28152]: cluster 2026-03-09T16:17:26.981665+0000 mgr.y (mgr.14520) 887 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:28.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:28 vm01 bash[28152]: cluster 2026-03-09T16:17:26.981665+0000 mgr.y (mgr.14520) 887 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:28.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:28 vm01 bash[20728]: cluster 2026-03-09T16:17:26.981665+0000 mgr.y (mgr.14520) 887 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:28.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:28 vm01 bash[20728]: cluster 2026-03-09T16:17:26.981665+0000 mgr.y (mgr.14520) 887 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:29.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:29 vm09 bash[22983]: audit 2026-03-09T16:17:27.227816+0000 mgr.y (mgr.14520) 888 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:29.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:29 vm09 bash[22983]: audit 2026-03-09T16:17:27.227816+0000 mgr.y (mgr.14520) 888 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:29.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:29 vm01 bash[28152]: audit 2026-03-09T16:17:27.227816+0000 mgr.y (mgr.14520) 888 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:29.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:29 vm01 bash[28152]: audit 2026-03-09T16:17:27.227816+0000 mgr.y (mgr.14520) 888 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:29.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:29 vm01 bash[20728]: audit 2026-03-09T16:17:27.227816+0000 mgr.y (mgr.14520) 888 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:29.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:29 vm01 bash[20728]: audit 2026-03-09T16:17:27.227816+0000 mgr.y (mgr.14520) 888 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-09T16:17:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:30 vm09 bash[22983]: cluster 2026-03-09T16:17:28.982185+0000 mgr.y (mgr.14520) 889 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:30 vm09 bash[22983]: cluster 2026-03-09T16:17:28.982185+0000 mgr.y (mgr.14520) 889 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:30 vm09 bash[22983]: audit 2026-03-09T16:17:29.747986+0000 mon.a (mon.0) 3832 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:30 vm09 bash[22983]: audit 2026-03-09T16:17:29.747986+0000 mon.a (mon.0) 3832 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:30.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:30 vm01 bash[28152]: cluster 2026-03-09T16:17:28.982185+0000 mgr.y (mgr.14520) 889 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:30.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:30 vm01 bash[28152]: cluster 2026-03-09T16:17:28.982185+0000 mgr.y (mgr.14520) 889 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:30.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:30 vm01 bash[28152]: audit 2026-03-09T16:17:29.747986+0000 mon.a (mon.0) 3832 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:30.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:30 vm01 bash[28152]: audit 2026-03-09T16:17:29.747986+0000 mon.a (mon.0) 3832 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:30.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:30 vm01 bash[20728]: cluster 2026-03-09T16:17:28.982185+0000 mgr.y (mgr.14520) 889 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:30.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:30 vm01 bash[20728]: cluster 2026-03-09T16:17:28.982185+0000 mgr.y (mgr.14520) 889 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:30.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:30 vm01 bash[20728]: audit 2026-03-09T16:17:29.747986+0000 mon.a (mon.0) 3832 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:30.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:30 vm01 bash[20728]: audit 2026-03-09T16:17:29.747986+0000 mon.a (mon.0) 3832 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:32.382 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:32 vm09 bash[22983]: cluster 2026-03-09T16:17:30.982702+0000 mgr.y (mgr.14520) 890 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:32 vm09 bash[22983]: cluster 2026-03-09T16:17:30.982702+0000 mgr.y (mgr.14520) 890 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:32.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:32 vm01 bash[28152]: cluster 2026-03-09T16:17:30.982702+0000 mgr.y (mgr.14520) 890 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:32.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:32 vm01 bash[28152]: cluster 2026-03-09T16:17:30.982702+0000 mgr.y (mgr.14520) 890 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:32.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:32 vm01 bash[20728]: cluster 2026-03-09T16:17:30.982702+0000 mgr.y (mgr.14520) 890 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:32.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:32 vm01 bash[20728]: cluster 2026-03-09T16:17:30.982702+0000 mgr.y (mgr.14520) 890 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:33.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:17:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:17:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:17:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:34 vm09 bash[22983]: cluster 2026-03-09T16:17:32.983121+0000 mgr.y (mgr.14520) 891 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:34 vm09 bash[22983]: cluster 2026-03-09T16:17:32.983121+0000 mgr.y (mgr.14520) 891 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:34.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:34 vm01 bash[28152]: cluster 2026-03-09T16:17:32.983121+0000 mgr.y (mgr.14520) 891 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:34.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:34 vm01 bash[28152]: cluster 2026-03-09T16:17:32.983121+0000 mgr.y (mgr.14520) 891 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:34.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:34 vm01 bash[20728]: cluster 2026-03-09T16:17:32.983121+0000 mgr.y (mgr.14520) 891 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:34.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:34 vm01 bash[20728]: cluster 2026-03-09T16:17:32.983121+0000 mgr.y (mgr.14520) 891 : cluster [DBG] pgmap v1361: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:36 vm09 bash[22983]: cluster 2026-03-09T16:17:34.983807+0000 mgr.y (mgr.14520) 892 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:36 vm09 bash[22983]: cluster 2026-03-09T16:17:34.983807+0000 mgr.y (mgr.14520) 892 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:36.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:36 vm01 bash[28152]: cluster 2026-03-09T16:17:34.983807+0000 mgr.y (mgr.14520) 892 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:36.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:36 vm01 bash[28152]: cluster 2026-03-09T16:17:34.983807+0000 mgr.y (mgr.14520) 892 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:36.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:36 vm01 bash[20728]: cluster 2026-03-09T16:17:34.983807+0000 mgr.y (mgr.14520) 892 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:36.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:36 vm01 bash[20728]: cluster 2026-03-09T16:17:34.983807+0000 mgr.y (mgr.14520) 892 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:37.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:17:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:17:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:38 vm09 bash[22983]: cluster 2026-03-09T16:17:36.984192+0000 mgr.y (mgr.14520) 893 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:38 vm09 bash[22983]: cluster 2026-03-09T16:17:36.984192+0000 mgr.y (mgr.14520) 893 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:38 vm01 bash[28152]: cluster 2026-03-09T16:17:36.984192+0000 mgr.y (mgr.14520) 893 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:38 vm01 bash[28152]: cluster 2026-03-09T16:17:36.984192+0000 mgr.y (mgr.14520) 893 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:38.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:38 vm01 bash[20728]: cluster 2026-03-09T16:17:36.984192+0000 mgr.y (mgr.14520) 893 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:38.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:38 vm01 bash[20728]: cluster 2026-03-09T16:17:36.984192+0000 mgr.y 
(mgr.14520) 893 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:39 vm09 bash[22983]: audit 2026-03-09T16:17:37.237863+0000 mgr.y (mgr.14520) 894 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:39 vm09 bash[22983]: audit 2026-03-09T16:17:37.237863+0000 mgr.y (mgr.14520) 894 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:39.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:39 vm01 bash[28152]: audit 2026-03-09T16:17:37.237863+0000 mgr.y (mgr.14520) 894 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:39.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:39 vm01 bash[28152]: audit 2026-03-09T16:17:37.237863+0000 mgr.y (mgr.14520) 894 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:39.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:39 vm01 bash[20728]: audit 2026-03-09T16:17:37.237863+0000 mgr.y (mgr.14520) 894 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:39.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:39 vm01 bash[20728]: audit 2026-03-09T16:17:37.237863+0000 mgr.y (mgr.14520) 894 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:40 vm09 bash[22983]: cluster 2026-03-09T16:17:38.984736+0000 mgr.y (mgr.14520) 895 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:40.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:40 vm09 bash[22983]: cluster 2026-03-09T16:17:38.984736+0000 mgr.y (mgr.14520) 895 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:40.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:40 vm01 bash[28152]: cluster 2026-03-09T16:17:38.984736+0000 mgr.y (mgr.14520) 895 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:40.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:40 vm01 bash[28152]: cluster 2026-03-09T16:17:38.984736+0000 mgr.y (mgr.14520) 895 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:40.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:40 vm01 bash[20728]: cluster 2026-03-09T16:17:38.984736+0000 mgr.y (mgr.14520) 895 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:40.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:40 vm01 bash[20728]: cluster 2026-03-09T16:17:38.984736+0000 mgr.y (mgr.14520) 895 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:42 vm09 bash[22983]: cluster 2026-03-09T16:17:40.985504+0000 mgr.y (mgr.14520) 896 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:42 vm09 bash[22983]: cluster 2026-03-09T16:17:40.985504+0000 mgr.y (mgr.14520) 896 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:42.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:42 vm01 bash[28152]: cluster 2026-03-09T16:17:40.985504+0000 mgr.y (mgr.14520) 896 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:42.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:42 vm01 bash[28152]: cluster 2026-03-09T16:17:40.985504+0000 mgr.y (mgr.14520) 896 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:42.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:42 vm01 bash[20728]: cluster 2026-03-09T16:17:40.985504+0000 mgr.y (mgr.14520) 896 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:42.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:42 vm01 bash[20728]: cluster 2026-03-09T16:17:40.985504+0000 mgr.y (mgr.14520) 896 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:43.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:17:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:17:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:17:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:44 vm09 bash[22983]: cluster 2026-03-09T16:17:42.985808+0000 mgr.y (mgr.14520) 897 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:44 vm09 bash[22983]: cluster 2026-03-09T16:17:42.985808+0000 mgr.y (mgr.14520) 897 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:44.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:44 vm01 bash[28152]: cluster 2026-03-09T16:17:42.985808+0000 mgr.y (mgr.14520) 897 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:44.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:44 vm01 bash[28152]: cluster 2026-03-09T16:17:42.985808+0000 mgr.y (mgr.14520) 897 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:44.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:44 vm01 bash[20728]: cluster 2026-03-09T16:17:42.985808+0000 mgr.y (mgr.14520) 897 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:44.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:44 vm01 bash[20728]: cluster 2026-03-09T16:17:42.985808+0000 mgr.y 
(mgr.14520) 897 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:45 vm09 bash[22983]: audit 2026-03-09T16:17:44.754028+0000 mon.a (mon.0) 3833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:45 vm09 bash[22983]: audit 2026-03-09T16:17:44.754028+0000 mon.a (mon.0) 3833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:45.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:45 vm01 bash[28152]: audit 2026-03-09T16:17:44.754028+0000 mon.a (mon.0) 3833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:45.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:45 vm01 bash[28152]: audit 2026-03-09T16:17:44.754028+0000 mon.a (mon.0) 3833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:45.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:45 vm01 bash[20728]: audit 2026-03-09T16:17:44.754028+0000 mon.a (mon.0) 3833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:45.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:45 vm01 bash[20728]: audit 2026-03-09T16:17:44.754028+0000 mon.a (mon.0) 3833 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:17:46.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:46 vm01 bash[28152]: cluster 2026-03-09T16:17:44.986353+0000 mgr.y (mgr.14520) 898 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:46.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:46 vm01 bash[28152]: cluster 2026-03-09T16:17:44.986353+0000 mgr.y (mgr.14520) 898 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:46.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:46 vm01 bash[20728]: cluster 2026-03-09T16:17:44.986353+0000 mgr.y (mgr.14520) 898 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:46.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:46 vm01 bash[20728]: cluster 2026-03-09T16:17:44.986353+0000 mgr.y (mgr.14520) 898 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:46 vm09 bash[22983]: cluster 2026-03-09T16:17:44.986353+0000 mgr.y (mgr.14520) 898 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:46 vm09 bash[22983]: cluster 2026-03-09T16:17:44.986353+0000 mgr.y (mgr.14520) 898 : cluster [DBG] pgmap v1367: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:47.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:17:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:17:48.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:48 vm01 bash[28152]: cluster 2026-03-09T16:17:46.986649+0000 mgr.y (mgr.14520) 899 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:48.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:48 vm01 bash[28152]: cluster 2026-03-09T16:17:46.986649+0000 mgr.y (mgr.14520) 899 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:48.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:48 vm01 bash[20728]: cluster 2026-03-09T16:17:46.986649+0000 mgr.y (mgr.14520) 899 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:48.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:48 vm01 bash[20728]: cluster 2026-03-09T16:17:46.986649+0000 mgr.y (mgr.14520) 899 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:48 vm09 bash[22983]: cluster 2026-03-09T16:17:46.986649+0000 mgr.y (mgr.14520) 899 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:48 vm09 bash[22983]: cluster 2026-03-09T16:17:46.986649+0000 mgr.y (mgr.14520) 899 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:49.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:49 vm01 bash[28152]: audit 2026-03-09T16:17:47.248518+0000 mgr.y (mgr.14520) 900 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:49.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:49 vm01 bash[28152]: audit 2026-03-09T16:17:47.248518+0000 mgr.y (mgr.14520) 900 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:49.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:49 vm01 bash[20728]: audit 2026-03-09T16:17:47.248518+0000 mgr.y (mgr.14520) 900 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:49.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:49 vm01 bash[20728]: audit 2026-03-09T16:17:47.248518+0000 mgr.y (mgr.14520) 900 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:49 vm09 bash[22983]: audit 2026-03-09T16:17:47.248518+0000 mgr.y (mgr.14520) 900 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:49 vm09 bash[22983]: audit 2026-03-09T16:17:47.248518+0000 mgr.y (mgr.14520) 900 : audit 
[DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:50.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:50 vm01 bash[28152]: cluster 2026-03-09T16:17:48.987069+0000 mgr.y (mgr.14520) 901 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:50.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:50 vm01 bash[28152]: cluster 2026-03-09T16:17:48.987069+0000 mgr.y (mgr.14520) 901 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:50.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:50 vm01 bash[20728]: cluster 2026-03-09T16:17:48.987069+0000 mgr.y (mgr.14520) 901 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:50.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:50 vm01 bash[20728]: cluster 2026-03-09T16:17:48.987069+0000 mgr.y (mgr.14520) 901 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:50 vm09 bash[22983]: cluster 2026-03-09T16:17:48.987069+0000 mgr.y (mgr.14520) 901 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:50 vm09 bash[22983]: cluster 2026-03-09T16:17:48.987069+0000 mgr.y (mgr.14520) 901 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:17:52.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:52 vm01 bash[28152]: cluster 2026-03-09T16:17:50.987656+0000 mgr.y (mgr.14520) 902 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:52.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:52 vm01 bash[28152]: cluster 2026-03-09T16:17:50.987656+0000 mgr.y (mgr.14520) 902 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:52.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:52 vm01 bash[28152]: audit 2026-03-09T16:17:51.917258+0000 mon.a (mon.0) 3834 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:17:52.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:52 vm01 bash[28152]: audit 2026-03-09T16:17:51.917258+0000 mon.a (mon.0) 3834 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:17:52.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:52 vm01 bash[20728]: cluster 2026-03-09T16:17:50.987656+0000 mgr.y (mgr.14520) 902 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:52.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:52 vm01 bash[20728]: cluster 2026-03-09T16:17:50.987656+0000 mgr.y (mgr.14520) 902 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:52.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:52 vm01 bash[20728]: audit 2026-03-09T16:17:51.917258+0000 mon.a (mon.0) 3834 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:17:52.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:52 vm01 bash[20728]: audit 2026-03-09T16:17:51.917258+0000 mon.a (mon.0) 3834 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:17:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:52 vm09 bash[22983]: cluster 2026-03-09T16:17:50.987656+0000 mgr.y (mgr.14520) 902 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:52 vm09 bash[22983]: cluster 2026-03-09T16:17:50.987656+0000 mgr.y (mgr.14520) 902 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:52 vm09 bash[22983]: audit 2026-03-09T16:17:51.917258+0000 mon.a (mon.0) 3834 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:17:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:52 vm09 bash[22983]: audit 2026-03-09T16:17:51.917258+0000 mon.a (mon.0) 3834 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:17:53.153 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:17:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:17:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:17:53.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:53 vm01 bash[28152]: audit 2026-03-09T16:17:52.249930+0000 mon.a (mon.0) 3835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:17:53.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:53 vm01 bash[28152]: audit 2026-03-09T16:17:52.249930+0000 mon.a (mon.0) 3835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:17:53.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:53 vm01 bash[28152]: audit 2026-03-09T16:17:52.250649+0000 mon.a (mon.0) 3836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:17:53.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:53 vm01 bash[28152]: audit 2026-03-09T16:17:52.250649+0000 mon.a (mon.0) 3836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:17:53.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:53 vm01 bash[28152]: audit 2026-03-09T16:17:52.257566+0000 mon.a (mon.0) 3837 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:17:53.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:53 vm01 bash[28152]: audit 2026-03-09T16:17:52.257566+0000 mon.a (mon.0) 3837 : audit [INF] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:17:53.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:53 vm01 bash[20728]: audit 2026-03-09T16:17:52.249930+0000 mon.a (mon.0) 3835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:17:53.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:53 vm01 bash[20728]: audit 2026-03-09T16:17:52.249930+0000 mon.a (mon.0) 3835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:17:53.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:53 vm01 bash[20728]: audit 2026-03-09T16:17:52.250649+0000 mon.a (mon.0) 3836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:17:53.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:53 vm01 bash[20728]: audit 2026-03-09T16:17:52.250649+0000 mon.a (mon.0) 3836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:17:53.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:53 vm01 bash[20728]: audit 2026-03-09T16:17:52.257566+0000 mon.a (mon.0) 3837 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:17:53.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:53 vm01 bash[20728]: audit 2026-03-09T16:17:52.257566+0000 mon.a (mon.0) 3837 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:17:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:53 vm09 bash[22983]: audit 2026-03-09T16:17:52.249930+0000 mon.a (mon.0) 3835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:17:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:53 vm09 bash[22983]: audit 2026-03-09T16:17:52.249930+0000 mon.a (mon.0) 3835 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:17:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:53 vm09 bash[22983]: audit 2026-03-09T16:17:52.250649+0000 mon.a (mon.0) 3836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:17:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:53 vm09 bash[22983]: audit 2026-03-09T16:17:52.250649+0000 mon.a (mon.0) 3836 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:17:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:53 vm09 bash[22983]: audit 2026-03-09T16:17:52.257566+0000 mon.a (mon.0) 3837 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:17:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:53 vm09 bash[22983]: audit 2026-03-09T16:17:52.257566+0000 mon.a (mon.0) 3837 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:17:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:54 vm09 bash[22983]: cluster 2026-03-09T16:17:52.988013+0000 mgr.y (mgr.14520) 903 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:54 vm09 bash[22983]: cluster 2026-03-09T16:17:52.988013+0000 mgr.y (mgr.14520) 903 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:54.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:54 vm01 bash[28152]: cluster 2026-03-09T16:17:52.988013+0000 mgr.y (mgr.14520) 903 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:54.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:54 vm01 bash[28152]: cluster 2026-03-09T16:17:52.988013+0000 mgr.y (mgr.14520) 903 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:54.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:54 vm01 bash[20728]: cluster 2026-03-09T16:17:52.988013+0000 mgr.y (mgr.14520) 903 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:54.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:54 vm01 bash[20728]: cluster 2026-03-09T16:17:52.988013+0000 mgr.y (mgr.14520) 903 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:55 vm09 bash[22983]: cluster 2026-03-09T16:17:54.988699+0000 mgr.y (mgr.14520) 904 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:55 vm09 bash[22983]: cluster 2026-03-09T16:17:54.988699+0000 mgr.y (mgr.14520) 904 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:55.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:55 vm01 bash[28152]: cluster 2026-03-09T16:17:54.988699+0000 mgr.y (mgr.14520) 904 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:55.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:55 vm01 bash[28152]: cluster 2026-03-09T16:17:54.988699+0000 mgr.y (mgr.14520) 904 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:55.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:55 vm01 bash[20728]: cluster 2026-03-09T16:17:54.988699+0000 mgr.y (mgr.14520) 904 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:55.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:55 vm01 bash[20728]: cluster 2026-03-09T16:17:54.988699+0000 mgr.y (mgr.14520) 904 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:17:57.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:17:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:17:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:58 vm09 bash[22983]: cluster 2026-03-09T16:17:56.989002+0000 mgr.y (mgr.14520) 905 : cluster [DBG] pgmap v1373: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:58 vm09 bash[22983]: cluster 2026-03-09T16:17:56.989002+0000 mgr.y (mgr.14520) 905 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:58.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:58 vm01 bash[28152]: cluster 2026-03-09T16:17:56.989002+0000 mgr.y (mgr.14520) 905 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:58.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:58 vm01 bash[28152]: cluster 2026-03-09T16:17:56.989002+0000 mgr.y (mgr.14520) 905 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:58.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:58 vm01 bash[20728]: cluster 2026-03-09T16:17:56.989002+0000 mgr.y (mgr.14520) 905 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:58.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:58 vm01 bash[20728]: cluster 2026-03-09T16:17:56.989002+0000 mgr.y (mgr.14520) 905 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:59 vm09 bash[22983]: audit 2026-03-09T16:17:57.258193+0000 mgr.y (mgr.14520) 906 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:59 vm09 bash[22983]: audit 2026-03-09T16:17:57.258193+0000 mgr.y (mgr.14520) 906 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:59 vm09 bash[22983]: cluster 2026-03-09T16:17:58.989620+0000 mgr.y (mgr.14520) 907 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:17:59 vm09 bash[22983]: cluster 2026-03-09T16:17:58.989620+0000 mgr.y (mgr.14520) 907 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:59.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:59 vm01 bash[28152]: audit 2026-03-09T16:17:57.258193+0000 mgr.y (mgr.14520) 906 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:59.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:59 vm01 bash[28152]: audit 2026-03-09T16:17:57.258193+0000 mgr.y (mgr.14520) 906 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:59.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:59 vm01 bash[28152]: cluster 2026-03-09T16:17:58.989620+0000 mgr.y (mgr.14520) 907 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:59.674 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:17:59 vm01 bash[28152]: cluster 2026-03-09T16:17:58.989620+0000 mgr.y (mgr.14520) 907 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:59.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:59 vm01 bash[20728]: audit 2026-03-09T16:17:57.258193+0000 mgr.y (mgr.14520) 906 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:59.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:59 vm01 bash[20728]: audit 2026-03-09T16:17:57.258193+0000 mgr.y (mgr.14520) 906 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:17:59.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:59 vm01 bash[20728]: cluster 2026-03-09T16:17:58.989620+0000 mgr.y (mgr.14520) 907 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:17:59.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:17:59 vm01 bash[20728]: cluster 2026-03-09T16:17:58.989620+0000 mgr.y (mgr.14520) 907 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:00.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:00 vm01 bash[28152]: audit 2026-03-09T16:17:59.760079+0000 mon.a (mon.0) 3838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:00.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:00 vm01 bash[28152]: audit 2026-03-09T16:17:59.760079+0000 mon.a (mon.0) 3838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:00.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:00 vm01 bash[20728]: audit 2026-03-09T16:17:59.760079+0000 mon.a (mon.0) 3838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:00.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:00 vm01 bash[20728]: audit 2026-03-09T16:17:59.760079+0000 mon.a (mon.0) 3838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:00.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:00 vm09 bash[22983]: audit 2026-03-09T16:17:59.760079+0000 mon.a (mon.0) 3838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:00.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:00 vm09 bash[22983]: audit 2026-03-09T16:17:59.760079+0000 mon.a (mon.0) 3838 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:01.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:01 vm09 bash[22983]: cluster 2026-03-09T16:18:00.990091+0000 mgr.y (mgr.14520) 908 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:01.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:01 vm09 
bash[22983]: cluster 2026-03-09T16:18:00.990091+0000 mgr.y (mgr.14520) 908 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:01.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:01 vm01 bash[20728]: cluster 2026-03-09T16:18:00.990091+0000 mgr.y (mgr.14520) 908 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:01.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:01 vm01 bash[20728]: cluster 2026-03-09T16:18:00.990091+0000 mgr.y (mgr.14520) 908 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:01.924 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:01 vm01 bash[28152]: cluster 2026-03-09T16:18:00.990091+0000 mgr.y (mgr.14520) 908 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:01.924 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:01 vm01 bash[28152]: cluster 2026-03-09T16:18:00.990091+0000 mgr.y (mgr.14520) 908 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:03.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:18:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:18:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:18:04.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:04 vm01 bash[28152]: cluster 2026-03-09T16:18:02.993751+0000 mgr.y (mgr.14520) 909 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:04.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:04 vm01 bash[28152]: cluster 2026-03-09T16:18:02.993751+0000 mgr.y (mgr.14520) 909 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:04.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:04 vm01 bash[20728]: cluster 2026-03-09T16:18:02.993751+0000 mgr.y (mgr.14520) 909 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:04.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:04 vm01 bash[20728]: cluster 2026-03-09T16:18:02.993751+0000 mgr.y (mgr.14520) 909 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:04 vm09 bash[22983]: cluster 2026-03-09T16:18:02.993751+0000 mgr.y (mgr.14520) 909 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:04 vm09 bash[22983]: cluster 2026-03-09T16:18:02.993751+0000 mgr.y (mgr.14520) 909 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:05 vm09 bash[22983]: cluster 2026-03-09T16:18:04.994842+0000 mgr.y (mgr.14520) 910 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:05 vm09 bash[22983]: cluster 2026-03-09T16:18:04.994842+0000 mgr.y (mgr.14520) 910 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:05.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:05 vm01 bash[28152]: cluster 2026-03-09T16:18:04.994842+0000 mgr.y (mgr.14520) 910 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:05.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:05 vm01 bash[28152]: cluster 2026-03-09T16:18:04.994842+0000 mgr.y (mgr.14520) 910 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:05.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:05 vm01 bash[20728]: cluster 2026-03-09T16:18:04.994842+0000 mgr.y (mgr.14520) 910 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:05.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:05 vm01 bash[20728]: cluster 2026-03-09T16:18:04.994842+0000 mgr.y (mgr.14520) 910 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:07.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:18:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:18:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:08 vm09 bash[22983]: cluster 2026-03-09T16:18:06.995145+0000 mgr.y (mgr.14520) 911 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:08 vm09 bash[22983]: cluster 2026-03-09T16:18:06.995145+0000 mgr.y (mgr.14520) 911 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:08.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:08 vm01 bash[20728]: cluster 2026-03-09T16:18:06.995145+0000 mgr.y (mgr.14520) 911 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:08.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:08 vm01 bash[20728]: cluster 2026-03-09T16:18:06.995145+0000 mgr.y (mgr.14520) 911 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:08.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:08 vm01 bash[28152]: cluster 2026-03-09T16:18:06.995145+0000 mgr.y (mgr.14520) 911 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:08.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:08 vm01 bash[28152]: cluster 2026-03-09T16:18:06.995145+0000 mgr.y (mgr.14520) 911 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:09 vm09 bash[22983]: audit 2026-03-09T16:18:07.267357+0000 mgr.y (mgr.14520) 912 : audit [DBG] from='client.14496 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:09 vm09 bash[22983]: audit 2026-03-09T16:18:07.267357+0000 mgr.y (mgr.14520) 912 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:09.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:09 vm01 bash[28152]: audit 2026-03-09T16:18:07.267357+0000 mgr.y (mgr.14520) 912 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:09.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:09 vm01 bash[28152]: audit 2026-03-09T16:18:07.267357+0000 mgr.y (mgr.14520) 912 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:09.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:09 vm01 bash[20728]: audit 2026-03-09T16:18:07.267357+0000 mgr.y (mgr.14520) 912 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:09.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:09 vm01 bash[20728]: audit 2026-03-09T16:18:07.267357+0000 mgr.y (mgr.14520) 912 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:10 vm09 bash[22983]: cluster 2026-03-09T16:18:08.995633+0000 mgr.y (mgr.14520) 913 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:10 vm09 bash[22983]: cluster 2026-03-09T16:18:08.995633+0000 mgr.y (mgr.14520) 913 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:10.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:10 vm01 bash[28152]: cluster 2026-03-09T16:18:08.995633+0000 mgr.y (mgr.14520) 913 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:10.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:10 vm01 bash[28152]: cluster 2026-03-09T16:18:08.995633+0000 mgr.y (mgr.14520) 913 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:10.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:10 vm01 bash[20728]: cluster 2026-03-09T16:18:08.995633+0000 mgr.y (mgr.14520) 913 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:10.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:10 vm01 bash[20728]: cluster 2026-03-09T16:18:08.995633+0000 mgr.y (mgr.14520) 913 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:11.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:11 vm09 bash[22983]: cluster 2026-03-09T16:18:10.996285+0000 mgr.y (mgr.14520) 914 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T16:18:11.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:11 vm09 bash[22983]: cluster 2026-03-09T16:18:10.996285+0000 mgr.y (mgr.14520) 914 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:11.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:11 vm01 bash[28152]: cluster 2026-03-09T16:18:10.996285+0000 mgr.y (mgr.14520) 914 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:11.924 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:11 vm01 bash[28152]: cluster 2026-03-09T16:18:10.996285+0000 mgr.y (mgr.14520) 914 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:11.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:11 vm01 bash[20728]: cluster 2026-03-09T16:18:10.996285+0000 mgr.y (mgr.14520) 914 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:11.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:11 vm01 bash[20728]: cluster 2026-03-09T16:18:10.996285+0000 mgr.y (mgr.14520) 914 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:13.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:18:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:18:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:18:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:14 vm09 bash[22983]: cluster 2026-03-09T16:18:12.996656+0000 mgr.y (mgr.14520) 915 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:14 vm09 bash[22983]: cluster 2026-03-09T16:18:12.996656+0000 mgr.y (mgr.14520) 915 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:14 vm01 bash[20728]: cluster 2026-03-09T16:18:12.996656+0000 mgr.y (mgr.14520) 915 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:14.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:14 vm01 bash[20728]: cluster 2026-03-09T16:18:12.996656+0000 mgr.y (mgr.14520) 915 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:14.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:14 vm01 bash[28152]: cluster 2026-03-09T16:18:12.996656+0000 mgr.y (mgr.14520) 915 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:14.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:14 vm01 bash[28152]: cluster 2026-03-09T16:18:12.996656+0000 mgr.y (mgr.14520) 915 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:18:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:15 vm09 bash[22983]: audit 2026-03-09T16:18:14.766011+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:15 vm09 bash[22983]: audit 2026-03-09T16:18:14.766011+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:15.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:15 vm01 bash[20728]: audit 2026-03-09T16:18:14.766011+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:15.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:15 vm01 bash[20728]: audit 2026-03-09T16:18:14.766011+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:15.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:15 vm01 bash[28152]: audit 2026-03-09T16:18:14.766011+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:15.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:15 vm01 bash[28152]: audit 2026-03-09T16:18:14.766011+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:16 vm09 bash[22983]: cluster 2026-03-09T16:18:14.997573+0000 mgr.y (mgr.14520) 916 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:16 vm09 bash[22983]: cluster 2026-03-09T16:18:14.997573+0000 mgr.y (mgr.14520) 916 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:16 vm01 bash[20728]: cluster 2026-03-09T16:18:14.997573+0000 mgr.y (mgr.14520) 916 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:16.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:16 vm01 bash[20728]: cluster 2026-03-09T16:18:14.997573+0000 mgr.y (mgr.14520) 916 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:16 vm01 bash[28152]: cluster 2026-03-09T16:18:14.997573+0000 mgr.y (mgr.14520) 916 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:16 vm01 bash[28152]: cluster 2026-03-09T16:18:14.997573+0000 mgr.y (mgr.14520) 916 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:17.565 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:18:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:18:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:17 vm09 
bash[22983]: cluster 2026-03-09T16:18:16.997958+0000 mgr.y (mgr.14520) 917 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:17.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:17 vm09 bash[22983]: cluster 2026-03-09T16:18:16.997958+0000 mgr.y (mgr.14520) 917 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:17.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:17 vm01 bash[20728]: cluster 2026-03-09T16:18:16.997958+0000 mgr.y (mgr.14520) 917 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:17.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:17 vm01 bash[20728]: cluster 2026-03-09T16:18:16.997958+0000 mgr.y (mgr.14520) 917 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:17.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:17 vm01 bash[28152]: cluster 2026-03-09T16:18:16.997958+0000 mgr.y (mgr.14520) 917 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:17.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:17 vm01 bash[28152]: cluster 2026-03-09T16:18:16.997958+0000 mgr.y (mgr.14520) 917 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:18.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:18 vm09 bash[22983]: audit 2026-03-09T16:18:17.269414+0000 mgr.y (mgr.14520) 918 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:18.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:18 vm09 bash[22983]: audit 2026-03-09T16:18:17.269414+0000 mgr.y (mgr.14520) 918 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:18.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:18 vm01 bash[20728]: audit 2026-03-09T16:18:17.269414+0000 mgr.y (mgr.14520) 918 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:18.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:18 vm01 bash[20728]: audit 2026-03-09T16:18:17.269414+0000 mgr.y (mgr.14520) 918 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:18.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:18 vm01 bash[28152]: audit 2026-03-09T16:18:17.269414+0000 mgr.y (mgr.14520) 918 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:18.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:18 vm01 bash[28152]: audit 2026-03-09T16:18:17.269414+0000 mgr.y (mgr.14520) 918 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:20.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:19 vm09 bash[22983]: cluster 2026-03-09T16:18:18.998675+0000 mgr.y (mgr.14520) 919 : cluster [DBG] pgmap v1384: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:20.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:19 vm09 bash[22983]: cluster 2026-03-09T16:18:18.998675+0000 mgr.y (mgr.14520) 919 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:20.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:19 vm01 bash[20728]: cluster 2026-03-09T16:18:18.998675+0000 mgr.y (mgr.14520) 919 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:20.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:19 vm01 bash[20728]: cluster 2026-03-09T16:18:18.998675+0000 mgr.y (mgr.14520) 919 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:20.174 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:19 vm01 bash[28152]: cluster 2026-03-09T16:18:18.998675+0000 mgr.y (mgr.14520) 919 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:20.174 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:19 vm01 bash[28152]: cluster 2026-03-09T16:18:18.998675+0000 mgr.y (mgr.14520) 919 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:22 vm09 bash[22983]: cluster 2026-03-09T16:18:20.999342+0000 mgr.y (mgr.14520) 920 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:22 vm09 bash[22983]: cluster 2026-03-09T16:18:20.999342+0000 mgr.y (mgr.14520) 920 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:22 vm01 bash[20728]: cluster 2026-03-09T16:18:20.999342+0000 mgr.y (mgr.14520) 920 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:22 vm01 bash[20728]: cluster 2026-03-09T16:18:20.999342+0000 mgr.y (mgr.14520) 920 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:22.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:22 vm01 bash[28152]: cluster 2026-03-09T16:18:20.999342+0000 mgr.y (mgr.14520) 920 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:22.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:22 vm01 bash[28152]: cluster 2026-03-09T16:18:20.999342+0000 mgr.y (mgr.14520) 920 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:23.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:18:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:18:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:18:23.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:23 vm09 bash[22983]: 
cluster 2026-03-09T16:18:22.999650+0000 mgr.y (mgr.14520) 921 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:23.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:23 vm09 bash[22983]: cluster 2026-03-09T16:18:22.999650+0000 mgr.y (mgr.14520) 921 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:23.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:23 vm01 bash[20728]: cluster 2026-03-09T16:18:22.999650+0000 mgr.y (mgr.14520) 921 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:23.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:23 vm01 bash[20728]: cluster 2026-03-09T16:18:22.999650+0000 mgr.y (mgr.14520) 921 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:23.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:23 vm01 bash[28152]: cluster 2026-03-09T16:18:22.999650+0000 mgr.y (mgr.14520) 921 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:23.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:23 vm01 bash[28152]: cluster 2026-03-09T16:18:22.999650+0000 mgr.y (mgr.14520) 921 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:26 vm09 bash[22983]: cluster 2026-03-09T16:18:25.000653+0000 mgr.y (mgr.14520) 922 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:26 vm09 bash[22983]: cluster 2026-03-09T16:18:25.000653+0000 mgr.y (mgr.14520) 922 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:26.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:26 vm01 bash[20728]: cluster 2026-03-09T16:18:25.000653+0000 mgr.y (mgr.14520) 922 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:26.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:26 vm01 bash[20728]: cluster 2026-03-09T16:18:25.000653+0000 mgr.y (mgr.14520) 922 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:26.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:26 vm01 bash[28152]: cluster 2026-03-09T16:18:25.000653+0000 mgr.y (mgr.14520) 922 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:26.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:26 vm01 bash[28152]: cluster 2026-03-09T16:18:25.000653+0000 mgr.y (mgr.14520) 922 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:27.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:18:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:18:28.382 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:28 vm09 bash[22983]: cluster 2026-03-09T16:18:27.001050+0000 mgr.y (mgr.14520) 923 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:28.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:28 vm09 bash[22983]: cluster 2026-03-09T16:18:27.001050+0000 mgr.y (mgr.14520) 923 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:28.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:28 vm01 bash[20728]: cluster 2026-03-09T16:18:27.001050+0000 mgr.y (mgr.14520) 923 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:28.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:28 vm01 bash[20728]: cluster 2026-03-09T16:18:27.001050+0000 mgr.y (mgr.14520) 923 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:28.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:28 vm01 bash[28152]: cluster 2026-03-09T16:18:27.001050+0000 mgr.y (mgr.14520) 923 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:28.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:28 vm01 bash[28152]: cluster 2026-03-09T16:18:27.001050+0000 mgr.y (mgr.14520) 923 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:29.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:29 vm09 bash[22983]: audit 2026-03-09T16:18:27.275970+0000 mgr.y (mgr.14520) 924 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:29.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:29 vm09 bash[22983]: audit 2026-03-09T16:18:27.275970+0000 mgr.y (mgr.14520) 924 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:29.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:29 vm01 bash[20728]: audit 2026-03-09T16:18:27.275970+0000 mgr.y (mgr.14520) 924 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:29.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:29 vm01 bash[20728]: audit 2026-03-09T16:18:27.275970+0000 mgr.y (mgr.14520) 924 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:29.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:29 vm01 bash[28152]: audit 2026-03-09T16:18:27.275970+0000 mgr.y (mgr.14520) 924 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:29.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:29 vm01 bash[28152]: audit 2026-03-09T16:18:27.275970+0000 mgr.y (mgr.14520) 924 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:30 vm09 bash[22983]: cluster 2026-03-09T16:18:29.001707+0000 
mgr.y (mgr.14520) 925 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:30 vm09 bash[22983]: cluster 2026-03-09T16:18:29.001707+0000 mgr.y (mgr.14520) 925 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:30 vm09 bash[22983]: audit 2026-03-09T16:18:29.772427+0000 mon.a (mon.0) 3840 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:30.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:30 vm09 bash[22983]: audit 2026-03-09T16:18:29.772427+0000 mon.a (mon.0) 3840 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:30.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:30 vm01 bash[20728]: cluster 2026-03-09T16:18:29.001707+0000 mgr.y (mgr.14520) 925 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:30.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:30 vm01 bash[20728]: cluster 2026-03-09T16:18:29.001707+0000 mgr.y (mgr.14520) 925 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:30.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:30 vm01 bash[20728]: audit 2026-03-09T16:18:29.772427+0000 mon.a (mon.0) 3840 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:30.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:30 vm01 bash[20728]: audit 2026-03-09T16:18:29.772427+0000 mon.a (mon.0) 3840 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:30.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:30 vm01 bash[28152]: cluster 2026-03-09T16:18:29.001707+0000 mgr.y (mgr.14520) 925 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:30.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:30 vm01 bash[28152]: cluster 2026-03-09T16:18:29.001707+0000 mgr.y (mgr.14520) 925 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:30.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:30 vm01 bash[28152]: audit 2026-03-09T16:18:29.772427+0000 mon.a (mon.0) 3840 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:30.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:30 vm01 bash[28152]: audit 2026-03-09T16:18:29.772427+0000 mon.a (mon.0) 3840 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:32 vm09 bash[22983]: cluster 2026-03-09T16:18:31.002284+0000 mgr.y (mgr.14520) 926 : cluster [DBG] pgmap v1390: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:32.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:32 vm09 bash[22983]: cluster 2026-03-09T16:18:31.002284+0000 mgr.y (mgr.14520) 926 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:32.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:32 vm01 bash[20728]: cluster 2026-03-09T16:18:31.002284+0000 mgr.y (mgr.14520) 926 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:32.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:32 vm01 bash[20728]: cluster 2026-03-09T16:18:31.002284+0000 mgr.y (mgr.14520) 926 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:32.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:32 vm01 bash[28152]: cluster 2026-03-09T16:18:31.002284+0000 mgr.y (mgr.14520) 926 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:32.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:32 vm01 bash[28152]: cluster 2026-03-09T16:18:31.002284+0000 mgr.y (mgr.14520) 926 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:33.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:18:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:18:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:18:34.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:34 vm01 bash[20728]: cluster 2026-03-09T16:18:33.002563+0000 mgr.y (mgr.14520) 927 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:34.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:34 vm01 bash[20728]: cluster 2026-03-09T16:18:33.002563+0000 mgr.y (mgr.14520) 927 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:34.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:34 vm01 bash[28152]: cluster 2026-03-09T16:18:33.002563+0000 mgr.y (mgr.14520) 927 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:34.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:34 vm01 bash[28152]: cluster 2026-03-09T16:18:33.002563+0000 mgr.y (mgr.14520) 927 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:34 vm09 bash[22983]: cluster 2026-03-09T16:18:33.002563+0000 mgr.y (mgr.14520) 927 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:34 vm09 bash[22983]: cluster 2026-03-09T16:18:33.002563+0000 mgr.y (mgr.14520) 927 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:36 vm01 bash[20728]: 
cluster 2026-03-09T16:18:35.003394+0000 mgr.y (mgr.14520) 928 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:36 vm01 bash[20728]: cluster 2026-03-09T16:18:35.003394+0000 mgr.y (mgr.14520) 928 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:36.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:36 vm01 bash[28152]: cluster 2026-03-09T16:18:35.003394+0000 mgr.y (mgr.14520) 928 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:36.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:36 vm01 bash[28152]: cluster 2026-03-09T16:18:35.003394+0000 mgr.y (mgr.14520) 928 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:36 vm09 bash[22983]: cluster 2026-03-09T16:18:35.003394+0000 mgr.y (mgr.14520) 928 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:36 vm09 bash[22983]: cluster 2026-03-09T16:18:35.003394+0000 mgr.y (mgr.14520) 928 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:37.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:18:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:18:38.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:38 vm01 bash[20728]: cluster 2026-03-09T16:18:37.003700+0000 mgr.y (mgr.14520) 929 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:38.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:38 vm01 bash[20728]: cluster 2026-03-09T16:18:37.003700+0000 mgr.y (mgr.14520) 929 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:38 vm01 bash[28152]: cluster 2026-03-09T16:18:37.003700+0000 mgr.y (mgr.14520) 929 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:38.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:38 vm01 bash[28152]: cluster 2026-03-09T16:18:37.003700+0000 mgr.y (mgr.14520) 929 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:38 vm09 bash[22983]: cluster 2026-03-09T16:18:37.003700+0000 mgr.y (mgr.14520) 929 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:38 vm09 bash[22983]: cluster 2026-03-09T16:18:37.003700+0000 mgr.y (mgr.14520) 929 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:39.423 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:39 vm01 bash[20728]: audit 2026-03-09T16:18:37.285820+0000 mgr.y (mgr.14520) 930 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:39.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:39 vm01 bash[20728]: audit 2026-03-09T16:18:37.285820+0000 mgr.y (mgr.14520) 930 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:39.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:39 vm01 bash[28152]: audit 2026-03-09T16:18:37.285820+0000 mgr.y (mgr.14520) 930 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:39.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:39 vm01 bash[28152]: audit 2026-03-09T16:18:37.285820+0000 mgr.y (mgr.14520) 930 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:39 vm09 bash[22983]: audit 2026-03-09T16:18:37.285820+0000 mgr.y (mgr.14520) 930 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:39 vm09 bash[22983]: audit 2026-03-09T16:18:37.285820+0000 mgr.y (mgr.14520) 930 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:40.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:40 vm01 bash[20728]: cluster 2026-03-09T16:18:39.004253+0000 mgr.y (mgr.14520) 931 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:40.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:40 vm01 bash[20728]: cluster 2026-03-09T16:18:39.004253+0000 mgr.y (mgr.14520) 931 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:40.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:40 vm01 bash[28152]: cluster 2026-03-09T16:18:39.004253+0000 mgr.y (mgr.14520) 931 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:40.424 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:40 vm01 bash[28152]: cluster 2026-03-09T16:18:39.004253+0000 mgr.y (mgr.14520) 931 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:40 vm09 bash[22983]: cluster 2026-03-09T16:18:39.004253+0000 mgr.y (mgr.14520) 931 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:40 vm09 bash[22983]: cluster 2026-03-09T16:18:39.004253+0000 mgr.y (mgr.14520) 931 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:42 vm09 bash[22983]: cluster 2026-03-09T16:18:41.004794+0000 
mgr.y (mgr.14520) 932 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:42 vm09 bash[22983]: cluster 2026-03-09T16:18:41.004794+0000 mgr.y (mgr.14520) 932 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:42.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:42 vm01 bash[20728]: cluster 2026-03-09T16:18:41.004794+0000 mgr.y (mgr.14520) 932 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:42.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:42 vm01 bash[20728]: cluster 2026-03-09T16:18:41.004794+0000 mgr.y (mgr.14520) 932 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:42.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:42 vm01 bash[28152]: cluster 2026-03-09T16:18:41.004794+0000 mgr.y (mgr.14520) 932 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:42.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:42 vm01 bash[28152]: cluster 2026-03-09T16:18:41.004794+0000 mgr.y (mgr.14520) 932 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:43.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:18:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:18:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:18:43.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:43 vm09 bash[22983]: cluster 2026-03-09T16:18:43.005804+0000 mgr.y (mgr.14520) 933 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:43.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:43 vm09 bash[22983]: cluster 2026-03-09T16:18:43.005804+0000 mgr.y (mgr.14520) 933 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:43.924 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:43 vm01 bash[28152]: cluster 2026-03-09T16:18:43.005804+0000 mgr.y (mgr.14520) 933 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:43.924 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:43 vm01 bash[28152]: cluster 2026-03-09T16:18:43.005804+0000 mgr.y (mgr.14520) 933 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:43.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:43 vm01 bash[20728]: cluster 2026-03-09T16:18:43.005804+0000 mgr.y (mgr.14520) 933 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:43.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:43 vm01 bash[20728]: cluster 2026-03-09T16:18:43.005804+0000 mgr.y (mgr.14520) 933 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:45.173 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:44 vm01 bash[28152]: audit 2026-03-09T16:18:44.778459+0000 mon.a (mon.0) 3841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:45.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:44 vm01 bash[28152]: audit 2026-03-09T16:18:44.778459+0000 mon.a (mon.0) 3841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:45.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:44 vm01 bash[20728]: audit 2026-03-09T16:18:44.778459+0000 mon.a (mon.0) 3841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:45.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:44 vm01 bash[20728]: audit 2026-03-09T16:18:44.778459+0000 mon.a (mon.0) 3841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:44 vm09 bash[22983]: audit 2026-03-09T16:18:44.778459+0000 mon.a (mon.0) 3841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:45.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:44 vm09 bash[22983]: audit 2026-03-09T16:18:44.778459+0000 mon.a (mon.0) 3841 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:18:46.174 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:45 vm01 bash[28152]: cluster 2026-03-09T16:18:45.006460+0000 mgr.y (mgr.14520) 934 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:46.174 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:45 vm01 bash[28152]: cluster 2026-03-09T16:18:45.006460+0000 mgr.y (mgr.14520) 934 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:46.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:45 vm01 bash[20728]: cluster 2026-03-09T16:18:45.006460+0000 mgr.y (mgr.14520) 934 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:46.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:45 vm01 bash[20728]: cluster 2026-03-09T16:18:45.006460+0000 mgr.y (mgr.14520) 934 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:45 vm09 bash[22983]: cluster 2026-03-09T16:18:45.006460+0000 mgr.y (mgr.14520) 934 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:45 vm09 bash[22983]: cluster 2026-03-09T16:18:45.006460+0000 mgr.y (mgr.14520) 934 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:47.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 
16:18:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:18:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:48 vm09 bash[22983]: cluster 2026-03-09T16:18:47.006752+0000 mgr.y (mgr.14520) 935 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:48 vm09 bash[22983]: cluster 2026-03-09T16:18:47.006752+0000 mgr.y (mgr.14520) 935 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:48.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:48 vm01 bash[28152]: cluster 2026-03-09T16:18:47.006752+0000 mgr.y (mgr.14520) 935 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:48.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:48 vm01 bash[28152]: cluster 2026-03-09T16:18:47.006752+0000 mgr.y (mgr.14520) 935 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:48 vm01 bash[20728]: cluster 2026-03-09T16:18:47.006752+0000 mgr.y (mgr.14520) 935 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:48 vm01 bash[20728]: cluster 2026-03-09T16:18:47.006752+0000 mgr.y (mgr.14520) 935 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:49.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:49 vm09 bash[22983]: audit 2026-03-09T16:18:47.296639+0000 mgr.y (mgr.14520) 936 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:49.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:49 vm09 bash[22983]: audit 2026-03-09T16:18:47.296639+0000 mgr.y (mgr.14520) 936 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:49.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:49 vm09 bash[22983]: cluster 2026-03-09T16:18:49.007375+0000 mgr.y (mgr.14520) 937 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:49.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:49 vm09 bash[22983]: cluster 2026-03-09T16:18:49.007375+0000 mgr.y (mgr.14520) 937 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:49.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:49 vm01 bash[28152]: audit 2026-03-09T16:18:47.296639+0000 mgr.y (mgr.14520) 936 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:49.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:49 vm01 bash[28152]: audit 2026-03-09T16:18:47.296639+0000 mgr.y (mgr.14520) 936 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:49.925 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:49 vm01 bash[28152]: cluster 2026-03-09T16:18:49.007375+0000 mgr.y (mgr.14520) 937 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:49.925 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:49 vm01 bash[28152]: cluster 2026-03-09T16:18:49.007375+0000 mgr.y (mgr.14520) 937 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:49.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:49 vm01 bash[20728]: audit 2026-03-09T16:18:47.296639+0000 mgr.y (mgr.14520) 936 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:49.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:49 vm01 bash[20728]: audit 2026-03-09T16:18:47.296639+0000 mgr.y (mgr.14520) 936 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:49.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:49 vm01 bash[20728]: cluster 2026-03-09T16:18:49.007375+0000 mgr.y (mgr.14520) 937 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:49.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:49 vm01 bash[20728]: cluster 2026-03-09T16:18:49.007375+0000 mgr.y (mgr.14520) 937 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:52 vm09 bash[22983]: cluster 2026-03-09T16:18:51.007960+0000 mgr.y (mgr.14520) 938 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:52 vm09 bash[22983]: cluster 2026-03-09T16:18:51.007960+0000 mgr.y (mgr.14520) 938 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:52.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:52 vm01 bash[28152]: cluster 2026-03-09T16:18:51.007960+0000 mgr.y (mgr.14520) 938 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:52.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:52 vm01 bash[28152]: cluster 2026-03-09T16:18:51.007960+0000 mgr.y (mgr.14520) 938 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:52.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:52 vm01 bash[20728]: cluster 2026-03-09T16:18:51.007960+0000 mgr.y (mgr.14520) 938 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:52.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:52 vm01 bash[20728]: cluster 2026-03-09T16:18:51.007960+0000 mgr.y (mgr.14520) 938 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:53.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:18:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - 
[09/Mar/2026:16:18:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:18:53.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:52.299157+0000 mon.a (mon.0) 3842 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:52.299157+0000 mon.a (mon.0) 3842 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:52.670384+0000 mon.a (mon.0) 3843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:52.670384+0000 mon.a (mon.0) 3843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:52.687596+0000 mon.a (mon.0) 3844 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:52.687596+0000 mon.a (mon.0) 3844 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: cluster 2026-03-09T16:18:53.008374+0000 mgr.y (mgr.14520) 939 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: cluster 2026-03-09T16:18:53.008374+0000 mgr.y (mgr.14520) 939 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:53.054867+0000 mon.a (mon.0) 3845 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:53.054867+0000 mon.a (mon.0) 3845 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:53.055709+0000 mon.a (mon.0) 3846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:53.055709+0000 mon.a (mon.0) 3846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:53.090615+0000 mon.a (mon.0) 3847 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:53 vm01 bash[28152]: audit 2026-03-09T16:18:53.090615+0000 mon.a (mon.0) 3847 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: audit 2026-03-09T16:18:52.299157+0000 mon.a (mon.0) 3842 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: audit 2026-03-09T16:18:52.299157+0000 mon.a (mon.0) 3842 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: audit 2026-03-09T16:18:52.670384+0000 mon.a (mon.0) 3843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: audit 2026-03-09T16:18:52.670384+0000 mon.a (mon.0) 3843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: audit 2026-03-09T16:18:52.687596+0000 mon.a (mon.0) 3844 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: audit 2026-03-09T16:18:52.687596+0000 mon.a (mon.0) 3844 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: cluster 2026-03-09T16:18:53.008374+0000 mgr.y (mgr.14520) 939 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: cluster 2026-03-09T16:18:53.008374+0000 mgr.y (mgr.14520) 939 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: audit 2026-03-09T16:18:53.054867+0000 mon.a (mon.0) 3845 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: audit 2026-03-09T16:18:53.054867+0000 mon.a (mon.0) 3845 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: audit 2026-03-09T16:18:53.055709+0000 mon.a (mon.0) 3846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: audit 2026-03-09T16:18:53.055709+0000 mon.a (mon.0) 3846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 
bash[20728]: audit 2026-03-09T16:18:53.090615+0000 mon.a (mon.0) 3847 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:53 vm01 bash[20728]: audit 2026-03-09T16:18:53.090615+0000 mon.a (mon.0) 3847 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:52.299157+0000 mon.a (mon.0) 3842 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:52.299157+0000 mon.a (mon.0) 3842 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:52.670384+0000 mon.a (mon.0) 3843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:52.670384+0000 mon.a (mon.0) 3843 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:52.687596+0000 mon.a (mon.0) 3844 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:52.687596+0000 mon.a (mon.0) 3844 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: cluster 2026-03-09T16:18:53.008374+0000 mgr.y (mgr.14520) 939 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: cluster 2026-03-09T16:18:53.008374+0000 mgr.y (mgr.14520) 939 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:53.054867+0000 mon.a (mon.0) 3845 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:53.054867+0000 mon.a (mon.0) 3845 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:53.055709+0000 mon.a (mon.0) 3846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:53.055709+0000 mon.a (mon.0) 3846 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:53.090615+0000 mon.a (mon.0) 3847 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:53 vm09 bash[22983]: audit 2026-03-09T16:18:53.090615+0000 mon.a (mon.0) 3847 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:18:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:56 vm09 bash[22983]: cluster 2026-03-09T16:18:55.009001+0000 mgr.y (mgr.14520) 940 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:56 vm09 bash[22983]: cluster 2026-03-09T16:18:55.009001+0000 mgr.y (mgr.14520) 940 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:56.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:56 vm01 bash[28152]: cluster 2026-03-09T16:18:55.009001+0000 mgr.y (mgr.14520) 940 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:56.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:56 vm01 bash[28152]: cluster 2026-03-09T16:18:55.009001+0000 mgr.y (mgr.14520) 940 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:56.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:56 vm01 bash[20728]: cluster 2026-03-09T16:18:55.009001+0000 mgr.y (mgr.14520) 940 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:56.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:56 vm01 bash[20728]: cluster 2026-03-09T16:18:55.009001+0000 mgr.y (mgr.14520) 940 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:18:57.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:18:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:18:57.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:57 vm09 bash[22983]: cluster 2026-03-09T16:18:57.009379+0000 mgr.y (mgr.14520) 941 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:57.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:57 vm09 bash[22983]: cluster 2026-03-09T16:18:57.009379+0000 mgr.y (mgr.14520) 941 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:57.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:57 vm01 bash[28152]: cluster 2026-03-09T16:18:57.009379+0000 mgr.y (mgr.14520) 941 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:57.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:57 vm01 bash[28152]: cluster 2026-03-09T16:18:57.009379+0000 mgr.y (mgr.14520) 941 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T16:18:57.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:57 vm01 bash[20728]: cluster 2026-03-09T16:18:57.009379+0000 mgr.y (mgr.14520) 941 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:57.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:57 vm01 bash[20728]: cluster 2026-03-09T16:18:57.009379+0000 mgr.y (mgr.14520) 941 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:58.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:58 vm09 bash[22983]: audit 2026-03-09T16:18:57.299683+0000 mgr.y (mgr.14520) 942 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:58.634 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:58 vm09 bash[22983]: audit 2026-03-09T16:18:57.299683+0000 mgr.y (mgr.14520) 942 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:58.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:58 vm01 bash[28152]: audit 2026-03-09T16:18:57.299683+0000 mgr.y (mgr.14520) 942 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:58.675 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:58 vm01 bash[28152]: audit 2026-03-09T16:18:57.299683+0000 mgr.y (mgr.14520) 942 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:58.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:58 vm01 bash[20728]: audit 2026-03-09T16:18:57.299683+0000 mgr.y (mgr.14520) 942 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:58.675 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:58 vm01 bash[20728]: audit 2026-03-09T16:18:57.299683+0000 mgr.y (mgr.14520) 942 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:18:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:59 vm09 bash[22983]: cluster 2026-03-09T16:18:59.009935+0000 mgr.y (mgr.14520) 943 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:59.883 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:18:59 vm09 bash[22983]: cluster 2026-03-09T16:18:59.009935+0000 mgr.y (mgr.14520) 943 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:59.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:59 vm01 bash[28152]: cluster 2026-03-09T16:18:59.009935+0000 mgr.y (mgr.14520) 943 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:59.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:18:59 vm01 bash[28152]: cluster 2026-03-09T16:18:59.009935+0000 mgr.y (mgr.14520) 943 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:59.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:59 vm01 bash[20728]: cluster 
2026-03-09T16:18:59.009935+0000 mgr.y (mgr.14520) 943 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:18:59.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:18:59 vm01 bash[20728]: cluster 2026-03-09T16:18:59.009935+0000 mgr.y (mgr.14520) 943 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:00.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:00 vm01 bash[20728]: audit 2026-03-09T16:18:59.784391+0000 mon.a (mon.0) 3848 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:00.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:00 vm01 bash[20728]: audit 2026-03-09T16:18:59.784391+0000 mon.a (mon.0) 3848 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:00.924 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:00 vm01 bash[28152]: audit 2026-03-09T16:18:59.784391+0000 mon.a (mon.0) 3848 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:00.924 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:00 vm01 bash[28152]: audit 2026-03-09T16:18:59.784391+0000 mon.a (mon.0) 3848 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:01.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:00 vm09 bash[22983]: audit 2026-03-09T16:18:59.784391+0000 mon.a (mon.0) 3848 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:01.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:00 vm09 bash[22983]: audit 2026-03-09T16:18:59.784391+0000 mon.a (mon.0) 3848 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:02.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:01 vm01 bash[20728]: cluster 2026-03-09T16:19:01.011171+0000 mgr.y (mgr.14520) 944 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:02.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:01 vm01 bash[20728]: cluster 2026-03-09T16:19:01.011171+0000 mgr.y (mgr.14520) 944 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:02.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:01 vm01 bash[28152]: cluster 2026-03-09T16:19:01.011171+0000 mgr.y (mgr.14520) 944 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:02.174 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:01 vm01 bash[28152]: cluster 2026-03-09T16:19:01.011171+0000 mgr.y (mgr.14520) 944 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:01 vm09 bash[22983]: cluster 2026-03-09T16:19:01.011171+0000 mgr.y (mgr.14520) 944 : 
cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:02.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:01 vm09 bash[22983]: cluster 2026-03-09T16:19:01.011171+0000 mgr.y (mgr.14520) 944 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:03.174 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:19:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:19:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:19:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:04 vm09 bash[22983]: cluster 2026-03-09T16:19:03.011699+0000 mgr.y (mgr.14520) 945 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:04 vm09 bash[22983]: cluster 2026-03-09T16:19:03.011699+0000 mgr.y (mgr.14520) 945 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:04.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:04 vm01 bash[20728]: cluster 2026-03-09T16:19:03.011699+0000 mgr.y (mgr.14520) 945 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:04.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:04 vm01 bash[20728]: cluster 2026-03-09T16:19:03.011699+0000 mgr.y (mgr.14520) 945 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:04.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:04 vm01 bash[28152]: cluster 2026-03-09T16:19:03.011699+0000 mgr.y (mgr.14520) 945 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:04.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:04 vm01 bash[28152]: cluster 2026-03-09T16:19:03.011699+0000 mgr.y (mgr.14520) 945 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:05.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:05 vm09 bash[22983]: cluster 2026-03-09T16:19:05.012471+0000 mgr.y (mgr.14520) 946 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:05.884 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:05 vm09 bash[22983]: cluster 2026-03-09T16:19:05.012471+0000 mgr.y (mgr.14520) 946 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:05.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:05 vm01 bash[20728]: cluster 2026-03-09T16:19:05.012471+0000 mgr.y (mgr.14520) 946 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:05.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:05 vm01 bash[20728]: cluster 2026-03-09T16:19:05.012471+0000 mgr.y (mgr.14520) 946 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:05.923 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:05 vm01 bash[28152]: cluster 2026-03-09T16:19:05.012471+0000 mgr.y (mgr.14520) 946 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:05.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:05 vm01 bash[28152]: cluster 2026-03-09T16:19:05.012471+0000 mgr.y (mgr.14520) 946 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:07.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:19:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:19:08.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:08 vm01 bash[20728]: cluster 2026-03-09T16:19:07.012984+0000 mgr.y (mgr.14520) 947 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:08.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:08 vm01 bash[20728]: cluster 2026-03-09T16:19:07.012984+0000 mgr.y (mgr.14520) 947 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:08.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:08 vm01 bash[28152]: cluster 2026-03-09T16:19:07.012984+0000 mgr.y (mgr.14520) 947 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:08.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:08 vm01 bash[28152]: cluster 2026-03-09T16:19:07.012984+0000 mgr.y (mgr.14520) 947 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:08 vm09 bash[22983]: cluster 2026-03-09T16:19:07.012984+0000 mgr.y (mgr.14520) 947 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:08.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:08 vm09 bash[22983]: cluster 2026-03-09T16:19:07.012984+0000 mgr.y (mgr.14520) 947 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:09 vm09 bash[22983]: audit 2026-03-09T16:19:07.308362+0000 mgr.y (mgr.14520) 948 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:09 vm09 bash[22983]: audit 2026-03-09T16:19:07.308362+0000 mgr.y (mgr.14520) 948 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:09.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:09 vm01 bash[20728]: audit 2026-03-09T16:19:07.308362+0000 mgr.y (mgr.14520) 948 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:09.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:09 vm01 bash[20728]: audit 2026-03-09T16:19:07.308362+0000 mgr.y (mgr.14520) 948 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-09T16:19:09.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:09 vm01 bash[28152]: audit 2026-03-09T16:19:07.308362+0000 mgr.y (mgr.14520) 948 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:09.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:09 vm01 bash[28152]: audit 2026-03-09T16:19:07.308362+0000 mgr.y (mgr.14520) 948 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:10 vm09 bash[22983]: cluster 2026-03-09T16:19:09.014279+0000 mgr.y (mgr.14520) 949 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:10 vm09 bash[22983]: cluster 2026-03-09T16:19:09.014279+0000 mgr.y (mgr.14520) 949 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:10.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:10 vm01 bash[28152]: cluster 2026-03-09T16:19:09.014279+0000 mgr.y (mgr.14520) 949 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:10.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:10 vm01 bash[28152]: cluster 2026-03-09T16:19:09.014279+0000 mgr.y (mgr.14520) 949 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:10.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:10 vm01 bash[20728]: cluster 2026-03-09T16:19:09.014279+0000 mgr.y (mgr.14520) 949 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:10.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:10 vm01 bash[20728]: cluster 2026-03-09T16:19:09.014279+0000 mgr.y (mgr.14520) 949 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:11.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:11 vm01 bash[28152]: cluster 2026-03-09T16:19:11.014770+0000 mgr.y (mgr.14520) 950 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:11.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:11 vm01 bash[28152]: cluster 2026-03-09T16:19:11.014770+0000 mgr.y (mgr.14520) 950 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:11.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:11 vm01 bash[20728]: cluster 2026-03-09T16:19:11.014770+0000 mgr.y (mgr.14520) 950 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:11.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:11 vm01 bash[20728]: cluster 2026-03-09T16:19:11.014770+0000 mgr.y (mgr.14520) 950 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:11.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:11 vm09 
bash[22983]: cluster 2026-03-09T16:19:11.014770+0000 mgr.y (mgr.14520) 950 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:11.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:11 vm09 bash[22983]: cluster 2026-03-09T16:19:11.014770+0000 mgr.y (mgr.14520) 950 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:13.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:19:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:19:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:19:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:14 vm09 bash[22983]: cluster 2026-03-09T16:19:13.015219+0000 mgr.y (mgr.14520) 951 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:19:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:14 vm09 bash[22983]: cluster 2026-03-09T16:19:13.015219+0000 mgr.y (mgr.14520) 951 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:19:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:14 vm01 bash[28152]: cluster 2026-03-09T16:19:13.015219+0000 mgr.y (mgr.14520) 951 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:19:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:14 vm01 bash[28152]: cluster 2026-03-09T16:19:13.015219+0000 mgr.y (mgr.14520) 951 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:19:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:14 vm01 bash[20728]: cluster 2026-03-09T16:19:13.015219+0000 mgr.y (mgr.14520) 951 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:19:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:14 vm01 bash[20728]: cluster 2026-03-09T16:19:13.015219+0000 mgr.y (mgr.14520) 951 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:19:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:15 vm09 bash[22983]: audit 2026-03-09T16:19:14.790948+0000 mon.a (mon.0) 3849 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:15 vm09 bash[22983]: audit 2026-03-09T16:19:14.790948+0000 mon.a (mon.0) 3849 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:15 vm01 bash[28152]: audit 2026-03-09T16:19:14.790948+0000 mon.a (mon.0) 3849 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:15 vm01 bash[28152]: audit 2026-03-09T16:19:14.790948+0000 mon.a (mon.0) 3849 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:15.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:15 vm01 bash[20728]: audit 2026-03-09T16:19:14.790948+0000 mon.a (mon.0) 3849 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:15.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:15 vm01 bash[20728]: audit 2026-03-09T16:19:14.790948+0000 mon.a (mon.0) 3849 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:16 vm09 bash[22983]: cluster 2026-03-09T16:19:15.015917+0000 mgr.y (mgr.14520) 952 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:16 vm09 bash[22983]: cluster 2026-03-09T16:19:15.015917+0000 mgr.y (mgr.14520) 952 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:16 vm01 bash[28152]: cluster 2026-03-09T16:19:15.015917+0000 mgr.y (mgr.14520) 952 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:16 vm01 bash[28152]: cluster 2026-03-09T16:19:15.015917+0000 mgr.y (mgr.14520) 952 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:16 vm01 bash[20728]: cluster 2026-03-09T16:19:15.015917+0000 mgr.y (mgr.14520) 952 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:16 vm01 bash[20728]: cluster 2026-03-09T16:19:15.015917+0000 mgr.y (mgr.14520) 952 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:17.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:19:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:19:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:18 vm09 bash[22983]: cluster 2026-03-09T16:19:17.016207+0000 mgr.y (mgr.14520) 953 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:18.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:18 vm09 bash[22983]: cluster 2026-03-09T16:19:17.016207+0000 mgr.y (mgr.14520) 953 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:18.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:18 vm01 bash[28152]: cluster 2026-03-09T16:19:17.016207+0000 mgr.y (mgr.14520) 953 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:18.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:18 vm01 bash[28152]: cluster 2026-03-09T16:19:17.016207+0000 mgr.y (mgr.14520) 953 : cluster [DBG] 
pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:18.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:18 vm01 bash[20728]: cluster 2026-03-09T16:19:17.016207+0000 mgr.y (mgr.14520) 953 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:18.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:18 vm01 bash[20728]: cluster 2026-03-09T16:19:17.016207+0000 mgr.y (mgr.14520) 953 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:19.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:19 vm09 bash[22983]: audit 2026-03-09T16:19:17.311459+0000 mgr.y (mgr.14520) 954 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:19.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:19 vm09 bash[22983]: audit 2026-03-09T16:19:17.311459+0000 mgr.y (mgr.14520) 954 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:19.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:19 vm01 bash[28152]: audit 2026-03-09T16:19:17.311459+0000 mgr.y (mgr.14520) 954 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:19.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:19 vm01 bash[28152]: audit 2026-03-09T16:19:17.311459+0000 mgr.y (mgr.14520) 954 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:19.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:19 vm01 bash[20728]: audit 2026-03-09T16:19:17.311459+0000 mgr.y (mgr.14520) 954 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:19.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:19 vm01 bash[20728]: audit 2026-03-09T16:19:17.311459+0000 mgr.y (mgr.14520) 954 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:20 vm09 bash[22983]: cluster 2026-03-09T16:19:19.016762+0000 mgr.y (mgr.14520) 955 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:20.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:20 vm09 bash[22983]: cluster 2026-03-09T16:19:19.016762+0000 mgr.y (mgr.14520) 955 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:20.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:20 vm01 bash[28152]: cluster 2026-03-09T16:19:19.016762+0000 mgr.y (mgr.14520) 955 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:20.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:20 vm01 bash[28152]: cluster 2026-03-09T16:19:19.016762+0000 mgr.y (mgr.14520) 955 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T16:19:20.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:20 vm01 bash[20728]: cluster 2026-03-09T16:19:19.016762+0000 mgr.y (mgr.14520) 955 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:20.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:20 vm01 bash[20728]: cluster 2026-03-09T16:19:19.016762+0000 mgr.y (mgr.14520) 955 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:22.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:22 vm01 bash[28152]: cluster 2026-03-09T16:19:21.017373+0000 mgr.y (mgr.14520) 956 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:22.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:22 vm01 bash[28152]: cluster 2026-03-09T16:19:21.017373+0000 mgr.y (mgr.14520) 956 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:22.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:22 vm01 bash[20728]: cluster 2026-03-09T16:19:21.017373+0000 mgr.y (mgr.14520) 956 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:22.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:22 vm01 bash[20728]: cluster 2026-03-09T16:19:21.017373+0000 mgr.y (mgr.14520) 956 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:22 vm09 bash[22983]: cluster 2026-03-09T16:19:21.017373+0000 mgr.y (mgr.14520) 956 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:22 vm09 bash[22983]: cluster 2026-03-09T16:19:21.017373+0000 mgr.y (mgr.14520) 956 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:23.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:19:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:19:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:19:24.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:24 vm01 bash[28152]: cluster 2026-03-09T16:19:23.017647+0000 mgr.y (mgr.14520) 957 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:24.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:24 vm01 bash[28152]: cluster 2026-03-09T16:19:23.017647+0000 mgr.y (mgr.14520) 957 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:24.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:24 vm01 bash[20728]: cluster 2026-03-09T16:19:23.017647+0000 mgr.y (mgr.14520) 957 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:24.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:24 vm01 bash[20728]: cluster 2026-03-09T16:19:23.017647+0000 mgr.y (mgr.14520) 957 : cluster [DBG] pgmap 
v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:24 vm09 bash[22983]: cluster 2026-03-09T16:19:23.017647+0000 mgr.y (mgr.14520) 957 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:24 vm09 bash[22983]: cluster 2026-03-09T16:19:23.017647+0000 mgr.y (mgr.14520) 957 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:26.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:26 vm01 bash[28152]: cluster 2026-03-09T16:19:25.018287+0000 mgr.y (mgr.14520) 958 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:26.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:26 vm01 bash[28152]: cluster 2026-03-09T16:19:25.018287+0000 mgr.y (mgr.14520) 958 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:26.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:26 vm01 bash[20728]: cluster 2026-03-09T16:19:25.018287+0000 mgr.y (mgr.14520) 958 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:26.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:26 vm01 bash[20728]: cluster 2026-03-09T16:19:25.018287+0000 mgr.y (mgr.14520) 958 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:26 vm09 bash[22983]: cluster 2026-03-09T16:19:25.018287+0000 mgr.y (mgr.14520) 958 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:26 vm09 bash[22983]: cluster 2026-03-09T16:19:25.018287+0000 mgr.y (mgr.14520) 958 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:27.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:19:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:19:28.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:28 vm01 bash[28152]: cluster 2026-03-09T16:19:27.018551+0000 mgr.y (mgr.14520) 959 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:28.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:28 vm01 bash[28152]: cluster 2026-03-09T16:19:27.018551+0000 mgr.y (mgr.14520) 959 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:28.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:28 vm01 bash[20728]: cluster 2026-03-09T16:19:27.018551+0000 mgr.y (mgr.14520) 959 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:28.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:28 vm01 bash[20728]: cluster 
2026-03-09T16:19:27.018551+0000 mgr.y (mgr.14520) 959 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:28 vm09 bash[22983]: cluster 2026-03-09T16:19:27.018551+0000 mgr.y (mgr.14520) 959 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:28 vm09 bash[22983]: cluster 2026-03-09T16:19:27.018551+0000 mgr.y (mgr.14520) 959 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:29 vm09 bash[22983]: audit 2026-03-09T16:19:27.321625+0000 mgr.y (mgr.14520) 960 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:29 vm09 bash[22983]: audit 2026-03-09T16:19:27.321625+0000 mgr.y (mgr.14520) 960 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:29.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:29 vm01 bash[28152]: audit 2026-03-09T16:19:27.321625+0000 mgr.y (mgr.14520) 960 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:29.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:29 vm01 bash[28152]: audit 2026-03-09T16:19:27.321625+0000 mgr.y (mgr.14520) 960 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:29 vm01 bash[20728]: audit 2026-03-09T16:19:27.321625+0000 mgr.y (mgr.14520) 960 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:29 vm01 bash[20728]: audit 2026-03-09T16:19:27.321625+0000 mgr.y (mgr.14520) 960 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:30 vm09 bash[22983]: cluster 2026-03-09T16:19:29.018964+0000 mgr.y (mgr.14520) 961 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:30 vm09 bash[22983]: cluster 2026-03-09T16:19:29.018964+0000 mgr.y (mgr.14520) 961 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:30 vm09 bash[22983]: audit 2026-03-09T16:19:29.796922+0000 mon.a (mon.0) 3850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:30 vm09 bash[22983]: audit 2026-03-09T16:19:29.796922+0000 mon.a (mon.0) 3850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:30 vm01 bash[20728]: cluster 2026-03-09T16:19:29.018964+0000 mgr.y (mgr.14520) 961 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:30 vm01 bash[20728]: cluster 2026-03-09T16:19:29.018964+0000 mgr.y (mgr.14520) 961 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:30 vm01 bash[20728]: audit 2026-03-09T16:19:29.796922+0000 mon.a (mon.0) 3850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:30 vm01 bash[20728]: audit 2026-03-09T16:19:29.796922+0000 mon.a (mon.0) 3850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:30 vm01 bash[28152]: cluster 2026-03-09T16:19:29.018964+0000 mgr.y (mgr.14520) 961 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:30 vm01 bash[28152]: cluster 2026-03-09T16:19:29.018964+0000 mgr.y (mgr.14520) 961 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:30 vm01 bash[28152]: audit 2026-03-09T16:19:29.796922+0000 mon.a (mon.0) 3850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:30 vm01 bash[28152]: audit 2026-03-09T16:19:29.796922+0000 mon.a (mon.0) 3850 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:31 vm09 bash[22983]: cluster 2026-03-09T16:19:31.019434+0000 mgr.y (mgr.14520) 962 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:31 vm09 bash[22983]: cluster 2026-03-09T16:19:31.019434+0000 mgr.y (mgr.14520) 962 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:31.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:31 vm01 bash[20728]: cluster 2026-03-09T16:19:31.019434+0000 mgr.y (mgr.14520) 962 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:31.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:31 vm01 bash[20728]: cluster 2026-03-09T16:19:31.019434+0000 mgr.y (mgr.14520) 962 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-09T16:19:31.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:31 vm01 bash[28152]: cluster 2026-03-09T16:19:31.019434+0000 mgr.y (mgr.14520) 962 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:31.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:31 vm01 bash[28152]: cluster 2026-03-09T16:19:31.019434+0000 mgr.y (mgr.14520) 962 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:33.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:19:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:19:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:19:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:34 vm09 bash[22983]: cluster 2026-03-09T16:19:33.019723+0000 mgr.y (mgr.14520) 963 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:34 vm09 bash[22983]: cluster 2026-03-09T16:19:33.019723+0000 mgr.y (mgr.14520) 963 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:34.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:34 vm01 bash[20728]: cluster 2026-03-09T16:19:33.019723+0000 mgr.y (mgr.14520) 963 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:34.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:34 vm01 bash[20728]: cluster 2026-03-09T16:19:33.019723+0000 mgr.y (mgr.14520) 963 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:34.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:34 vm01 bash[28152]: cluster 2026-03-09T16:19:33.019723+0000 mgr.y (mgr.14520) 963 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:34.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:34 vm01 bash[28152]: cluster 2026-03-09T16:19:33.019723+0000 mgr.y (mgr.14520) 963 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:36 vm09 bash[22983]: cluster 2026-03-09T16:19:35.020323+0000 mgr.y (mgr.14520) 964 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:36 vm09 bash[22983]: cluster 2026-03-09T16:19:35.020323+0000 mgr.y (mgr.14520) 964 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:36 vm01 bash[20728]: cluster 2026-03-09T16:19:35.020323+0000 mgr.y (mgr.14520) 964 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:36 vm01 bash[20728]: cluster 2026-03-09T16:19:35.020323+0000 mgr.y (mgr.14520) 964 : cluster [DBG] 
pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:36.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:36 vm01 bash[28152]: cluster 2026-03-09T16:19:35.020323+0000 mgr.y (mgr.14520) 964 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:36.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:36 vm01 bash[28152]: cluster 2026-03-09T16:19:35.020323+0000 mgr.y (mgr.14520) 964 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:37.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:19:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:19:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:38 vm09 bash[22983]: cluster 2026-03-09T16:19:37.020641+0000 mgr.y (mgr.14520) 965 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:38 vm09 bash[22983]: cluster 2026-03-09T16:19:37.020641+0000 mgr.y (mgr.14520) 965 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:38.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:38 vm01 bash[20728]: cluster 2026-03-09T16:19:37.020641+0000 mgr.y (mgr.14520) 965 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:38.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:38 vm01 bash[20728]: cluster 2026-03-09T16:19:37.020641+0000 mgr.y (mgr.14520) 965 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:38.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:38 vm01 bash[28152]: cluster 2026-03-09T16:19:37.020641+0000 mgr.y (mgr.14520) 965 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:38.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:38 vm01 bash[28152]: cluster 2026-03-09T16:19:37.020641+0000 mgr.y (mgr.14520) 965 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:39 vm09 bash[22983]: audit 2026-03-09T16:19:37.332172+0000 mgr.y (mgr.14520) 966 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:39 vm09 bash[22983]: audit 2026-03-09T16:19:37.332172+0000 mgr.y (mgr.14520) 966 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:39.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:39 vm01 bash[20728]: audit 2026-03-09T16:19:37.332172+0000 mgr.y (mgr.14520) 966 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:39.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:39 vm01 bash[20728]: audit 
2026-03-09T16:19:37.332172+0000 mgr.y (mgr.14520) 966 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:39.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:39 vm01 bash[28152]: audit 2026-03-09T16:19:37.332172+0000 mgr.y (mgr.14520) 966 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:39.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:39 vm01 bash[28152]: audit 2026-03-09T16:19:37.332172+0000 mgr.y (mgr.14520) 966 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:40.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:40 vm01 bash[20728]: cluster 2026-03-09T16:19:39.021122+0000 mgr.y (mgr.14520) 967 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:40.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:40 vm01 bash[20728]: cluster 2026-03-09T16:19:39.021122+0000 mgr.y (mgr.14520) 967 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:40.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:40 vm01 bash[28152]: cluster 2026-03-09T16:19:39.021122+0000 mgr.y (mgr.14520) 967 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:40.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:40 vm01 bash[28152]: cluster 2026-03-09T16:19:39.021122+0000 mgr.y (mgr.14520) 967 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:40 vm09 bash[22983]: cluster 2026-03-09T16:19:39.021122+0000 mgr.y (mgr.14520) 967 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:40 vm09 bash[22983]: cluster 2026-03-09T16:19:39.021122+0000 mgr.y (mgr.14520) 967 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:42 vm09 bash[22983]: cluster 2026-03-09T16:19:41.021723+0000 mgr.y (mgr.14520) 968 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:42 vm09 bash[22983]: cluster 2026-03-09T16:19:41.021723+0000 mgr.y (mgr.14520) 968 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:42.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:42 vm01 bash[20728]: cluster 2026-03-09T16:19:41.021723+0000 mgr.y (mgr.14520) 968 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:42.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:42 vm01 bash[20728]: cluster 2026-03-09T16:19:41.021723+0000 mgr.y (mgr.14520) 968 : cluster [DBG] pgmap v1425: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:42.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:42 vm01 bash[28152]: cluster 2026-03-09T16:19:41.021723+0000 mgr.y (mgr.14520) 968 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:42.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:42 vm01 bash[28152]: cluster 2026-03-09T16:19:41.021723+0000 mgr.y (mgr.14520) 968 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:43.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:19:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:19:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:19:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:44 vm09 bash[22983]: cluster 2026-03-09T16:19:43.021974+0000 mgr.y (mgr.14520) 969 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:44 vm09 bash[22983]: cluster 2026-03-09T16:19:43.021974+0000 mgr.y (mgr.14520) 969 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:44.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:44 vm01 bash[20728]: cluster 2026-03-09T16:19:43.021974+0000 mgr.y (mgr.14520) 969 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:44.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:44 vm01 bash[20728]: cluster 2026-03-09T16:19:43.021974+0000 mgr.y (mgr.14520) 969 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:44.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:44 vm01 bash[28152]: cluster 2026-03-09T16:19:43.021974+0000 mgr.y (mgr.14520) 969 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:44.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:44 vm01 bash[28152]: cluster 2026-03-09T16:19:43.021974+0000 mgr.y (mgr.14520) 969 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:45 vm09 bash[22983]: audit 2026-03-09T16:19:44.803027+0000 mon.a (mon.0) 3851 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:45 vm09 bash[22983]: audit 2026-03-09T16:19:44.803027+0000 mon.a (mon.0) 3851 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:45 vm01 bash[20728]: audit 2026-03-09T16:19:44.803027+0000 mon.a (mon.0) 3851 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:45 
vm01 bash[20728]: audit 2026-03-09T16:19:44.803027+0000 mon.a (mon.0) 3851 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:45.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:45 vm01 bash[28152]: audit 2026-03-09T16:19:44.803027+0000 mon.a (mon.0) 3851 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:45.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:45 vm01 bash[28152]: audit 2026-03-09T16:19:44.803027+0000 mon.a (mon.0) 3851 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:19:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:46 vm09 bash[22983]: cluster 2026-03-09T16:19:45.022707+0000 mgr.y (mgr.14520) 970 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:46 vm09 bash[22983]: cluster 2026-03-09T16:19:45.022707+0000 mgr.y (mgr.14520) 970 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:46.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:46 vm01 bash[28152]: cluster 2026-03-09T16:19:45.022707+0000 mgr.y (mgr.14520) 970 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:46.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:46 vm01 bash[28152]: cluster 2026-03-09T16:19:45.022707+0000 mgr.y (mgr.14520) 970 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:46 vm01 bash[20728]: cluster 2026-03-09T16:19:45.022707+0000 mgr.y (mgr.14520) 970 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:46 vm01 bash[20728]: cluster 2026-03-09T16:19:45.022707+0000 mgr.y (mgr.14520) 970 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:47.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:19:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:19:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:47 vm09 bash[22983]: cluster 2026-03-09T16:19:47.022968+0000 mgr.y (mgr.14520) 971 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:47 vm09 bash[22983]: cluster 2026-03-09T16:19:47.022968+0000 mgr.y (mgr.14520) 971 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:47.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:47 vm01 bash[28152]: cluster 2026-03-09T16:19:47.022968+0000 mgr.y (mgr.14520) 971 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T16:19:47.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:47 vm01 bash[28152]: cluster 2026-03-09T16:19:47.022968+0000 mgr.y (mgr.14520) 971 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:47.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:47 vm01 bash[20728]: cluster 2026-03-09T16:19:47.022968+0000 mgr.y (mgr.14520) 971 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:47.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:47 vm01 bash[20728]: cluster 2026-03-09T16:19:47.022968+0000 mgr.y (mgr.14520) 971 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:48 vm09 bash[22983]: audit 2026-03-09T16:19:47.341798+0000 mgr.y (mgr.14520) 972 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:48 vm09 bash[22983]: audit 2026-03-09T16:19:47.341798+0000 mgr.y (mgr.14520) 972 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:48.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:48 vm01 bash[28152]: audit 2026-03-09T16:19:47.341798+0000 mgr.y (mgr.14520) 972 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:48.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:48 vm01 bash[28152]: audit 2026-03-09T16:19:47.341798+0000 mgr.y (mgr.14520) 972 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:48 vm01 bash[20728]: audit 2026-03-09T16:19:47.341798+0000 mgr.y (mgr.14520) 972 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:48 vm01 bash[20728]: audit 2026-03-09T16:19:47.341798+0000 mgr.y (mgr.14520) 972 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:49 vm09 bash[22983]: cluster 2026-03-09T16:19:49.023428+0000 mgr.y (mgr.14520) 973 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:49 vm09 bash[22983]: cluster 2026-03-09T16:19:49.023428+0000 mgr.y (mgr.14520) 973 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:49.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:49 vm01 bash[28152]: cluster 2026-03-09T16:19:49.023428+0000 mgr.y (mgr.14520) 973 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:49.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:49 vm01 bash[28152]: cluster 
2026-03-09T16:19:49.023428+0000 mgr.y (mgr.14520) 973 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:49 vm01 bash[20728]: cluster 2026-03-09T16:19:49.023428+0000 mgr.y (mgr.14520) 973 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:49 vm01 bash[20728]: cluster 2026-03-09T16:19:49.023428+0000 mgr.y (mgr.14520) 973 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:52.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:52 vm09 bash[22983]: cluster 2026-03-09T16:19:51.023905+0000 mgr.y (mgr.14520) 974 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:52.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:52 vm09 bash[22983]: cluster 2026-03-09T16:19:51.023905+0000 mgr.y (mgr.14520) 974 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:52.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:52 vm01 bash[28152]: cluster 2026-03-09T16:19:51.023905+0000 mgr.y (mgr.14520) 974 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:52.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:52 vm01 bash[28152]: cluster 2026-03-09T16:19:51.023905+0000 mgr.y (mgr.14520) 974 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:52.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:52 vm01 bash[20728]: cluster 2026-03-09T16:19:51.023905+0000 mgr.y (mgr.14520) 974 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:52.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:52 vm01 bash[20728]: cluster 2026-03-09T16:19:51.023905+0000 mgr.y (mgr.14520) 974 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:53.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:19:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:19:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:19:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: cluster 2026-03-09T16:19:53.024283+0000 mgr.y (mgr.14520) 975 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:54.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: cluster 2026-03-09T16:19:53.024283+0000 mgr.y (mgr.14520) 975 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.141464+0000 mon.a (mon.0) 3852 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.141464+0000 mon.a (mon.0) 3852 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.439730+0000 mon.a (mon.0) 3853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.439730+0000 mon.a (mon.0) 3853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.455091+0000 mon.a (mon.0) 3854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.455091+0000 mon.a (mon.0) 3854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.773034+0000 mon.a (mon.0) 3855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.773034+0000 mon.a (mon.0) 3855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.773625+0000 mon.a (mon.0) 3856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.773625+0000 mon.a (mon.0) 3856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.782077+0000 mon.a (mon.0) 3857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.383 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:54 vm09 bash[22983]: audit 2026-03-09T16:19:53.782077+0000 mon.a (mon.0) 3857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: cluster 2026-03-09T16:19:53.024283+0000 mgr.y (mgr.14520) 975 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: cluster 2026-03-09T16:19:53.024283+0000 mgr.y (mgr.14520) 975 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.141464+0000 mon.a 
(mon.0) 3852 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.141464+0000 mon.a (mon.0) 3852 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.439730+0000 mon.a (mon.0) 3853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.439730+0000 mon.a (mon.0) 3853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.455091+0000 mon.a (mon.0) 3854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.455091+0000 mon.a (mon.0) 3854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.773034+0000 mon.a (mon.0) 3855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.773034+0000 mon.a (mon.0) 3855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.773625+0000 mon.a (mon.0) 3856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.773625+0000 mon.a (mon.0) 3856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.782077+0000 mon.a (mon.0) 3857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:54 vm01 bash[28152]: audit 2026-03-09T16:19:53.782077+0000 mon.a (mon.0) 3857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: cluster 2026-03-09T16:19:53.024283+0000 mgr.y (mgr.14520) 975 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: cluster 2026-03-09T16:19:53.024283+0000 mgr.y (mgr.14520) 975 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.141464+0000 mon.a (mon.0) 3852 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.141464+0000 mon.a (mon.0) 3852 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.439730+0000 mon.a (mon.0) 3853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.439730+0000 mon.a (mon.0) 3853 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.455091+0000 mon.a (mon.0) 3854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.455091+0000 mon.a (mon.0) 3854 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.773034+0000 mon.a (mon.0) 3855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:19:54.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.773034+0000 mon.a (mon.0) 3855 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:19:54.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.773625+0000 mon.a (mon.0) 3856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:19:54.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.773625+0000 mon.a (mon.0) 3856 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:19:54.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.782077+0000 mon.a (mon.0) 3857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:54.424 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:54 vm01 bash[20728]: audit 2026-03-09T16:19:53.782077+0000 mon.a (mon.0) 3857 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:19:56.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:56 vm01 bash[28152]: cluster 2026-03-09T16:19:55.024990+0000 mgr.y (mgr.14520) 976 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:56.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:56 vm01 bash[28152]: cluster 2026-03-09T16:19:55.024990+0000 mgr.y 
(mgr.14520) 976 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:56.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:56 vm01 bash[20728]: cluster 2026-03-09T16:19:55.024990+0000 mgr.y (mgr.14520) 976 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:56.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:56 vm01 bash[20728]: cluster 2026-03-09T16:19:55.024990+0000 mgr.y (mgr.14520) 976 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:56 vm09 bash[22983]: cluster 2026-03-09T16:19:55.024990+0000 mgr.y (mgr.14520) 976 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:56 vm09 bash[22983]: cluster 2026-03-09T16:19:55.024990+0000 mgr.y (mgr.14520) 976 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:19:57.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:19:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:19:57.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:57 vm09 bash[22983]: cluster 2026-03-09T16:19:57.025296+0000 mgr.y (mgr.14520) 977 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:57.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:57 vm09 bash[22983]: cluster 2026-03-09T16:19:57.025296+0000 mgr.y (mgr.14520) 977 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:57.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:57 vm01 bash[28152]: cluster 2026-03-09T16:19:57.025296+0000 mgr.y (mgr.14520) 977 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:57.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:57 vm01 bash[28152]: cluster 2026-03-09T16:19:57.025296+0000 mgr.y (mgr.14520) 977 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:57.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:57 vm01 bash[20728]: cluster 2026-03-09T16:19:57.025296+0000 mgr.y (mgr.14520) 977 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:57.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:57 vm01 bash[20728]: cluster 2026-03-09T16:19:57.025296+0000 mgr.y (mgr.14520) 977 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:58 vm09 bash[22983]: audit 2026-03-09T16:19:57.349964+0000 mgr.y (mgr.14520) 978 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:58 vm09 
bash[22983]: audit 2026-03-09T16:19:57.349964+0000 mgr.y (mgr.14520) 978 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:58 vm01 bash[28152]: audit 2026-03-09T16:19:57.349964+0000 mgr.y (mgr.14520) 978 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:58.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:58 vm01 bash[28152]: audit 2026-03-09T16:19:57.349964+0000 mgr.y (mgr.14520) 978 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:58 vm01 bash[20728]: audit 2026-03-09T16:19:57.349964+0000 mgr.y (mgr.14520) 978 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:58 vm01 bash[20728]: audit 2026-03-09T16:19:57.349964+0000 mgr.y (mgr.14520) 978 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:19:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:59 vm09 bash[22983]: cluster 2026-03-09T16:19:59.025692+0000 mgr.y (mgr.14520) 979 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:19:59 vm09 bash[22983]: cluster 2026-03-09T16:19:59.025692+0000 mgr.y (mgr.14520) 979 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:59.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:59 vm01 bash[28152]: cluster 2026-03-09T16:19:59.025692+0000 mgr.y (mgr.14520) 979 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:59.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:19:59 vm01 bash[28152]: cluster 2026-03-09T16:19:59.025692+0000 mgr.y (mgr.14520) 979 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:59.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:59 vm01 bash[20728]: cluster 2026-03-09T16:19:59.025692+0000 mgr.y (mgr.14520) 979 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:19:59.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:19:59 vm01 bash[20728]: cluster 2026-03-09T16:19:59.025692+0000 mgr.y (mgr.14520) 979 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:00.325 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: audit 2026-03-09T16:19:59.809654+0000 mon.a (mon.0) 3858 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: audit 2026-03-09T16:19:59.809654+0000 mon.a (mon.0) 3858 : audit [DBG] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000113+0000 mon.a (mon.0) 3859 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000113+0000 mon.a (mon.0) 3859 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000130+0000 mon.a (mon.0) 3860 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000130+0000 mon.a (mon.0) 3860 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000137+0000 mon.a (mon.0) 3861 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000137+0000 mon.a (mon.0) 3861 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000144+0000 mon.a (mon.0) 3862 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000144+0000 mon.a (mon.0) 3862 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000151+0000 mon.a (mon.0) 3863 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000151+0000 mon.a (mon.0) 3863 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:20:00.326 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000158+0000 mon.a (mon.0) 3864 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: audit 2026-03-09T16:19:59.809654+0000 mon.a (mon.0) 3858 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: audit 2026-03-09T16:19:59.809654+0000 mon.a (mon.0) 3858 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000113+0000 mon.a (mon.0) 3859 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000113+0000 mon.a (mon.0) 3859 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000130+0000 mon.a (mon.0) 3860 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000130+0000 mon.a (mon.0) 3860 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000137+0000 mon.a (mon.0) 3861 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000137+0000 mon.a (mon.0) 3861 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000144+0000 mon.a (mon.0) 3862 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000144+0000 mon.a (mon.0) 3862 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000151+0000 mon.a (mon.0) 3863 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000151+0000 mon.a (mon.0) 3863 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000158+0000 mon.a (mon.0) 3864 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:20:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:00 vm09 bash[22983]: cluster 2026-03-09T16:20:00.000158+0000 mon.a (mon.0) 3864 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: audit 2026-03-09T16:19:59.809654+0000 mon.a (mon.0) 3858 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: audit 2026-03-09T16:19:59.809654+0000 mon.a (mon.0) 3858 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000113+0000 mon.a (mon.0) 3859 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000113+0000 mon.a (mon.0) 3859 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000130+0000 mon.a (mon.0) 3860 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000130+0000 mon.a (mon.0) 3860 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000137+0000 mon.a (mon.0) 3861 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000137+0000 mon.a (mon.0) 3861 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000144+0000 mon.a (mon.0) 3862 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000144+0000 mon.a (mon.0) 3862 : cluster [WRN] application not enabled on pool 'WatchNotifyvm01-60622-1' 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000151+0000 mon.a (mon.0) 3863 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000151+0000 mon.a (mon.0) 3863 : cluster [WRN] application not enabled on pool 'AssertExistsvm01-60645-1' 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000158+0000 mon.a (mon.0) 3864 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:00 vm01 bash[28152]: cluster 2026-03-09T16:20:00.000158+0000 mon.a (mon.0) 3864 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
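The POOL_APP_NOT_ENABLED warnings above come from pools that the rados API test workunit creates without tagging an application (for example 'ceph_test_rados_api_asio'). The hint the monitor logs is the standard 'ceph osd pool application enable <pool-name> <app-name>' command; a minimal illustration, with the pool name taken from the log and 'rados' chosen here as an arbitrary freeform application tag (this command was not executed by the run itself):

    # illustrative only: 'rados' is an arbitrary freeform application name
    ceph osd pool application enable ceph_test_rados_api_asio rados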
2026-03-09T16:20:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:00 vm01 bash[20728]: cluster 2026-03-09T16:20:00.000158+0000 mon.a (mon.0) 3864 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T16:20:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:01 vm09 bash[22983]: cluster 2026-03-09T16:20:01.026200+0000 mgr.y (mgr.14520) 980 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:01 vm09 bash[22983]: cluster 2026-03-09T16:20:01.026200+0000 mgr.y (mgr.14520) 980 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:01.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:01 vm01 bash[28152]: cluster 2026-03-09T16:20:01.026200+0000 mgr.y (mgr.14520) 980 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:01.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:01 vm01 bash[28152]: cluster 2026-03-09T16:20:01.026200+0000 mgr.y (mgr.14520) 980 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:01 vm01 bash[20728]: cluster 2026-03-09T16:20:01.026200+0000 mgr.y (mgr.14520) 980 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:01 vm01 bash[20728]: cluster 2026-03-09T16:20:01.026200+0000 mgr.y (mgr.14520) 980 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:03.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:20:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:20:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:20:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:04 vm09 bash[22983]: cluster 2026-03-09T16:20:03.026498+0000 mgr.y (mgr.14520) 981 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:04 vm09 bash[22983]: cluster 2026-03-09T16:20:03.026498+0000 mgr.y (mgr.14520) 981 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:04.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:04 vm01 bash[28152]: cluster 2026-03-09T16:20:03.026498+0000 mgr.y (mgr.14520) 981 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:04.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:04 vm01 bash[28152]: cluster 2026-03-09T16:20:03.026498+0000 mgr.y (mgr.14520) 981 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:04.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:04 vm01 bash[20728]: cluster 2026-03-09T16:20:03.026498+0000 mgr.y (mgr.14520) 981 : cluster [DBG] pgmap
v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:04.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:04 vm01 bash[20728]: cluster 2026-03-09T16:20:03.026498+0000 mgr.y (mgr.14520) 981 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:05.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:05 vm09 bash[22983]: cluster 2026-03-09T16:20:05.027120+0000 mgr.y (mgr.14520) 982 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:05.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:05 vm09 bash[22983]: cluster 2026-03-09T16:20:05.027120+0000 mgr.y (mgr.14520) 982 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:05.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:05 vm01 bash[28152]: cluster 2026-03-09T16:20:05.027120+0000 mgr.y (mgr.14520) 982 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:05.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:05 vm01 bash[28152]: cluster 2026-03-09T16:20:05.027120+0000 mgr.y (mgr.14520) 982 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:05.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:05 vm01 bash[20728]: cluster 2026-03-09T16:20:05.027120+0000 mgr.y (mgr.14520) 982 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:05.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:05 vm01 bash[20728]: cluster 2026-03-09T16:20:05.027120+0000 mgr.y (mgr.14520) 982 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:07.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:20:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:20:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:08 vm09 bash[22983]: cluster 2026-03-09T16:20:07.027390+0000 mgr.y (mgr.14520) 983 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:08 vm09 bash[22983]: cluster 2026-03-09T16:20:07.027390+0000 mgr.y (mgr.14520) 983 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:08.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:08 vm01 bash[28152]: cluster 2026-03-09T16:20:07.027390+0000 mgr.y (mgr.14520) 983 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:08.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:08 vm01 bash[28152]: cluster 2026-03-09T16:20:07.027390+0000 mgr.y (mgr.14520) 983 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:08.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:08 vm01 bash[20728]: cluster 
2026-03-09T16:20:07.027390+0000 mgr.y (mgr.14520) 983 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:08.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:08 vm01 bash[20728]: cluster 2026-03-09T16:20:07.027390+0000 mgr.y (mgr.14520) 983 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:09 vm09 bash[22983]: audit 2026-03-09T16:20:07.360859+0000 mgr.y (mgr.14520) 984 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:09 vm09 bash[22983]: audit 2026-03-09T16:20:07.360859+0000 mgr.y (mgr.14520) 984 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:09.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:09 vm01 bash[28152]: audit 2026-03-09T16:20:07.360859+0000 mgr.y (mgr.14520) 984 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:09.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:09 vm01 bash[28152]: audit 2026-03-09T16:20:07.360859+0000 mgr.y (mgr.14520) 984 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:09.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:09 vm01 bash[20728]: audit 2026-03-09T16:20:07.360859+0000 mgr.y (mgr.14520) 984 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:09.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:09 vm01 bash[20728]: audit 2026-03-09T16:20:07.360859+0000 mgr.y (mgr.14520) 984 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:10 vm09 bash[22983]: cluster 2026-03-09T16:20:09.027834+0000 mgr.y (mgr.14520) 985 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:10 vm09 bash[22983]: cluster 2026-03-09T16:20:09.027834+0000 mgr.y (mgr.14520) 985 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:10.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:10 vm01 bash[28152]: cluster 2026-03-09T16:20:09.027834+0000 mgr.y (mgr.14520) 985 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:10.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:10 vm01 bash[28152]: cluster 2026-03-09T16:20:09.027834+0000 mgr.y (mgr.14520) 985 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:10.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:10 vm01 bash[20728]: cluster 2026-03-09T16:20:09.027834+0000 mgr.y (mgr.14520) 985 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 
KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:10.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:10 vm01 bash[20728]: cluster 2026-03-09T16:20:09.027834+0000 mgr.y (mgr.14520) 985 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:12 vm09 bash[22983]: cluster 2026-03-09T16:20:11.028297+0000 mgr.y (mgr.14520) 986 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:12 vm09 bash[22983]: cluster 2026-03-09T16:20:11.028297+0000 mgr.y (mgr.14520) 986 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:12.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:12 vm01 bash[28152]: cluster 2026-03-09T16:20:11.028297+0000 mgr.y (mgr.14520) 986 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:12.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:12 vm01 bash[28152]: cluster 2026-03-09T16:20:11.028297+0000 mgr.y (mgr.14520) 986 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:12.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:12 vm01 bash[20728]: cluster 2026-03-09T16:20:11.028297+0000 mgr.y (mgr.14520) 986 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:12.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:12 vm01 bash[20728]: cluster 2026-03-09T16:20:11.028297+0000 mgr.y (mgr.14520) 986 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:13.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:20:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:20:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:20:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:14 vm09 bash[22983]: cluster 2026-03-09T16:20:13.028553+0000 mgr.y (mgr.14520) 987 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:14 vm09 bash[22983]: cluster 2026-03-09T16:20:13.028553+0000 mgr.y (mgr.14520) 987 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:14 vm01 bash[20728]: cluster 2026-03-09T16:20:13.028553+0000 mgr.y (mgr.14520) 987 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:14 vm01 bash[20728]: cluster 2026-03-09T16:20:13.028553+0000 mgr.y (mgr.14520) 987 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:14 vm01 bash[28152]: cluster 
2026-03-09T16:20:13.028553+0000 mgr.y (mgr.14520) 987 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:14 vm01 bash[28152]: cluster 2026-03-09T16:20:13.028553+0000 mgr.y (mgr.14520) 987 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:15 vm01 bash[28152]: audit 2026-03-09T16:20:14.816222+0000 mon.a (mon.0) 3865 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:15 vm01 bash[28152]: audit 2026-03-09T16:20:14.816222+0000 mon.a (mon.0) 3865 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:15.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:15 vm01 bash[20728]: audit 2026-03-09T16:20:14.816222+0000 mon.a (mon.0) 3865 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:15.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:15 vm01 bash[20728]: audit 2026-03-09T16:20:14.816222+0000 mon.a (mon.0) 3865 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:15 vm09 bash[22983]: audit 2026-03-09T16:20:14.816222+0000 mon.a (mon.0) 3865 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:15 vm09 bash[22983]: audit 2026-03-09T16:20:14.816222+0000 mon.a (mon.0) 3865 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:16 vm01 bash[28152]: cluster 2026-03-09T16:20:15.029230+0000 mgr.y (mgr.14520) 988 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:16 vm01 bash[28152]: cluster 2026-03-09T16:20:15.029230+0000 mgr.y (mgr.14520) 988 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:16 vm01 bash[20728]: cluster 2026-03-09T16:20:15.029230+0000 mgr.y (mgr.14520) 988 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:16 vm01 bash[20728]: cluster 2026-03-09T16:20:15.029230+0000 mgr.y (mgr.14520) 988 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:16 vm09 bash[22983]: cluster 2026-03-09T16:20:15.029230+0000 mgr.y (mgr.14520) 988 : 
cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:16 vm09 bash[22983]: cluster 2026-03-09T16:20:15.029230+0000 mgr.y (mgr.14520) 988 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:17.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:20:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:20:18.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:18 vm01 bash[28152]: cluster 2026-03-09T16:20:17.029567+0000 mgr.y (mgr.14520) 989 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:18.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:18 vm01 bash[28152]: cluster 2026-03-09T16:20:17.029567+0000 mgr.y (mgr.14520) 989 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:18.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:18 vm01 bash[20728]: cluster 2026-03-09T16:20:17.029567+0000 mgr.y (mgr.14520) 989 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:18.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:18 vm01 bash[20728]: cluster 2026-03-09T16:20:17.029567+0000 mgr.y (mgr.14520) 989 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:18 vm09 bash[22983]: cluster 2026-03-09T16:20:17.029567+0000 mgr.y (mgr.14520) 989 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:18 vm09 bash[22983]: cluster 2026-03-09T16:20:17.029567+0000 mgr.y (mgr.14520) 989 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:19.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:19 vm01 bash[28152]: audit 2026-03-09T16:20:17.366533+0000 mgr.y (mgr.14520) 990 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:19.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:19 vm01 bash[28152]: audit 2026-03-09T16:20:17.366533+0000 mgr.y (mgr.14520) 990 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:19.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:19 vm01 bash[20728]: audit 2026-03-09T16:20:17.366533+0000 mgr.y (mgr.14520) 990 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:19.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:19 vm01 bash[20728]: audit 2026-03-09T16:20:17.366533+0000 mgr.y (mgr.14520) 990 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:19 vm09 bash[22983]: audit 
2026-03-09T16:20:17.366533+0000 mgr.y (mgr.14520) 990 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:19 vm09 bash[22983]: audit 2026-03-09T16:20:17.366533+0000 mgr.y (mgr.14520) 990 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:20 vm09 bash[22983]: cluster 2026-03-09T16:20:19.029986+0000 mgr.y (mgr.14520) 991 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:20 vm09 bash[22983]: cluster 2026-03-09T16:20:19.029986+0000 mgr.y (mgr.14520) 991 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:20.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:20 vm01 bash[28152]: cluster 2026-03-09T16:20:19.029986+0000 mgr.y (mgr.14520) 991 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:20.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:20 vm01 bash[28152]: cluster 2026-03-09T16:20:19.029986+0000 mgr.y (mgr.14520) 991 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:20.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:20 vm01 bash[20728]: cluster 2026-03-09T16:20:19.029986+0000 mgr.y (mgr.14520) 991 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:20.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:20 vm01 bash[20728]: cluster 2026-03-09T16:20:19.029986+0000 mgr.y (mgr.14520) 991 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:22 vm09 bash[22983]: cluster 2026-03-09T16:20:21.030472+0000 mgr.y (mgr.14520) 992 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:22 vm09 bash[22983]: cluster 2026-03-09T16:20:21.030472+0000 mgr.y (mgr.14520) 992 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:22.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:22 vm01 bash[28152]: cluster 2026-03-09T16:20:21.030472+0000 mgr.y (mgr.14520) 992 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:22.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:22 vm01 bash[28152]: cluster 2026-03-09T16:20:21.030472+0000 mgr.y (mgr.14520) 992 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:22 vm01 bash[20728]: cluster 2026-03-09T16:20:21.030472+0000 mgr.y (mgr.14520) 992 : cluster [DBG] pgmap v1445: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:22 vm01 bash[20728]: cluster 2026-03-09T16:20:21.030472+0000 mgr.y (mgr.14520) 992 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:23.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:20:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:20:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:20:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:24 vm09 bash[22983]: cluster 2026-03-09T16:20:23.030725+0000 mgr.y (mgr.14520) 993 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:24 vm09 bash[22983]: cluster 2026-03-09T16:20:23.030725+0000 mgr.y (mgr.14520) 993 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:24.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:24 vm01 bash[28152]: cluster 2026-03-09T16:20:23.030725+0000 mgr.y (mgr.14520) 993 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:24.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:24 vm01 bash[28152]: cluster 2026-03-09T16:20:23.030725+0000 mgr.y (mgr.14520) 993 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:24 vm01 bash[20728]: cluster 2026-03-09T16:20:23.030725+0000 mgr.y (mgr.14520) 993 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:24 vm01 bash[20728]: cluster 2026-03-09T16:20:23.030725+0000 mgr.y (mgr.14520) 993 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:26 vm09 bash[22983]: cluster 2026-03-09T16:20:25.031320+0000 mgr.y (mgr.14520) 994 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:26 vm09 bash[22983]: cluster 2026-03-09T16:20:25.031320+0000 mgr.y (mgr.14520) 994 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:26.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:26 vm01 bash[28152]: cluster 2026-03-09T16:20:25.031320+0000 mgr.y (mgr.14520) 994 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:26.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:26 vm01 bash[28152]: cluster 2026-03-09T16:20:25.031320+0000 mgr.y (mgr.14520) 994 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:26 vm01 bash[20728]: 
cluster 2026-03-09T16:20:25.031320+0000 mgr.y (mgr.14520) 994 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:26 vm01 bash[20728]: cluster 2026-03-09T16:20:25.031320+0000 mgr.y (mgr.14520) 994 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:27.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:20:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:20:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:28 vm09 bash[22983]: cluster 2026-03-09T16:20:27.031586+0000 mgr.y (mgr.14520) 995 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:28 vm09 bash[22983]: cluster 2026-03-09T16:20:27.031586+0000 mgr.y (mgr.14520) 995 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:28 vm01 bash[28152]: cluster 2026-03-09T16:20:27.031586+0000 mgr.y (mgr.14520) 995 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:28 vm01 bash[28152]: cluster 2026-03-09T16:20:27.031586+0000 mgr.y (mgr.14520) 995 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:28.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:28 vm01 bash[20728]: cluster 2026-03-09T16:20:27.031586+0000 mgr.y (mgr.14520) 995 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:28.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:28 vm01 bash[20728]: cluster 2026-03-09T16:20:27.031586+0000 mgr.y (mgr.14520) 995 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:29 vm09 bash[22983]: audit 2026-03-09T16:20:27.372639+0000 mgr.y (mgr.14520) 996 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:29 vm09 bash[22983]: audit 2026-03-09T16:20:27.372639+0000 mgr.y (mgr.14520) 996 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:29 vm01 bash[28152]: audit 2026-03-09T16:20:27.372639+0000 mgr.y (mgr.14520) 996 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:29 vm01 bash[28152]: audit 2026-03-09T16:20:27.372639+0000 mgr.y (mgr.14520) 996 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:29.673 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:29 vm01 bash[20728]: audit 2026-03-09T16:20:27.372639+0000 mgr.y (mgr.14520) 996 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:29 vm01 bash[20728]: audit 2026-03-09T16:20:27.372639+0000 mgr.y (mgr.14520) 996 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:30 vm09 bash[22983]: cluster 2026-03-09T16:20:29.032016+0000 mgr.y (mgr.14520) 997 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:30 vm09 bash[22983]: cluster 2026-03-09T16:20:29.032016+0000 mgr.y (mgr.14520) 997 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:30 vm09 bash[22983]: audit 2026-03-09T16:20:29.822018+0000 mon.a (mon.0) 3866 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:30 vm09 bash[22983]: audit 2026-03-09T16:20:29.822018+0000 mon.a (mon.0) 3866 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:30 vm01 bash[28152]: cluster 2026-03-09T16:20:29.032016+0000 mgr.y (mgr.14520) 997 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:30 vm01 bash[28152]: cluster 2026-03-09T16:20:29.032016+0000 mgr.y (mgr.14520) 997 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:30 vm01 bash[28152]: audit 2026-03-09T16:20:29.822018+0000 mon.a (mon.0) 3866 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:30 vm01 bash[28152]: audit 2026-03-09T16:20:29.822018+0000 mon.a (mon.0) 3866 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:30 vm01 bash[20728]: cluster 2026-03-09T16:20:29.032016+0000 mgr.y (mgr.14520) 997 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:30 vm01 bash[20728]: cluster 2026-03-09T16:20:29.032016+0000 mgr.y (mgr.14520) 997 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:30 vm01 bash[20728]: audit 
2026-03-09T16:20:29.822018+0000 mon.a (mon.0) 3866 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:30 vm01 bash[20728]: audit 2026-03-09T16:20:29.822018+0000 mon.a (mon.0) 3866 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:31 vm09 bash[22983]: cluster 2026-03-09T16:20:31.032492+0000 mgr.y (mgr.14520) 998 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:31 vm09 bash[22983]: cluster 2026-03-09T16:20:31.032492+0000 mgr.y (mgr.14520) 998 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:31.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:31 vm01 bash[28152]: cluster 2026-03-09T16:20:31.032492+0000 mgr.y (mgr.14520) 998 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:31.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:31 vm01 bash[28152]: cluster 2026-03-09T16:20:31.032492+0000 mgr.y (mgr.14520) 998 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:31.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:31 vm01 bash[20728]: cluster 2026-03-09T16:20:31.032492+0000 mgr.y (mgr.14520) 998 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:31.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:31 vm01 bash[20728]: cluster 2026-03-09T16:20:31.032492+0000 mgr.y (mgr.14520) 998 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:33.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:20:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:20:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:20:34.083 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:34 vm01 bash[20728]: cluster 2026-03-09T16:20:33.032737+0000 mgr.y (mgr.14520) 999 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:34 vm09 bash[22983]: cluster 2026-03-09T16:20:33.032737+0000 mgr.y (mgr.14520) 999 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:34 vm09 bash[22983]: cluster 2026-03-09T16:20:33.032737+0000 mgr.y (mgr.14520) 999 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:34.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:34 vm01 bash[28152]: cluster 2026-03-09T16:20:33.032737+0000 mgr.y (mgr.14520) 999 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 
B/s rd, 0 op/s 2026-03-09T16:20:34.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:34 vm01 bash[28152]: cluster 2026-03-09T16:20:33.032737+0000 mgr.y (mgr.14520) 999 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:34.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:34 vm01 bash[20728]: cluster 2026-03-09T16:20:33.032737+0000 mgr.y (mgr.14520) 999 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:36 vm09 bash[22983]: cluster 2026-03-09T16:20:35.033315+0000 mgr.y (mgr.14520) 1000 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:36 vm09 bash[22983]: cluster 2026-03-09T16:20:35.033315+0000 mgr.y (mgr.14520) 1000 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:36.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:36 vm01 bash[28152]: cluster 2026-03-09T16:20:35.033315+0000 mgr.y (mgr.14520) 1000 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:36.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:36 vm01 bash[28152]: cluster 2026-03-09T16:20:35.033315+0000 mgr.y (mgr.14520) 1000 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:36 vm01 bash[20728]: cluster 2026-03-09T16:20:35.033315+0000 mgr.y (mgr.14520) 1000 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:36 vm01 bash[20728]: cluster 2026-03-09T16:20:35.033315+0000 mgr.y (mgr.14520) 1000 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:37.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:20:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:20:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:38 vm09 bash[22983]: cluster 2026-03-09T16:20:37.033580+0000 mgr.y (mgr.14520) 1001 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:38 vm09 bash[22983]: cluster 2026-03-09T16:20:37.033580+0000 mgr.y (mgr.14520) 1001 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:38.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:38 vm01 bash[28152]: cluster 2026-03-09T16:20:37.033580+0000 mgr.y (mgr.14520) 1001 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:38.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:38 vm01 bash[28152]: cluster 2026-03-09T16:20:37.033580+0000 mgr.y (mgr.14520) 1001 : cluster [DBG] pgmap v1453: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:38.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:38 vm01 bash[20728]: cluster 2026-03-09T16:20:37.033580+0000 mgr.y (mgr.14520) 1001 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:38.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:38 vm01 bash[20728]: cluster 2026-03-09T16:20:37.033580+0000 mgr.y (mgr.14520) 1001 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:39 vm09 bash[22983]: audit 2026-03-09T16:20:37.383319+0000 mgr.y (mgr.14520) 1002 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:39 vm09 bash[22983]: audit 2026-03-09T16:20:37.383319+0000 mgr.y (mgr.14520) 1002 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:39.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:39 vm01 bash[28152]: audit 2026-03-09T16:20:37.383319+0000 mgr.y (mgr.14520) 1002 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:39.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:39 vm01 bash[28152]: audit 2026-03-09T16:20:37.383319+0000 mgr.y (mgr.14520) 1002 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:39.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:39 vm01 bash[20728]: audit 2026-03-09T16:20:37.383319+0000 mgr.y (mgr.14520) 1002 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:39.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:39 vm01 bash[20728]: audit 2026-03-09T16:20:37.383319+0000 mgr.y (mgr.14520) 1002 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:40 vm09 bash[22983]: cluster 2026-03-09T16:20:39.033981+0000 mgr.y (mgr.14520) 1003 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:40 vm09 bash[22983]: cluster 2026-03-09T16:20:39.033981+0000 mgr.y (mgr.14520) 1003 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:40.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:40 vm01 bash[28152]: cluster 2026-03-09T16:20:39.033981+0000 mgr.y (mgr.14520) 1003 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:40.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:40 vm01 bash[28152]: cluster 2026-03-09T16:20:39.033981+0000 mgr.y (mgr.14520) 1003 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T16:20:40.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:40 vm01 bash[20728]: cluster 2026-03-09T16:20:39.033981+0000 mgr.y (mgr.14520) 1003 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:40.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:40 vm01 bash[20728]: cluster 2026-03-09T16:20:39.033981+0000 mgr.y (mgr.14520) 1003 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:42 vm09 bash[22983]: cluster 2026-03-09T16:20:41.034477+0000 mgr.y (mgr.14520) 1004 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:42 vm09 bash[22983]: cluster 2026-03-09T16:20:41.034477+0000 mgr.y (mgr.14520) 1004 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:42.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:42 vm01 bash[28152]: cluster 2026-03-09T16:20:41.034477+0000 mgr.y (mgr.14520) 1004 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:42.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:42 vm01 bash[28152]: cluster 2026-03-09T16:20:41.034477+0000 mgr.y (mgr.14520) 1004 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:42.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:42 vm01 bash[20728]: cluster 2026-03-09T16:20:41.034477+0000 mgr.y (mgr.14520) 1004 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:42.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:42 vm01 bash[20728]: cluster 2026-03-09T16:20:41.034477+0000 mgr.y (mgr.14520) 1004 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:43.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:20:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:20:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:20:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:44 vm09 bash[22983]: cluster 2026-03-09T16:20:43.034729+0000 mgr.y (mgr.14520) 1005 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:44.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:44 vm09 bash[22983]: cluster 2026-03-09T16:20:43.034729+0000 mgr.y (mgr.14520) 1005 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:44.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:44 vm01 bash[28152]: cluster 2026-03-09T16:20:43.034729+0000 mgr.y (mgr.14520) 1005 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:44.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:44 vm01 bash[28152]: cluster 2026-03-09T16:20:43.034729+0000 mgr.y (mgr.14520) 1005 : cluster 
[DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:44 vm01 bash[20728]: cluster 2026-03-09T16:20:43.034729+0000 mgr.y (mgr.14520) 1005 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:44 vm01 bash[20728]: cluster 2026-03-09T16:20:43.034729+0000 mgr.y (mgr.14520) 1005 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:45 vm09 bash[22983]: audit 2026-03-09T16:20:44.828971+0000 mon.a (mon.0) 3867 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:45 vm09 bash[22983]: audit 2026-03-09T16:20:44.828971+0000 mon.a (mon.0) 3867 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:45.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:45 vm01 bash[28152]: audit 2026-03-09T16:20:44.828971+0000 mon.a (mon.0) 3867 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:45.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:45 vm01 bash[28152]: audit 2026-03-09T16:20:44.828971+0000 mon.a (mon.0) 3867 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:45.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:45 vm01 bash[20728]: audit 2026-03-09T16:20:44.828971+0000 mon.a (mon.0) 3867 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:45.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:45 vm01 bash[20728]: audit 2026-03-09T16:20:44.828971+0000 mon.a (mon.0) 3867 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:20:46.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:46 vm01 bash[28152]: cluster 2026-03-09T16:20:45.035348+0000 mgr.y (mgr.14520) 1006 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:46.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:46 vm01 bash[28152]: cluster 2026-03-09T16:20:45.035348+0000 mgr.y (mgr.14520) 1006 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:46.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:46 vm01 bash[20728]: cluster 2026-03-09T16:20:45.035348+0000 mgr.y (mgr.14520) 1006 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:46.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:46 vm01 bash[20728]: cluster 2026-03-09T16:20:45.035348+0000 mgr.y (mgr.14520) 1006 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB 
data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:46 vm09 bash[22983]: cluster 2026-03-09T16:20:45.035348+0000 mgr.y (mgr.14520) 1006 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:46 vm09 bash[22983]: cluster 2026-03-09T16:20:45.035348+0000 mgr.y (mgr.14520) 1006 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:47.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:20:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:20:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:48 vm09 bash[22983]: cluster 2026-03-09T16:20:47.035616+0000 mgr.y (mgr.14520) 1007 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:48 vm09 bash[22983]: cluster 2026-03-09T16:20:47.035616+0000 mgr.y (mgr.14520) 1007 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:48 vm01 bash[28152]: cluster 2026-03-09T16:20:47.035616+0000 mgr.y (mgr.14520) 1007 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:48 vm01 bash[28152]: cluster 2026-03-09T16:20:47.035616+0000 mgr.y (mgr.14520) 1007 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:48 vm01 bash[20728]: cluster 2026-03-09T16:20:47.035616+0000 mgr.y (mgr.14520) 1007 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:48 vm01 bash[20728]: cluster 2026-03-09T16:20:47.035616+0000 mgr.y (mgr.14520) 1007 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:49 vm09 bash[22983]: audit 2026-03-09T16:20:47.387132+0000 mgr.y (mgr.14520) 1008 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:49 vm09 bash[22983]: audit 2026-03-09T16:20:47.387132+0000 mgr.y (mgr.14520) 1008 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:49 vm01 bash[28152]: audit 2026-03-09T16:20:47.387132+0000 mgr.y (mgr.14520) 1008 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:49.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:49 vm01 bash[28152]: audit 2026-03-09T16:20:47.387132+0000 mgr.y (mgr.14520) 1008 : audit 
[DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:49 vm01 bash[20728]: audit 2026-03-09T16:20:47.387132+0000 mgr.y (mgr.14520) 1008 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:49 vm01 bash[20728]: audit 2026-03-09T16:20:47.387132+0000 mgr.y (mgr.14520) 1008 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:50 vm09 bash[22983]: cluster 2026-03-09T16:20:49.036085+0000 mgr.y (mgr.14520) 1009 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:50 vm09 bash[22983]: cluster 2026-03-09T16:20:49.036085+0000 mgr.y (mgr.14520) 1009 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:50 vm01 bash[28152]: cluster 2026-03-09T16:20:49.036085+0000 mgr.y (mgr.14520) 1009 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:50 vm01 bash[28152]: cluster 2026-03-09T16:20:49.036085+0000 mgr.y (mgr.14520) 1009 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:50.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:50 vm01 bash[20728]: cluster 2026-03-09T16:20:49.036085+0000 mgr.y (mgr.14520) 1009 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:50 vm01 bash[20728]: cluster 2026-03-09T16:20:49.036085+0000 mgr.y (mgr.14520) 1009 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:52 vm09 bash[22983]: cluster 2026-03-09T16:20:51.036614+0000 mgr.y (mgr.14520) 1010 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:52 vm09 bash[22983]: cluster 2026-03-09T16:20:51.036614+0000 mgr.y (mgr.14520) 1010 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:52 vm01 bash[28152]: cluster 2026-03-09T16:20:51.036614+0000 mgr.y (mgr.14520) 1010 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:52 vm01 bash[28152]: cluster 2026-03-09T16:20:51.036614+0000 mgr.y (mgr.14520) 1010 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:52 vm01 bash[20728]: cluster 2026-03-09T16:20:51.036614+0000 mgr.y (mgr.14520) 1010 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:52 vm01 bash[20728]: cluster 2026-03-09T16:20:51.036614+0000 mgr.y (mgr.14520) 1010 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:53.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:20:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:20:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:20:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:54 vm09 bash[22983]: cluster 2026-03-09T16:20:53.036875+0000 mgr.y (mgr.14520) 1011 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:54 vm09 bash[22983]: cluster 2026-03-09T16:20:53.036875+0000 mgr.y (mgr.14520) 1011 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:54 vm09 bash[22983]: audit 2026-03-09T16:20:53.824710+0000 mon.a (mon.0) 3868 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:20:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:54 vm09 bash[22983]: audit 2026-03-09T16:20:53.824710+0000 mon.a (mon.0) 3868 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:20:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:54 vm09 bash[22983]: audit 2026-03-09T16:20:54.153025+0000 mon.a (mon.0) 3869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:20:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:54 vm09 bash[22983]: audit 2026-03-09T16:20:54.153025+0000 mon.a (mon.0) 3869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:20:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:54 vm09 bash[22983]: audit 2026-03-09T16:20:54.153619+0000 mon.a (mon.0) 3870 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:20:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:54 vm09 bash[22983]: audit 2026-03-09T16:20:54.153619+0000 mon.a (mon.0) 3870 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:20:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:54 vm09 bash[22983]: audit 2026-03-09T16:20:54.158866+0000 mon.a (mon.0) 3871 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:20:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:54 vm09 bash[22983]: audit 2026-03-09T16:20:54.158866+0000 mon.a (mon.0) 3871 : audit [INF] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:20:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:54 vm01 bash[28152]: cluster 2026-03-09T16:20:53.036875+0000 mgr.y (mgr.14520) 1011 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:54 vm01 bash[28152]: cluster 2026-03-09T16:20:53.036875+0000 mgr.y (mgr.14520) 1011 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:54 vm01 bash[28152]: audit 2026-03-09T16:20:53.824710+0000 mon.a (mon.0) 3868 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:20:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:54 vm01 bash[28152]: audit 2026-03-09T16:20:53.824710+0000 mon.a (mon.0) 3868 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:20:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:54 vm01 bash[28152]: audit 2026-03-09T16:20:54.153025+0000 mon.a (mon.0) 3869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:20:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:54 vm01 bash[28152]: audit 2026-03-09T16:20:54.153025+0000 mon.a (mon.0) 3869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:20:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:54 vm01 bash[28152]: audit 2026-03-09T16:20:54.153619+0000 mon.a (mon.0) 3870 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:20:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:54 vm01 bash[28152]: audit 2026-03-09T16:20:54.153619+0000 mon.a (mon.0) 3870 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:20:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:54 vm01 bash[28152]: audit 2026-03-09T16:20:54.158866+0000 mon.a (mon.0) 3871 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:20:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:54 vm01 bash[28152]: audit 2026-03-09T16:20:54.158866+0000 mon.a (mon.0) 3871 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:20:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:54 vm01 bash[20728]: cluster 2026-03-09T16:20:53.036875+0000 mgr.y (mgr.14520) 1011 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:54 vm01 bash[20728]: cluster 2026-03-09T16:20:53.036875+0000 mgr.y (mgr.14520) 1011 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:54 vm01 bash[20728]: audit 2026-03-09T16:20:53.824710+0000 mon.a (mon.0) 3868 : 
audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:20:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:54 vm01 bash[20728]: audit 2026-03-09T16:20:53.824710+0000 mon.a (mon.0) 3868 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:20:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:54 vm01 bash[20728]: audit 2026-03-09T16:20:54.153025+0000 mon.a (mon.0) 3869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:20:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:54 vm01 bash[20728]: audit 2026-03-09T16:20:54.153025+0000 mon.a (mon.0) 3869 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:20:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:54 vm01 bash[20728]: audit 2026-03-09T16:20:54.153619+0000 mon.a (mon.0) 3870 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:20:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:54 vm01 bash[20728]: audit 2026-03-09T16:20:54.153619+0000 mon.a (mon.0) 3870 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:20:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:54 vm01 bash[20728]: audit 2026-03-09T16:20:54.158866+0000 mon.a (mon.0) 3871 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:20:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:54 vm01 bash[20728]: audit 2026-03-09T16:20:54.158866+0000 mon.a (mon.0) 3871 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:20:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:56 vm09 bash[22983]: cluster 2026-03-09T16:20:55.037477+0000 mgr.y (mgr.14520) 1012 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:56 vm09 bash[22983]: cluster 2026-03-09T16:20:55.037477+0000 mgr.y (mgr.14520) 1012 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:56.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:56 vm01 bash[28152]: cluster 2026-03-09T16:20:55.037477+0000 mgr.y (mgr.14520) 1012 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:56.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:56 vm01 bash[28152]: cluster 2026-03-09T16:20:55.037477+0000 mgr.y (mgr.14520) 1012 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:56.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:56 vm01 bash[20728]: cluster 2026-03-09T16:20:55.037477+0000 mgr.y (mgr.14520) 1012 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:56.673 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:56 vm01 bash[20728]: cluster 2026-03-09T16:20:55.037477+0000 mgr.y (mgr.14520) 1012 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:20:57.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:57 vm09 bash[22983]: cluster 2026-03-09T16:20:57.037754+0000 mgr.y (mgr.14520) 1013 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:57.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:57 vm09 bash[22983]: cluster 2026-03-09T16:20:57.037754+0000 mgr.y (mgr.14520) 1013 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:57.632 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:20:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:20:57.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:57 vm01 bash[28152]: cluster 2026-03-09T16:20:57.037754+0000 mgr.y (mgr.14520) 1013 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:57.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:57 vm01 bash[28152]: cluster 2026-03-09T16:20:57.037754+0000 mgr.y (mgr.14520) 1013 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:57.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:57 vm01 bash[20728]: cluster 2026-03-09T16:20:57.037754+0000 mgr.y (mgr.14520) 1013 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:57.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:57 vm01 bash[20728]: cluster 2026-03-09T16:20:57.037754+0000 mgr.y (mgr.14520) 1013 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:58 vm09 bash[22983]: audit 2026-03-09T16:20:57.392229+0000 mgr.y (mgr.14520) 1014 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:58 vm09 bash[22983]: audit 2026-03-09T16:20:57.392229+0000 mgr.y (mgr.14520) 1014 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:58 vm01 bash[28152]: audit 2026-03-09T16:20:57.392229+0000 mgr.y (mgr.14520) 1014 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:58 vm01 bash[28152]: audit 2026-03-09T16:20:57.392229+0000 mgr.y (mgr.14520) 1014 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:58.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:58 vm01 bash[20728]: audit 2026-03-09T16:20:57.392229+0000 mgr.y (mgr.14520) 1014 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-09T16:20:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:58 vm01 bash[20728]: audit 2026-03-09T16:20:57.392229+0000 mgr.y (mgr.14520) 1014 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:20:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:59 vm09 bash[22983]: cluster 2026-03-09T16:20:59.038241+0000 mgr.y (mgr.14520) 1015 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:20:59 vm09 bash[22983]: cluster 2026-03-09T16:20:59.038241+0000 mgr.y (mgr.14520) 1015 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:59.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:59 vm01 bash[28152]: cluster 2026-03-09T16:20:59.038241+0000 mgr.y (mgr.14520) 1015 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:59.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:20:59 vm01 bash[28152]: cluster 2026-03-09T16:20:59.038241+0000 mgr.y (mgr.14520) 1015 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:59.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:59 vm01 bash[20728]: cluster 2026-03-09T16:20:59.038241+0000 mgr.y (mgr.14520) 1015 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:20:59.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:20:59 vm01 bash[20728]: cluster 2026-03-09T16:20:59.038241+0000 mgr.y (mgr.14520) 1015 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:00 vm09 bash[22983]: audit 2026-03-09T16:20:59.843166+0000 mon.a (mon.0) 3872 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:00 vm09 bash[22983]: audit 2026-03-09T16:20:59.843166+0000 mon.a (mon.0) 3872 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:00 vm01 bash[28152]: audit 2026-03-09T16:20:59.843166+0000 mon.a (mon.0) 3872 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:00 vm01 bash[28152]: audit 2026-03-09T16:20:59.843166+0000 mon.a (mon.0) 3872 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:00 vm01 bash[20728]: audit 2026-03-09T16:20:59.843166+0000 mon.a (mon.0) 3872 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:00.673 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:00 vm01 bash[20728]: audit 2026-03-09T16:20:59.843166+0000 mon.a (mon.0) 3872 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:01 vm09 bash[22983]: cluster 2026-03-09T16:21:01.038787+0000 mgr.y (mgr.14520) 1016 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:01 vm09 bash[22983]: cluster 2026-03-09T16:21:01.038787+0000 mgr.y (mgr.14520) 1016 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:01.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:01 vm01 bash[28152]: cluster 2026-03-09T16:21:01.038787+0000 mgr.y (mgr.14520) 1016 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:01.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:01 vm01 bash[28152]: cluster 2026-03-09T16:21:01.038787+0000 mgr.y (mgr.14520) 1016 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:01 vm01 bash[20728]: cluster 2026-03-09T16:21:01.038787+0000 mgr.y (mgr.14520) 1016 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:01 vm01 bash[20728]: cluster 2026-03-09T16:21:01.038787+0000 mgr.y (mgr.14520) 1016 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:03.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:21:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:21:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:21:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:04 vm09 bash[22983]: cluster 2026-03-09T16:21:03.039083+0000 mgr.y (mgr.14520) 1017 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:04.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:04 vm09 bash[22983]: cluster 2026-03-09T16:21:03.039083+0000 mgr.y (mgr.14520) 1017 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:04.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:04 vm01 bash[28152]: cluster 2026-03-09T16:21:03.039083+0000 mgr.y (mgr.14520) 1017 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:04.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:04 vm01 bash[28152]: cluster 2026-03-09T16:21:03.039083+0000 mgr.y (mgr.14520) 1017 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:04.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:04 vm01 bash[20728]: cluster 2026-03-09T16:21:03.039083+0000 mgr.y (mgr.14520) 1017 : cluster [DBG] pgmap v1466: 
228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:04.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:04 vm01 bash[20728]: cluster 2026-03-09T16:21:03.039083+0000 mgr.y (mgr.14520) 1017 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:05.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:05 vm09 bash[22983]: cluster 2026-03-09T16:21:05.039711+0000 mgr.y (mgr.14520) 1018 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:05.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:05 vm09 bash[22983]: cluster 2026-03-09T16:21:05.039711+0000 mgr.y (mgr.14520) 1018 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:05.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:05 vm01 bash[28152]: cluster 2026-03-09T16:21:05.039711+0000 mgr.y (mgr.14520) 1018 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:05.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:05 vm01 bash[28152]: cluster 2026-03-09T16:21:05.039711+0000 mgr.y (mgr.14520) 1018 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:05.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:05 vm01 bash[20728]: cluster 2026-03-09T16:21:05.039711+0000 mgr.y (mgr.14520) 1018 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:05.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:05 vm01 bash[20728]: cluster 2026-03-09T16:21:05.039711+0000 mgr.y (mgr.14520) 1018 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:07.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:21:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:21:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:08 vm09 bash[22983]: cluster 2026-03-09T16:21:07.039988+0000 mgr.y (mgr.14520) 1019 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:08 vm09 bash[22983]: cluster 2026-03-09T16:21:07.039988+0000 mgr.y (mgr.14520) 1019 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:08.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:08 vm01 bash[28152]: cluster 2026-03-09T16:21:07.039988+0000 mgr.y (mgr.14520) 1019 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:08.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:08 vm01 bash[28152]: cluster 2026-03-09T16:21:07.039988+0000 mgr.y (mgr.14520) 1019 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:08.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:08 vm01 bash[20728]: cluster 
2026-03-09T16:21:07.039988+0000 mgr.y (mgr.14520) 1019 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:08.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:08 vm01 bash[20728]: cluster 2026-03-09T16:21:07.039988+0000 mgr.y (mgr.14520) 1019 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:09 vm09 bash[22983]: audit 2026-03-09T16:21:07.403090+0000 mgr.y (mgr.14520) 1020 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:09 vm09 bash[22983]: audit 2026-03-09T16:21:07.403090+0000 mgr.y (mgr.14520) 1020 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:09.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:09 vm01 bash[28152]: audit 2026-03-09T16:21:07.403090+0000 mgr.y (mgr.14520) 1020 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:09.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:09 vm01 bash[28152]: audit 2026-03-09T16:21:07.403090+0000 mgr.y (mgr.14520) 1020 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:09.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:09 vm01 bash[20728]: audit 2026-03-09T16:21:07.403090+0000 mgr.y (mgr.14520) 1020 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:09.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:09 vm01 bash[20728]: audit 2026-03-09T16:21:07.403090+0000 mgr.y (mgr.14520) 1020 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:10 vm09 bash[22983]: cluster 2026-03-09T16:21:09.040703+0000 mgr.y (mgr.14520) 1021 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:10 vm09 bash[22983]: cluster 2026-03-09T16:21:09.040703+0000 mgr.y (mgr.14520) 1021 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:10.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:10 vm01 bash[28152]: cluster 2026-03-09T16:21:09.040703+0000 mgr.y (mgr.14520) 1021 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:10.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:10 vm01 bash[28152]: cluster 2026-03-09T16:21:09.040703+0000 mgr.y (mgr.14520) 1021 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:10.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:10 vm01 bash[20728]: cluster 2026-03-09T16:21:09.040703+0000 mgr.y (mgr.14520) 1021 : cluster [DBG] pgmap v1469: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:10.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:10 vm01 bash[20728]: cluster 2026-03-09T16:21:09.040703+0000 mgr.y (mgr.14520) 1021 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:12 vm09 bash[22983]: cluster 2026-03-09T16:21:11.041224+0000 mgr.y (mgr.14520) 1022 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:12 vm09 bash[22983]: cluster 2026-03-09T16:21:11.041224+0000 mgr.y (mgr.14520) 1022 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:12.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:12 vm01 bash[28152]: cluster 2026-03-09T16:21:11.041224+0000 mgr.y (mgr.14520) 1022 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:12.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:12 vm01 bash[28152]: cluster 2026-03-09T16:21:11.041224+0000 mgr.y (mgr.14520) 1022 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:12.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:12 vm01 bash[20728]: cluster 2026-03-09T16:21:11.041224+0000 mgr.y (mgr.14520) 1022 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:12.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:12 vm01 bash[20728]: cluster 2026-03-09T16:21:11.041224+0000 mgr.y (mgr.14520) 1022 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:13.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:21:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:21:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:21:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:14 vm09 bash[22983]: cluster 2026-03-09T16:21:13.041500+0000 mgr.y (mgr.14520) 1023 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:14 vm09 bash[22983]: cluster 2026-03-09T16:21:13.041500+0000 mgr.y (mgr.14520) 1023 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:14.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:14 vm01 bash[28152]: cluster 2026-03-09T16:21:13.041500+0000 mgr.y (mgr.14520) 1023 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:14.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:14 vm01 bash[28152]: cluster 2026-03-09T16:21:13.041500+0000 mgr.y (mgr.14520) 1023 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:14.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:14 vm01 
bash[20728]: cluster 2026-03-09T16:21:13.041500+0000 mgr.y (mgr.14520) 1023 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:14.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:14 vm01 bash[20728]: cluster 2026-03-09T16:21:13.041500+0000 mgr.y (mgr.14520) 1023 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:15 vm09 bash[22983]: audit 2026-03-09T16:21:14.849083+0000 mon.a (mon.0) 3873 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:15.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:15 vm09 bash[22983]: audit 2026-03-09T16:21:14.849083+0000 mon.a (mon.0) 3873 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:15.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:15 vm01 bash[28152]: audit 2026-03-09T16:21:14.849083+0000 mon.a (mon.0) 3873 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:15.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:15 vm01 bash[28152]: audit 2026-03-09T16:21:14.849083+0000 mon.a (mon.0) 3873 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:15.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:15 vm01 bash[20728]: audit 2026-03-09T16:21:14.849083+0000 mon.a (mon.0) 3873 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:15.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:15 vm01 bash[20728]: audit 2026-03-09T16:21:14.849083+0000 mon.a (mon.0) 3873 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:16 vm01 bash[28152]: cluster 2026-03-09T16:21:15.042155+0000 mgr.y (mgr.14520) 1024 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:16 vm01 bash[28152]: cluster 2026-03-09T16:21:15.042155+0000 mgr.y (mgr.14520) 1024 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:16 vm01 bash[20728]: cluster 2026-03-09T16:21:15.042155+0000 mgr.y (mgr.14520) 1024 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:16 vm01 bash[20728]: cluster 2026-03-09T16:21:15.042155+0000 mgr.y (mgr.14520) 1024 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:16 vm09 bash[22983]: cluster 2026-03-09T16:21:15.042155+0000 
mgr.y (mgr.14520) 1024 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:16 vm09 bash[22983]: cluster 2026-03-09T16:21:15.042155+0000 mgr.y (mgr.14520) 1024 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:17.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:21:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:21:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:18 vm09 bash[22983]: cluster 2026-03-09T16:21:17.042445+0000 mgr.y (mgr.14520) 1025 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:18 vm09 bash[22983]: cluster 2026-03-09T16:21:17.042445+0000 mgr.y (mgr.14520) 1025 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:18.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:18 vm01 bash[28152]: cluster 2026-03-09T16:21:17.042445+0000 mgr.y (mgr.14520) 1025 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:18.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:18 vm01 bash[28152]: cluster 2026-03-09T16:21:17.042445+0000 mgr.y (mgr.14520) 1025 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:18 vm01 bash[20728]: cluster 2026-03-09T16:21:17.042445+0000 mgr.y (mgr.14520) 1025 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:18 vm01 bash[20728]: cluster 2026-03-09T16:21:17.042445+0000 mgr.y (mgr.14520) 1025 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:19 vm09 bash[22983]: audit 2026-03-09T16:21:17.413790+0000 mgr.y (mgr.14520) 1026 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:19 vm09 bash[22983]: audit 2026-03-09T16:21:17.413790+0000 mgr.y (mgr.14520) 1026 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:19.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:19 vm01 bash[28152]: audit 2026-03-09T16:21:17.413790+0000 mgr.y (mgr.14520) 1026 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:19 vm01 bash[28152]: audit 2026-03-09T16:21:17.413790+0000 mgr.y (mgr.14520) 1026 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:19 vm01 
bash[20728]: audit 2026-03-09T16:21:17.413790+0000 mgr.y (mgr.14520) 1026 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:19 vm01 bash[20728]: audit 2026-03-09T16:21:17.413790+0000 mgr.y (mgr.14520) 1026 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:20 vm09 bash[22983]: cluster 2026-03-09T16:21:19.042908+0000 mgr.y (mgr.14520) 1027 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:20 vm09 bash[22983]: cluster 2026-03-09T16:21:19.042908+0000 mgr.y (mgr.14520) 1027 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:20.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:20 vm01 bash[28152]: cluster 2026-03-09T16:21:19.042908+0000 mgr.y (mgr.14520) 1027 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:20.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:20 vm01 bash[28152]: cluster 2026-03-09T16:21:19.042908+0000 mgr.y (mgr.14520) 1027 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:20.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:20 vm01 bash[20728]: cluster 2026-03-09T16:21:19.042908+0000 mgr.y (mgr.14520) 1027 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:20.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:20 vm01 bash[20728]: cluster 2026-03-09T16:21:19.042908+0000 mgr.y (mgr.14520) 1027 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:22 vm09 bash[22983]: cluster 2026-03-09T16:21:21.043477+0000 mgr.y (mgr.14520) 1028 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:22 vm09 bash[22983]: cluster 2026-03-09T16:21:21.043477+0000 mgr.y (mgr.14520) 1028 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:22.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:22 vm01 bash[28152]: cluster 2026-03-09T16:21:21.043477+0000 mgr.y (mgr.14520) 1028 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:22.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:22 vm01 bash[28152]: cluster 2026-03-09T16:21:21.043477+0000 mgr.y (mgr.14520) 1028 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:22 vm01 bash[20728]: cluster 2026-03-09T16:21:21.043477+0000 mgr.y (mgr.14520) 1028 : cluster [DBG] 
pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:22 vm01 bash[20728]: cluster 2026-03-09T16:21:21.043477+0000 mgr.y (mgr.14520) 1028 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:23.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:21:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:21:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:21:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:24 vm09 bash[22983]: cluster 2026-03-09T16:21:23.043746+0000 mgr.y (mgr.14520) 1029 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:24 vm09 bash[22983]: cluster 2026-03-09T16:21:23.043746+0000 mgr.y (mgr.14520) 1029 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:24.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:24 vm01 bash[28152]: cluster 2026-03-09T16:21:23.043746+0000 mgr.y (mgr.14520) 1029 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:24.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:24 vm01 bash[28152]: cluster 2026-03-09T16:21:23.043746+0000 mgr.y (mgr.14520) 1029 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:24 vm01 bash[20728]: cluster 2026-03-09T16:21:23.043746+0000 mgr.y (mgr.14520) 1029 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:24 vm01 bash[20728]: cluster 2026-03-09T16:21:23.043746+0000 mgr.y (mgr.14520) 1029 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:26 vm09 bash[22983]: cluster 2026-03-09T16:21:25.044505+0000 mgr.y (mgr.14520) 1030 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:26 vm09 bash[22983]: cluster 2026-03-09T16:21:25.044505+0000 mgr.y (mgr.14520) 1030 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:26.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:26 vm01 bash[28152]: cluster 2026-03-09T16:21:25.044505+0000 mgr.y (mgr.14520) 1030 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:26.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:26 vm01 bash[28152]: cluster 2026-03-09T16:21:25.044505+0000 mgr.y (mgr.14520) 1030 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:26.673 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:26 vm01 bash[20728]: cluster 2026-03-09T16:21:25.044505+0000 mgr.y (mgr.14520) 1030 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:26 vm01 bash[20728]: cluster 2026-03-09T16:21:25.044505+0000 mgr.y (mgr.14520) 1030 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:27.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:21:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:21:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:28 vm09 bash[22983]: cluster 2026-03-09T16:21:27.044830+0000 mgr.y (mgr.14520) 1031 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:28 vm09 bash[22983]: cluster 2026-03-09T16:21:27.044830+0000 mgr.y (mgr.14520) 1031 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:28 vm01 bash[28152]: cluster 2026-03-09T16:21:27.044830+0000 mgr.y (mgr.14520) 1031 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:28 vm01 bash[28152]: cluster 2026-03-09T16:21:27.044830+0000 mgr.y (mgr.14520) 1031 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:28.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:28 vm01 bash[20728]: cluster 2026-03-09T16:21:27.044830+0000 mgr.y (mgr.14520) 1031 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:28.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:28 vm01 bash[20728]: cluster 2026-03-09T16:21:27.044830+0000 mgr.y (mgr.14520) 1031 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:29 vm09 bash[22983]: audit 2026-03-09T16:21:27.424607+0000 mgr.y (mgr.14520) 1032 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:29 vm09 bash[22983]: audit 2026-03-09T16:21:27.424607+0000 mgr.y (mgr.14520) 1032 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:29 vm01 bash[28152]: audit 2026-03-09T16:21:27.424607+0000 mgr.y (mgr.14520) 1032 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:29 vm01 bash[28152]: audit 2026-03-09T16:21:27.424607+0000 mgr.y (mgr.14520) 1032 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-09T16:21:29.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:29 vm01 bash[20728]: audit 2026-03-09T16:21:27.424607+0000 mgr.y (mgr.14520) 1032 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:29.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:29 vm01 bash[20728]: audit 2026-03-09T16:21:27.424607+0000 mgr.y (mgr.14520) 1032 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:30 vm09 bash[22983]: cluster 2026-03-09T16:21:29.045349+0000 mgr.y (mgr.14520) 1033 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:30 vm09 bash[22983]: cluster 2026-03-09T16:21:29.045349+0000 mgr.y (mgr.14520) 1033 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:30 vm09 bash[22983]: audit 2026-03-09T16:21:29.855046+0000 mon.a (mon.0) 3874 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:30 vm09 bash[22983]: audit 2026-03-09T16:21:29.855046+0000 mon.a (mon.0) 3874 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:30 vm01 bash[28152]: cluster 2026-03-09T16:21:29.045349+0000 mgr.y (mgr.14520) 1033 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:30 vm01 bash[28152]: cluster 2026-03-09T16:21:29.045349+0000 mgr.y (mgr.14520) 1033 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:30 vm01 bash[28152]: audit 2026-03-09T16:21:29.855046+0000 mon.a (mon.0) 3874 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:30 vm01 bash[28152]: audit 2026-03-09T16:21:29.855046+0000 mon.a (mon.0) 3874 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:30.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:30 vm01 bash[20728]: cluster 2026-03-09T16:21:29.045349+0000 mgr.y (mgr.14520) 1033 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:30.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:30 vm01 bash[20728]: cluster 2026-03-09T16:21:29.045349+0000 mgr.y (mgr.14520) 1033 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:30.672 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:30 vm01 bash[20728]: audit 2026-03-09T16:21:29.855046+0000 mon.a (mon.0) 3874 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:30.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:30 vm01 bash[20728]: audit 2026-03-09T16:21:29.855046+0000 mon.a (mon.0) 3874 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:31 vm09 bash[22983]: cluster 2026-03-09T16:21:31.045836+0000 mgr.y (mgr.14520) 1034 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:31 vm09 bash[22983]: cluster 2026-03-09T16:21:31.045836+0000 mgr.y (mgr.14520) 1034 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:31.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:31 vm01 bash[28152]: cluster 2026-03-09T16:21:31.045836+0000 mgr.y (mgr.14520) 1034 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:31.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:31 vm01 bash[28152]: cluster 2026-03-09T16:21:31.045836+0000 mgr.y (mgr.14520) 1034 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:31.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:31 vm01 bash[20728]: cluster 2026-03-09T16:21:31.045836+0000 mgr.y (mgr.14520) 1034 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:31.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:31 vm01 bash[20728]: cluster 2026-03-09T16:21:31.045836+0000 mgr.y (mgr.14520) 1034 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:33.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:21:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:21:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:21:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:34 vm09 bash[22983]: cluster 2026-03-09T16:21:33.046097+0000 mgr.y (mgr.14520) 1035 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:34 vm09 bash[22983]: cluster 2026-03-09T16:21:33.046097+0000 mgr.y (mgr.14520) 1035 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:34.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:34 vm01 bash[28152]: cluster 2026-03-09T16:21:33.046097+0000 mgr.y (mgr.14520) 1035 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:34.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:34 vm01 bash[28152]: cluster 2026-03-09T16:21:33.046097+0000 mgr.y (mgr.14520) 1035 : cluster [DBG] pgmap 
v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:34.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:34 vm01 bash[20728]: cluster 2026-03-09T16:21:33.046097+0000 mgr.y (mgr.14520) 1035 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:34.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:34 vm01 bash[20728]: cluster 2026-03-09T16:21:33.046097+0000 mgr.y (mgr.14520) 1035 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:36 vm09 bash[22983]: cluster 2026-03-09T16:21:35.046886+0000 mgr.y (mgr.14520) 1036 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:36 vm09 bash[22983]: cluster 2026-03-09T16:21:35.046886+0000 mgr.y (mgr.14520) 1036 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:36.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:36 vm01 bash[28152]: cluster 2026-03-09T16:21:35.046886+0000 mgr.y (mgr.14520) 1036 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:36.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:36 vm01 bash[28152]: cluster 2026-03-09T16:21:35.046886+0000 mgr.y (mgr.14520) 1036 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:36 vm01 bash[20728]: cluster 2026-03-09T16:21:35.046886+0000 mgr.y (mgr.14520) 1036 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:36 vm01 bash[20728]: cluster 2026-03-09T16:21:35.046886+0000 mgr.y (mgr.14520) 1036 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:37.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:21:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:21:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:38 vm09 bash[22983]: cluster 2026-03-09T16:21:37.047183+0000 mgr.y (mgr.14520) 1037 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:38 vm09 bash[22983]: cluster 2026-03-09T16:21:37.047183+0000 mgr.y (mgr.14520) 1037 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:38.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:38 vm01 bash[28152]: cluster 2026-03-09T16:21:37.047183+0000 mgr.y (mgr.14520) 1037 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:38.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:38 vm01 bash[28152]: cluster 
2026-03-09T16:21:37.047183+0000 mgr.y (mgr.14520) 1037 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:38.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:38 vm01 bash[20728]: cluster 2026-03-09T16:21:37.047183+0000 mgr.y (mgr.14520) 1037 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:38.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:38 vm01 bash[20728]: cluster 2026-03-09T16:21:37.047183+0000 mgr.y (mgr.14520) 1037 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:39 vm09 bash[22983]: audit 2026-03-09T16:21:37.431462+0000 mgr.y (mgr.14520) 1038 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:39.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:39 vm09 bash[22983]: audit 2026-03-09T16:21:37.431462+0000 mgr.y (mgr.14520) 1038 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:39.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:39 vm01 bash[28152]: audit 2026-03-09T16:21:37.431462+0000 mgr.y (mgr.14520) 1038 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:39.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:39 vm01 bash[28152]: audit 2026-03-09T16:21:37.431462+0000 mgr.y (mgr.14520) 1038 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:39.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:39 vm01 bash[20728]: audit 2026-03-09T16:21:37.431462+0000 mgr.y (mgr.14520) 1038 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:39.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:39 vm01 bash[20728]: audit 2026-03-09T16:21:37.431462+0000 mgr.y (mgr.14520) 1038 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:40 vm09 bash[22983]: cluster 2026-03-09T16:21:39.047741+0000 mgr.y (mgr.14520) 1039 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:40.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:40 vm09 bash[22983]: cluster 2026-03-09T16:21:39.047741+0000 mgr.y (mgr.14520) 1039 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:40.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:40 vm01 bash[28152]: cluster 2026-03-09T16:21:39.047741+0000 mgr.y (mgr.14520) 1039 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:40.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:40 vm01 bash[28152]: cluster 2026-03-09T16:21:39.047741+0000 mgr.y (mgr.14520) 1039 : cluster [DBG] pgmap v1484: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:40.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:40 vm01 bash[20728]: cluster 2026-03-09T16:21:39.047741+0000 mgr.y (mgr.14520) 1039 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:40.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:40 vm01 bash[20728]: cluster 2026-03-09T16:21:39.047741+0000 mgr.y (mgr.14520) 1039 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:42 vm09 bash[22983]: cluster 2026-03-09T16:21:41.048437+0000 mgr.y (mgr.14520) 1040 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:42.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:42 vm09 bash[22983]: cluster 2026-03-09T16:21:41.048437+0000 mgr.y (mgr.14520) 1040 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:42.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:42 vm01 bash[20728]: cluster 2026-03-09T16:21:41.048437+0000 mgr.y (mgr.14520) 1040 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:42.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:42 vm01 bash[20728]: cluster 2026-03-09T16:21:41.048437+0000 mgr.y (mgr.14520) 1040 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:42.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:42 vm01 bash[28152]: cluster 2026-03-09T16:21:41.048437+0000 mgr.y (mgr.14520) 1040 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:42.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:42 vm01 bash[28152]: cluster 2026-03-09T16:21:41.048437+0000 mgr.y (mgr.14520) 1040 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:43.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:21:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:21:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:21:44.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:44 vm01 bash[28152]: cluster 2026-03-09T16:21:43.049039+0000 mgr.y (mgr.14520) 1041 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:44.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:44 vm01 bash[28152]: cluster 2026-03-09T16:21:43.049039+0000 mgr.y (mgr.14520) 1041 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:44.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:44 vm01 bash[20728]: cluster 2026-03-09T16:21:43.049039+0000 mgr.y (mgr.14520) 1041 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:44.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:44 vm01 
bash[20728]: cluster 2026-03-09T16:21:43.049039+0000 mgr.y (mgr.14520) 1041 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:44 vm09 bash[22983]: cluster 2026-03-09T16:21:43.049039+0000 mgr.y (mgr.14520) 1041 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:44 vm09 bash[22983]: cluster 2026-03-09T16:21:43.049039+0000 mgr.y (mgr.14520) 1041 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:45.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:45 vm01 bash[28152]: audit 2026-03-09T16:21:44.860602+0000 mon.a (mon.0) 3875 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:45.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:45 vm01 bash[28152]: audit 2026-03-09T16:21:44.860602+0000 mon.a (mon.0) 3875 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:45.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:45 vm01 bash[20728]: audit 2026-03-09T16:21:44.860602+0000 mon.a (mon.0) 3875 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:45.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:45 vm01 bash[20728]: audit 2026-03-09T16:21:44.860602+0000 mon.a (mon.0) 3875 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:45 vm09 bash[22983]: audit 2026-03-09T16:21:44.860602+0000 mon.a (mon.0) 3875 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:45 vm09 bash[22983]: audit 2026-03-09T16:21:44.860602+0000 mon.a (mon.0) 3875 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:21:46.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:46 vm01 bash[28152]: cluster 2026-03-09T16:21:45.049573+0000 mgr.y (mgr.14520) 1042 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:46.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:46 vm01 bash[28152]: cluster 2026-03-09T16:21:45.049573+0000 mgr.y (mgr.14520) 1042 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:46.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:46 vm01 bash[20728]: cluster 2026-03-09T16:21:45.049573+0000 mgr.y (mgr.14520) 1042 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:46.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:46 vm01 bash[20728]: cluster 2026-03-09T16:21:45.049573+0000 
mgr.y (mgr.14520) 1042 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:46 vm09 bash[22983]: cluster 2026-03-09T16:21:45.049573+0000 mgr.y (mgr.14520) 1042 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:46 vm09 bash[22983]: cluster 2026-03-09T16:21:45.049573+0000 mgr.y (mgr.14520) 1042 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:47.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:21:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:21:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:48 vm09 bash[22983]: cluster 2026-03-09T16:21:47.049887+0000 mgr.y (mgr.14520) 1043 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:48 vm09 bash[22983]: cluster 2026-03-09T16:21:47.049887+0000 mgr.y (mgr.14520) 1043 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:48 vm01 bash[28152]: cluster 2026-03-09T16:21:47.049887+0000 mgr.y (mgr.14520) 1043 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:48 vm01 bash[28152]: cluster 2026-03-09T16:21:47.049887+0000 mgr.y (mgr.14520) 1043 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:48.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:48 vm01 bash[20728]: cluster 2026-03-09T16:21:47.049887+0000 mgr.y (mgr.14520) 1043 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:48.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:48 vm01 bash[20728]: cluster 2026-03-09T16:21:47.049887+0000 mgr.y (mgr.14520) 1043 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:49 vm09 bash[22983]: audit 2026-03-09T16:21:47.439533+0000 mgr.y (mgr.14520) 1044 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:49 vm09 bash[22983]: audit 2026-03-09T16:21:47.439533+0000 mgr.y (mgr.14520) 1044 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:49 vm01 bash[28152]: audit 2026-03-09T16:21:47.439533+0000 mgr.y (mgr.14520) 1044 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:49 
vm01 bash[28152]: audit 2026-03-09T16:21:47.439533+0000 mgr.y (mgr.14520) 1044 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:49.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:49 vm01 bash[20728]: audit 2026-03-09T16:21:47.439533+0000 mgr.y (mgr.14520) 1044 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:49.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:49 vm01 bash[20728]: audit 2026-03-09T16:21:47.439533+0000 mgr.y (mgr.14520) 1044 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:50 vm09 bash[22983]: cluster 2026-03-09T16:21:49.050503+0000 mgr.y (mgr.14520) 1045 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:50 vm09 bash[22983]: cluster 2026-03-09T16:21:49.050503+0000 mgr.y (mgr.14520) 1045 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:50 vm01 bash[28152]: cluster 2026-03-09T16:21:49.050503+0000 mgr.y (mgr.14520) 1045 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:50 vm01 bash[28152]: cluster 2026-03-09T16:21:49.050503+0000 mgr.y (mgr.14520) 1045 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:50.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:50 vm01 bash[20728]: cluster 2026-03-09T16:21:49.050503+0000 mgr.y (mgr.14520) 1045 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:50.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:50 vm01 bash[20728]: cluster 2026-03-09T16:21:49.050503+0000 mgr.y (mgr.14520) 1045 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:52 vm09 bash[22983]: cluster 2026-03-09T16:21:51.050965+0000 mgr.y (mgr.14520) 1046 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:52 vm09 bash[22983]: cluster 2026-03-09T16:21:51.050965+0000 mgr.y (mgr.14520) 1046 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:52 vm01 bash[28152]: cluster 2026-03-09T16:21:51.050965+0000 mgr.y (mgr.14520) 1046 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:52 vm01 bash[28152]: cluster 2026-03-09T16:21:51.050965+0000 mgr.y (mgr.14520) 1046 : cluster 
[DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:52 vm01 bash[20728]: cluster 2026-03-09T16:21:51.050965+0000 mgr.y (mgr.14520) 1046 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:52 vm01 bash[20728]: cluster 2026-03-09T16:21:51.050965+0000 mgr.y (mgr.14520) 1046 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:53.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:21:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:21:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:21:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:54 vm09 bash[22983]: cluster 2026-03-09T16:21:53.051251+0000 mgr.y (mgr.14520) 1047 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:54 vm09 bash[22983]: cluster 2026-03-09T16:21:53.051251+0000 mgr.y (mgr.14520) 1047 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:54 vm09 bash[22983]: audit 2026-03-09T16:21:54.199130+0000 mon.a (mon.0) 3876 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:21:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:54 vm09 bash[22983]: audit 2026-03-09T16:21:54.199130+0000 mon.a (mon.0) 3876 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:21:54.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:54 vm01 bash[20728]: cluster 2026-03-09T16:21:53.051251+0000 mgr.y (mgr.14520) 1047 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:54 vm01 bash[20728]: cluster 2026-03-09T16:21:53.051251+0000 mgr.y (mgr.14520) 1047 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:54 vm01 bash[20728]: audit 2026-03-09T16:21:54.199130+0000 mon.a (mon.0) 3876 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:21:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:54 vm01 bash[20728]: audit 2026-03-09T16:21:54.199130+0000 mon.a (mon.0) 3876 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:21:54.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:54 vm01 bash[28152]: cluster 2026-03-09T16:21:53.051251+0000 mgr.y (mgr.14520) 1047 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:54.673 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:54 vm01 bash[28152]: cluster 2026-03-09T16:21:53.051251+0000 mgr.y (mgr.14520) 1047 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:54.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:54 vm01 bash[28152]: audit 2026-03-09T16:21:54.199130+0000 mon.a (mon.0) 3876 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:21:54.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:54 vm01 bash[28152]: audit 2026-03-09T16:21:54.199130+0000 mon.a (mon.0) 3876 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:21:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:56 vm09 bash[22983]: cluster 2026-03-09T16:21:55.051975+0000 mgr.y (mgr.14520) 1048 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:56 vm09 bash[22983]: cluster 2026-03-09T16:21:55.051975+0000 mgr.y (mgr.14520) 1048 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:56.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:56 vm01 bash[28152]: cluster 2026-03-09T16:21:55.051975+0000 mgr.y (mgr.14520) 1048 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:56.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:56 vm01 bash[28152]: cluster 2026-03-09T16:21:55.051975+0000 mgr.y (mgr.14520) 1048 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:56.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:56 vm01 bash[20728]: cluster 2026-03-09T16:21:55.051975+0000 mgr.y (mgr.14520) 1048 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:56.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:56 vm01 bash[20728]: cluster 2026-03-09T16:21:55.051975+0000 mgr.y (mgr.14520) 1048 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:21:57.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:21:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:21:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:58 vm09 bash[22983]: cluster 2026-03-09T16:21:57.052356+0000 mgr.y (mgr.14520) 1049 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:58 vm09 bash[22983]: cluster 2026-03-09T16:21:57.052356+0000 mgr.y (mgr.14520) 1049 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:58 vm01 bash[28152]: cluster 2026-03-09T16:21:57.052356+0000 mgr.y (mgr.14520) 1049 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:58 vm01 bash[28152]: cluster 2026-03-09T16:21:57.052356+0000 mgr.y (mgr.14520) 1049 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:58 vm01 bash[20728]: cluster 2026-03-09T16:21:57.052356+0000 mgr.y (mgr.14520) 1049 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:58 vm01 bash[20728]: cluster 2026-03-09T16:21:57.052356+0000 mgr.y (mgr.14520) 1049 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:21:59.479 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:59 vm01 bash[28152]: audit 2026-03-09T16:21:57.450126+0000 mgr.y (mgr.14520) 1050 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:59.479 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:21:59 vm01 bash[28152]: audit 2026-03-09T16:21:57.450126+0000 mgr.y (mgr.14520) 1050 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:59.479 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:59 vm01 bash[20728]: audit 2026-03-09T16:21:57.450126+0000 mgr.y (mgr.14520) 1050 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:59.479 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:21:59 vm01 bash[20728]: audit 2026-03-09T16:21:57.450126+0000 mgr.y (mgr.14520) 1050 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:59 vm09 bash[22983]: audit 2026-03-09T16:21:57.450126+0000 mgr.y (mgr.14520) 1050 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:21:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:21:59 vm09 bash[22983]: audit 2026-03-09T16:21:57.450126+0000 mgr.y (mgr.14520) 1050 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: cluster 2026-03-09T16:21:59.052909+0000 mgr.y (mgr.14520) 1051 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: cluster 2026-03-09T16:21:59.052909+0000 mgr.y (mgr.14520) 1051 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:21:59.866466+0000 mon.a (mon.0) 3877 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:00.423 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:21:59.866466+0000 mon.a (mon.0) 3877 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:22:00.106870+0000 mon.a (mon.0) 3878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:22:00.106870+0000 mon.a (mon.0) 3878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:22:00.113733+0000 mon.a (mon.0) 3879 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:22:00.113733+0000 mon.a (mon.0) 3879 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:22:00.114773+0000 mon.a (mon.0) 3880 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:22:00.114773+0000 mon.a (mon.0) 3880 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:22:00.116533+0000 mon.a (mon.0) 3881 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:22:00.116533+0000 mon.a (mon.0) 3881 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:22:00.120393+0000 mon.a (mon.0) 3882 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:00 vm01 bash[28152]: audit 2026-03-09T16:22:00.120393+0000 mon.a (mon.0) 3882 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: cluster 2026-03-09T16:21:59.052909+0000 mgr.y (mgr.14520) 1051 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: cluster 2026-03-09T16:21:59.052909+0000 mgr.y (mgr.14520) 1051 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:21:59.866466+0000 mon.a (mon.0) 3877 : audit [DBG] 
from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:21:59.866466+0000 mon.a (mon.0) 3877 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:22:00.106870+0000 mon.a (mon.0) 3878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:22:00.106870+0000 mon.a (mon.0) 3878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:22:00.113733+0000 mon.a (mon.0) 3879 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:22:00.113733+0000 mon.a (mon.0) 3879 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:22:00.114773+0000 mon.a (mon.0) 3880 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:22:00.114773+0000 mon.a (mon.0) 3880 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:22:00.116533+0000 mon.a (mon.0) 3881 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:22:00.116533+0000 mon.a (mon.0) 3881 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:22:00.120393+0000 mon.a (mon.0) 3882 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:00 vm01 bash[20728]: audit 2026-03-09T16:22:00.120393+0000 mon.a (mon.0) 3882 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: cluster 2026-03-09T16:21:59.052909+0000 mgr.y (mgr.14520) 1051 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: cluster 2026-03-09T16:21:59.052909+0000 mgr.y (mgr.14520) 1051 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:21:59.866466+0000 mon.a (mon.0) 3877 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:21:59.866466+0000 mon.a (mon.0) 3877 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:22:00.106870+0000 mon.a (mon.0) 3878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:22:00.106870+0000 mon.a (mon.0) 3878 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:22:00.113733+0000 mon.a (mon.0) 3879 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:22:00.113733+0000 mon.a (mon.0) 3879 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:22:00.114773+0000 mon.a (mon.0) 3880 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:22:00.114773+0000 mon.a (mon.0) 3880 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:22:00.116533+0000 mon.a (mon.0) 3881 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:22:00.116533+0000 mon.a (mon.0) 3881 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:22:00.120393+0000 mon.a (mon.0) 3882 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:00 vm09 bash[22983]: audit 2026-03-09T16:22:00.120393+0000 mon.a (mon.0) 3882 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:22:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:02 vm09 bash[22983]: cluster 2026-03-09T16:22:01.053586+0000 mgr.y (mgr.14520) 1052 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:02 vm09 bash[22983]: cluster 2026-03-09T16:22:01.053586+0000 
mgr.y (mgr.14520) 1052 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:02.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:02 vm01 bash[28152]: cluster 2026-03-09T16:22:01.053586+0000 mgr.y (mgr.14520) 1052 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:02.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:02 vm01 bash[28152]: cluster 2026-03-09T16:22:01.053586+0000 mgr.y (mgr.14520) 1052 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:02.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:02 vm01 bash[20728]: cluster 2026-03-09T16:22:01.053586+0000 mgr.y (mgr.14520) 1052 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:02.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:02 vm01 bash[20728]: cluster 2026-03-09T16:22:01.053586+0000 mgr.y (mgr.14520) 1052 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:03.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:22:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:22:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:22:03.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:03 vm09 bash[22983]: cluster 2026-03-09T16:22:03.053894+0000 mgr.y (mgr.14520) 1053 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:03.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:03 vm09 bash[22983]: cluster 2026-03-09T16:22:03.053894+0000 mgr.y (mgr.14520) 1053 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:03.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:03 vm01 bash[20728]: cluster 2026-03-09T16:22:03.053894+0000 mgr.y (mgr.14520) 1053 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:03.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:03 vm01 bash[20728]: cluster 2026-03-09T16:22:03.053894+0000 mgr.y (mgr.14520) 1053 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:03.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:03 vm01 bash[28152]: cluster 2026-03-09T16:22:03.053894+0000 mgr.y (mgr.14520) 1053 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:03.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:03 vm01 bash[28152]: cluster 2026-03-09T16:22:03.053894+0000 mgr.y (mgr.14520) 1053 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:06 vm09 bash[22983]: cluster 2026-03-09T16:22:05.054730+0000 mgr.y (mgr.14520) 1054 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T16:22:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:06 vm09 bash[22983]: cluster 2026-03-09T16:22:05.054730+0000 mgr.y (mgr.14520) 1054 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:06.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:06 vm01 bash[20728]: cluster 2026-03-09T16:22:05.054730+0000 mgr.y (mgr.14520) 1054 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:06.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:06 vm01 bash[20728]: cluster 2026-03-09T16:22:05.054730+0000 mgr.y (mgr.14520) 1054 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:06.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:06 vm01 bash[28152]: cluster 2026-03-09T16:22:05.054730+0000 mgr.y (mgr.14520) 1054 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:06.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:06 vm01 bash[28152]: cluster 2026-03-09T16:22:05.054730+0000 mgr.y (mgr.14520) 1054 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:07.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:22:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:22:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:08 vm09 bash[22983]: cluster 2026-03-09T16:22:07.054978+0000 mgr.y (mgr.14520) 1055 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:08 vm09 bash[22983]: cluster 2026-03-09T16:22:07.054978+0000 mgr.y (mgr.14520) 1055 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:08.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:08 vm01 bash[20728]: cluster 2026-03-09T16:22:07.054978+0000 mgr.y (mgr.14520) 1055 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:08.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:08 vm01 bash[20728]: cluster 2026-03-09T16:22:07.054978+0000 mgr.y (mgr.14520) 1055 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:08.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:08 vm01 bash[28152]: cluster 2026-03-09T16:22:07.054978+0000 mgr.y (mgr.14520) 1055 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:08.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:08 vm01 bash[28152]: cluster 2026-03-09T16:22:07.054978+0000 mgr.y (mgr.14520) 1055 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:09 vm09 bash[22983]: audit 2026-03-09T16:22:07.458141+0000 mgr.y (mgr.14520) 1056 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:09 vm09 bash[22983]: audit 2026-03-09T16:22:07.458141+0000 mgr.y (mgr.14520) 1056 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:09.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:09 vm01 bash[20728]: audit 2026-03-09T16:22:07.458141+0000 mgr.y (mgr.14520) 1056 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:09.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:09 vm01 bash[20728]: audit 2026-03-09T16:22:07.458141+0000 mgr.y (mgr.14520) 1056 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:09.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:09 vm01 bash[28152]: audit 2026-03-09T16:22:07.458141+0000 mgr.y (mgr.14520) 1056 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:09.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:09 vm01 bash[28152]: audit 2026-03-09T16:22:07.458141+0000 mgr.y (mgr.14520) 1056 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:10 vm09 bash[22983]: cluster 2026-03-09T16:22:09.055384+0000 mgr.y (mgr.14520) 1057 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:10 vm09 bash[22983]: cluster 2026-03-09T16:22:09.055384+0000 mgr.y (mgr.14520) 1057 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:10.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:10 vm01 bash[20728]: cluster 2026-03-09T16:22:09.055384+0000 mgr.y (mgr.14520) 1057 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:10.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:10 vm01 bash[20728]: cluster 2026-03-09T16:22:09.055384+0000 mgr.y (mgr.14520) 1057 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:10.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:10 vm01 bash[28152]: cluster 2026-03-09T16:22:09.055384+0000 mgr.y (mgr.14520) 1057 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:10.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:10 vm01 bash[28152]: cluster 2026-03-09T16:22:09.055384+0000 mgr.y (mgr.14520) 1057 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:12.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:12 vm09 bash[22983]: cluster 2026-03-09T16:22:11.055843+0000 mgr.y (mgr.14520) 1058 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:12.382 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:12 vm09 bash[22983]: cluster 2026-03-09T16:22:11.055843+0000 mgr.y (mgr.14520) 1058 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:12.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:12 vm01 bash[20728]: cluster 2026-03-09T16:22:11.055843+0000 mgr.y (mgr.14520) 1058 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:12.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:12 vm01 bash[20728]: cluster 2026-03-09T16:22:11.055843+0000 mgr.y (mgr.14520) 1058 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:12.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:12 vm01 bash[28152]: cluster 2026-03-09T16:22:11.055843+0000 mgr.y (mgr.14520) 1058 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:12.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:12 vm01 bash[28152]: cluster 2026-03-09T16:22:11.055843+0000 mgr.y (mgr.14520) 1058 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:13.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:22:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:22:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:22:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:14 vm09 bash[22983]: cluster 2026-03-09T16:22:13.056094+0000 mgr.y (mgr.14520) 1059 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:14.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:14 vm09 bash[22983]: cluster 2026-03-09T16:22:13.056094+0000 mgr.y (mgr.14520) 1059 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:14 vm01 bash[20728]: cluster 2026-03-09T16:22:13.056094+0000 mgr.y (mgr.14520) 1059 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:14 vm01 bash[20728]: cluster 2026-03-09T16:22:13.056094+0000 mgr.y (mgr.14520) 1059 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:14 vm01 bash[28152]: cluster 2026-03-09T16:22:13.056094+0000 mgr.y (mgr.14520) 1059 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:14 vm01 bash[28152]: cluster 2026-03-09T16:22:13.056094+0000 mgr.y (mgr.14520) 1059 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:15.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:15 vm01 bash[20728]: audit 2026-03-09T16:22:14.872543+0000 mon.a (mon.0) 3883 : audit [DBG] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:15.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:15 vm01 bash[20728]: audit 2026-03-09T16:22:14.872543+0000 mon.a (mon.0) 3883 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:15 vm01 bash[28152]: audit 2026-03-09T16:22:14.872543+0000 mon.a (mon.0) 3883 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:15 vm01 bash[28152]: audit 2026-03-09T16:22:14.872543+0000 mon.a (mon.0) 3883 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:15 vm09 bash[22983]: audit 2026-03-09T16:22:14.872543+0000 mon.a (mon.0) 3883 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:15 vm09 bash[22983]: audit 2026-03-09T16:22:14.872543+0000 mon.a (mon.0) 3883 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:16 vm01 bash[20728]: cluster 2026-03-09T16:22:15.056729+0000 mgr.y (mgr.14520) 1060 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:16 vm01 bash[20728]: cluster 2026-03-09T16:22:15.056729+0000 mgr.y (mgr.14520) 1060 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:16 vm01 bash[28152]: cluster 2026-03-09T16:22:15.056729+0000 mgr.y (mgr.14520) 1060 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:16 vm01 bash[28152]: cluster 2026-03-09T16:22:15.056729+0000 mgr.y (mgr.14520) 1060 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:16 vm09 bash[22983]: cluster 2026-03-09T16:22:15.056729+0000 mgr.y (mgr.14520) 1060 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:16 vm09 bash[22983]: cluster 2026-03-09T16:22:15.056729+0000 mgr.y (mgr.14520) 1060 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:17.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:22:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:22:18.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:18 vm01 
bash[20728]: cluster 2026-03-09T16:22:17.056982+0000 mgr.y (mgr.14520) 1061 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:18.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:18 vm01 bash[20728]: cluster 2026-03-09T16:22:17.056982+0000 mgr.y (mgr.14520) 1061 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:18.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:18 vm01 bash[28152]: cluster 2026-03-09T16:22:17.056982+0000 mgr.y (mgr.14520) 1061 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:18.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:18 vm01 bash[28152]: cluster 2026-03-09T16:22:17.056982+0000 mgr.y (mgr.14520) 1061 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:18 vm09 bash[22983]: cluster 2026-03-09T16:22:17.056982+0000 mgr.y (mgr.14520) 1061 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:18 vm09 bash[22983]: cluster 2026-03-09T16:22:17.056982+0000 mgr.y (mgr.14520) 1061 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:19.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:19 vm01 bash[28152]: audit 2026-03-09T16:22:17.466146+0000 mgr.y (mgr.14520) 1062 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:19.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:19 vm01 bash[28152]: audit 2026-03-09T16:22:17.466146+0000 mgr.y (mgr.14520) 1062 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:19.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:19 vm01 bash[20728]: audit 2026-03-09T16:22:17.466146+0000 mgr.y (mgr.14520) 1062 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:19.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:19 vm01 bash[20728]: audit 2026-03-09T16:22:17.466146+0000 mgr.y (mgr.14520) 1062 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:19 vm09 bash[22983]: audit 2026-03-09T16:22:17.466146+0000 mgr.y (mgr.14520) 1062 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:19 vm09 bash[22983]: audit 2026-03-09T16:22:17.466146+0000 mgr.y (mgr.14520) 1062 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:20.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:20 vm01 bash[28152]: cluster 2026-03-09T16:22:19.057459+0000 mgr.y (mgr.14520) 1063 : cluster [DBG] pgmap v1504: 
228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:20.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:20 vm01 bash[28152]: cluster 2026-03-09T16:22:19.057459+0000 mgr.y (mgr.14520) 1063 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:20.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:20 vm01 bash[20728]: cluster 2026-03-09T16:22:19.057459+0000 mgr.y (mgr.14520) 1063 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:20.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:20 vm01 bash[20728]: cluster 2026-03-09T16:22:19.057459+0000 mgr.y (mgr.14520) 1063 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:20 vm09 bash[22983]: cluster 2026-03-09T16:22:19.057459+0000 mgr.y (mgr.14520) 1063 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:20 vm09 bash[22983]: cluster 2026-03-09T16:22:19.057459+0000 mgr.y (mgr.14520) 1063 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:22 vm09 bash[22983]: cluster 2026-03-09T16:22:21.058086+0000 mgr.y (mgr.14520) 1064 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:22 vm09 bash[22983]: cluster 2026-03-09T16:22:21.058086+0000 mgr.y (mgr.14520) 1064 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:22.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:22 vm01 bash[28152]: cluster 2026-03-09T16:22:21.058086+0000 mgr.y (mgr.14520) 1064 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:22.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:22 vm01 bash[28152]: cluster 2026-03-09T16:22:21.058086+0000 mgr.y (mgr.14520) 1064 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:22 vm01 bash[20728]: cluster 2026-03-09T16:22:21.058086+0000 mgr.y (mgr.14520) 1064 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:22 vm01 bash[20728]: cluster 2026-03-09T16:22:21.058086+0000 mgr.y (mgr.14520) 1064 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:23.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:22:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:22:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:22:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
16:22:24 vm09 bash[22983]: cluster 2026-03-09T16:22:23.058331+0000 mgr.y (mgr.14520) 1065 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:24 vm09 bash[22983]: cluster 2026-03-09T16:22:23.058331+0000 mgr.y (mgr.14520) 1065 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:24.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:24 vm01 bash[28152]: cluster 2026-03-09T16:22:23.058331+0000 mgr.y (mgr.14520) 1065 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:24.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:24 vm01 bash[28152]: cluster 2026-03-09T16:22:23.058331+0000 mgr.y (mgr.14520) 1065 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:24 vm01 bash[20728]: cluster 2026-03-09T16:22:23.058331+0000 mgr.y (mgr.14520) 1065 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:24 vm01 bash[20728]: cluster 2026-03-09T16:22:23.058331+0000 mgr.y (mgr.14520) 1065 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:26 vm09 bash[22983]: cluster 2026-03-09T16:22:25.058966+0000 mgr.y (mgr.14520) 1066 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:26 vm09 bash[22983]: cluster 2026-03-09T16:22:25.058966+0000 mgr.y (mgr.14520) 1066 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:26.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:26 vm01 bash[28152]: cluster 2026-03-09T16:22:25.058966+0000 mgr.y (mgr.14520) 1066 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:26.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:26 vm01 bash[28152]: cluster 2026-03-09T16:22:25.058966+0000 mgr.y (mgr.14520) 1066 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:26 vm01 bash[20728]: cluster 2026-03-09T16:22:25.058966+0000 mgr.y (mgr.14520) 1066 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:26 vm01 bash[20728]: cluster 2026-03-09T16:22:25.058966+0000 mgr.y (mgr.14520) 1066 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:27.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:22:27 vm09 bash[48403]: debug there is no tcmu-runner data available 
2026-03-09T16:22:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:28 vm09 bash[22983]: cluster 2026-03-09T16:22:27.059265+0000 mgr.y (mgr.14520) 1067 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:28 vm09 bash[22983]: cluster 2026-03-09T16:22:27.059265+0000 mgr.y (mgr.14520) 1067 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:28.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:28 vm01 bash[28152]: cluster 2026-03-09T16:22:27.059265+0000 mgr.y (mgr.14520) 1067 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:28.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:28 vm01 bash[28152]: cluster 2026-03-09T16:22:27.059265+0000 mgr.y (mgr.14520) 1067 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:28.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:28 vm01 bash[20728]: cluster 2026-03-09T16:22:27.059265+0000 mgr.y (mgr.14520) 1067 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:28.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:28 vm01 bash[20728]: cluster 2026-03-09T16:22:27.059265+0000 mgr.y (mgr.14520) 1067 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:29 vm09 bash[22983]: audit 2026-03-09T16:22:27.473587+0000 mgr.y (mgr.14520) 1068 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:29 vm09 bash[22983]: audit 2026-03-09T16:22:27.473587+0000 mgr.y (mgr.14520) 1068 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:29.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:29 vm01 bash[28152]: audit 2026-03-09T16:22:27.473587+0000 mgr.y (mgr.14520) 1068 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:29.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:29 vm01 bash[28152]: audit 2026-03-09T16:22:27.473587+0000 mgr.y (mgr.14520) 1068 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:29 vm01 bash[20728]: audit 2026-03-09T16:22:27.473587+0000 mgr.y (mgr.14520) 1068 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:29 vm01 bash[20728]: audit 2026-03-09T16:22:27.473587+0000 mgr.y (mgr.14520) 1068 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:30 vm09 bash[22983]: cluster 
2026-03-09T16:22:29.059812+0000 mgr.y (mgr.14520) 1069 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:30 vm09 bash[22983]: cluster 2026-03-09T16:22:29.059812+0000 mgr.y (mgr.14520) 1069 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:30 vm09 bash[22983]: audit 2026-03-09T16:22:29.878656+0000 mon.a (mon.0) 3884 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:30 vm09 bash[22983]: audit 2026-03-09T16:22:29.878656+0000 mon.a (mon.0) 3884 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:30 vm01 bash[28152]: cluster 2026-03-09T16:22:29.059812+0000 mgr.y (mgr.14520) 1069 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:30 vm01 bash[28152]: cluster 2026-03-09T16:22:29.059812+0000 mgr.y (mgr.14520) 1069 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:30 vm01 bash[28152]: audit 2026-03-09T16:22:29.878656+0000 mon.a (mon.0) 3884 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:30 vm01 bash[28152]: audit 2026-03-09T16:22:29.878656+0000 mon.a (mon.0) 3884 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:30 vm01 bash[20728]: cluster 2026-03-09T16:22:29.059812+0000 mgr.y (mgr.14520) 1069 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:30 vm01 bash[20728]: cluster 2026-03-09T16:22:29.059812+0000 mgr.y (mgr.14520) 1069 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:30 vm01 bash[20728]: audit 2026-03-09T16:22:29.878656+0000 mon.a (mon.0) 3884 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:30 vm01 bash[20728]: audit 2026-03-09T16:22:29.878656+0000 mon.a (mon.0) 3884 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:31 vm09 bash[22983]: cluster 2026-03-09T16:22:31.060314+0000 mgr.y (mgr.14520) 1070 : 
cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:31 vm09 bash[22983]: cluster 2026-03-09T16:22:31.060314+0000 mgr.y (mgr.14520) 1070 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:31.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:31 vm01 bash[28152]: cluster 2026-03-09T16:22:31.060314+0000 mgr.y (mgr.14520) 1070 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:31.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:31 vm01 bash[28152]: cluster 2026-03-09T16:22:31.060314+0000 mgr.y (mgr.14520) 1070 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:31.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:31 vm01 bash[20728]: cluster 2026-03-09T16:22:31.060314+0000 mgr.y (mgr.14520) 1070 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:31.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:31 vm01 bash[20728]: cluster 2026-03-09T16:22:31.060314+0000 mgr.y (mgr.14520) 1070 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:33.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:22:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:22:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:22:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:34 vm09 bash[22983]: cluster 2026-03-09T16:22:33.060616+0000 mgr.y (mgr.14520) 1071 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:34.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:34 vm09 bash[22983]: cluster 2026-03-09T16:22:33.060616+0000 mgr.y (mgr.14520) 1071 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:34.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:34 vm01 bash[28152]: cluster 2026-03-09T16:22:33.060616+0000 mgr.y (mgr.14520) 1071 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:34.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:34 vm01 bash[28152]: cluster 2026-03-09T16:22:33.060616+0000 mgr.y (mgr.14520) 1071 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:34.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:34 vm01 bash[20728]: cluster 2026-03-09T16:22:33.060616+0000 mgr.y (mgr.14520) 1071 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:34.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:34 vm01 bash[20728]: cluster 2026-03-09T16:22:33.060616+0000 mgr.y (mgr.14520) 1071 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:36.382 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:36 vm09 bash[22983]: cluster 2026-03-09T16:22:35.061310+0000 mgr.y (mgr.14520) 1072 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:36.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:36 vm09 bash[22983]: cluster 2026-03-09T16:22:35.061310+0000 mgr.y (mgr.14520) 1072 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:36.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:36 vm01 bash[28152]: cluster 2026-03-09T16:22:35.061310+0000 mgr.y (mgr.14520) 1072 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:36.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:36 vm01 bash[28152]: cluster 2026-03-09T16:22:35.061310+0000 mgr.y (mgr.14520) 1072 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:36 vm01 bash[20728]: cluster 2026-03-09T16:22:35.061310+0000 mgr.y (mgr.14520) 1072 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:36 vm01 bash[20728]: cluster 2026-03-09T16:22:35.061310+0000 mgr.y (mgr.14520) 1072 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:37.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:22:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:22:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:38 vm09 bash[22983]: cluster 2026-03-09T16:22:37.061669+0000 mgr.y (mgr.14520) 1073 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:38.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:38 vm09 bash[22983]: cluster 2026-03-09T16:22:37.061669+0000 mgr.y (mgr.14520) 1073 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:38.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:38 vm01 bash[28152]: cluster 2026-03-09T16:22:37.061669+0000 mgr.y (mgr.14520) 1073 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:38.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:38 vm01 bash[28152]: cluster 2026-03-09T16:22:37.061669+0000 mgr.y (mgr.14520) 1073 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:38.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:38 vm01 bash[20728]: cluster 2026-03-09T16:22:37.061669+0000 mgr.y (mgr.14520) 1073 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:38.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:38 vm01 bash[20728]: cluster 2026-03-09T16:22:37.061669+0000 mgr.y (mgr.14520) 1073 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:39.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:39 vm01 bash[28152]: audit 2026-03-09T16:22:37.484293+0000 mgr.y (mgr.14520) 1074 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:39.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:39 vm01 bash[28152]: audit 2026-03-09T16:22:37.484293+0000 mgr.y (mgr.14520) 1074 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:39.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:39 vm01 bash[20728]: audit 2026-03-09T16:22:37.484293+0000 mgr.y (mgr.14520) 1074 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:39.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:39 vm01 bash[20728]: audit 2026-03-09T16:22:37.484293+0000 mgr.y (mgr.14520) 1074 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:39 vm09 bash[22983]: audit 2026-03-09T16:22:37.484293+0000 mgr.y (mgr.14520) 1074 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:39 vm09 bash[22983]: audit 2026-03-09T16:22:37.484293+0000 mgr.y (mgr.14520) 1074 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:40.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:40 vm01 bash[28152]: cluster 2026-03-09T16:22:39.062276+0000 mgr.y (mgr.14520) 1075 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:40.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:40 vm01 bash[28152]: cluster 2026-03-09T16:22:39.062276+0000 mgr.y (mgr.14520) 1075 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:40.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:40 vm01 bash[20728]: cluster 2026-03-09T16:22:39.062276+0000 mgr.y (mgr.14520) 1075 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:40.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:40 vm01 bash[20728]: cluster 2026-03-09T16:22:39.062276+0000 mgr.y (mgr.14520) 1075 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:40 vm09 bash[22983]: cluster 2026-03-09T16:22:39.062276+0000 mgr.y (mgr.14520) 1075 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:40 vm09 bash[22983]: cluster 2026-03-09T16:22:39.062276+0000 mgr.y (mgr.14520) 1075 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:42.423 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:42 vm01 bash[28152]: cluster 2026-03-09T16:22:41.062823+0000 mgr.y (mgr.14520) 1076 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:42.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:42 vm01 bash[28152]: cluster 2026-03-09T16:22:41.062823+0000 mgr.y (mgr.14520) 1076 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:42.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:42 vm01 bash[20728]: cluster 2026-03-09T16:22:41.062823+0000 mgr.y (mgr.14520) 1076 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:42.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:42 vm01 bash[20728]: cluster 2026-03-09T16:22:41.062823+0000 mgr.y (mgr.14520) 1076 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:42 vm09 bash[22983]: cluster 2026-03-09T16:22:41.062823+0000 mgr.y (mgr.14520) 1076 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:42 vm09 bash[22983]: cluster 2026-03-09T16:22:41.062823+0000 mgr.y (mgr.14520) 1076 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:43.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:22:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:22:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:22:44.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:44 vm01 bash[28152]: cluster 2026-03-09T16:22:43.063078+0000 mgr.y (mgr.14520) 1077 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:44.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:44 vm01 bash[28152]: cluster 2026-03-09T16:22:43.063078+0000 mgr.y (mgr.14520) 1077 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:44 vm01 bash[20728]: cluster 2026-03-09T16:22:43.063078+0000 mgr.y (mgr.14520) 1077 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:44 vm01 bash[20728]: cluster 2026-03-09T16:22:43.063078+0000 mgr.y (mgr.14520) 1077 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:44 vm09 bash[22983]: cluster 2026-03-09T16:22:43.063078+0000 mgr.y (mgr.14520) 1077 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:44 vm09 bash[22983]: cluster 2026-03-09T16:22:43.063078+0000 mgr.y (mgr.14520) 1077 : cluster [DBG] pgmap v1516: 228 
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:45.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:45 vm01 bash[28152]: audit 2026-03-09T16:22:44.884643+0000 mon.a (mon.0) 3885 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:45.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:45 vm01 bash[28152]: audit 2026-03-09T16:22:44.884643+0000 mon.a (mon.0) 3885 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:45.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:45 vm01 bash[20728]: audit 2026-03-09T16:22:44.884643+0000 mon.a (mon.0) 3885 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:45.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:45 vm01 bash[20728]: audit 2026-03-09T16:22:44.884643+0000 mon.a (mon.0) 3885 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:45 vm09 bash[22983]: audit 2026-03-09T16:22:44.884643+0000 mon.a (mon.0) 3885 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:45 vm09 bash[22983]: audit 2026-03-09T16:22:44.884643+0000 mon.a (mon.0) 3885 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:22:46.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:46 vm01 bash[28152]: cluster 2026-03-09T16:22:45.063836+0000 mgr.y (mgr.14520) 1078 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:46.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:46 vm01 bash[28152]: cluster 2026-03-09T16:22:45.063836+0000 mgr.y (mgr.14520) 1078 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:46.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:46 vm01 bash[20728]: cluster 2026-03-09T16:22:45.063836+0000 mgr.y (mgr.14520) 1078 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:46.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:46 vm01 bash[20728]: cluster 2026-03-09T16:22:45.063836+0000 mgr.y (mgr.14520) 1078 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:46 vm09 bash[22983]: cluster 2026-03-09T16:22:45.063836+0000 mgr.y (mgr.14520) 1078 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:46 vm09 bash[22983]: cluster 2026-03-09T16:22:45.063836+0000 mgr.y (mgr.14520) 1078 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:47.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:22:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:22:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:48 vm09 bash[22983]: cluster 2026-03-09T16:22:47.064157+0000 mgr.y (mgr.14520) 1079 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:48 vm09 bash[22983]: cluster 2026-03-09T16:22:47.064157+0000 mgr.y (mgr.14520) 1079 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:48 vm01 bash[28152]: cluster 2026-03-09T16:22:47.064157+0000 mgr.y (mgr.14520) 1079 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:48 vm01 bash[28152]: cluster 2026-03-09T16:22:47.064157+0000 mgr.y (mgr.14520) 1079 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:48.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:48 vm01 bash[20728]: cluster 2026-03-09T16:22:47.064157+0000 mgr.y (mgr.14520) 1079 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:48.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:48 vm01 bash[20728]: cluster 2026-03-09T16:22:47.064157+0000 mgr.y (mgr.14520) 1079 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:49 vm09 bash[22983]: audit 2026-03-09T16:22:47.493035+0000 mgr.y (mgr.14520) 1080 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:49 vm09 bash[22983]: audit 2026-03-09T16:22:47.493035+0000 mgr.y (mgr.14520) 1080 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:49 vm01 bash[28152]: audit 2026-03-09T16:22:47.493035+0000 mgr.y (mgr.14520) 1080 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:49 vm01 bash[28152]: audit 2026-03-09T16:22:47.493035+0000 mgr.y (mgr.14520) 1080 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:49.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:49 vm01 bash[20728]: audit 2026-03-09T16:22:47.493035+0000 mgr.y (mgr.14520) 1080 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:49.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:49 vm01 bash[20728]: audit 2026-03-09T16:22:47.493035+0000 mgr.y (mgr.14520) 1080 : audit [DBG] from='client.14496 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:50 vm09 bash[22983]: cluster 2026-03-09T16:22:49.064640+0000 mgr.y (mgr.14520) 1081 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:50 vm09 bash[22983]: cluster 2026-03-09T16:22:49.064640+0000 mgr.y (mgr.14520) 1081 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:50 vm01 bash[28152]: cluster 2026-03-09T16:22:49.064640+0000 mgr.y (mgr.14520) 1081 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:50 vm01 bash[28152]: cluster 2026-03-09T16:22:49.064640+0000 mgr.y (mgr.14520) 1081 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:50 vm01 bash[20728]: cluster 2026-03-09T16:22:49.064640+0000 mgr.y (mgr.14520) 1081 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:50 vm01 bash[20728]: cluster 2026-03-09T16:22:49.064640+0000 mgr.y (mgr.14520) 1081 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:52 vm09 bash[22983]: cluster 2026-03-09T16:22:51.065169+0000 mgr.y (mgr.14520) 1082 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:52 vm09 bash[22983]: cluster 2026-03-09T16:22:51.065169+0000 mgr.y (mgr.14520) 1082 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:52 vm01 bash[28152]: cluster 2026-03-09T16:22:51.065169+0000 mgr.y (mgr.14520) 1082 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:52 vm01 bash[28152]: cluster 2026-03-09T16:22:51.065169+0000 mgr.y (mgr.14520) 1082 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:52 vm01 bash[20728]: cluster 2026-03-09T16:22:51.065169+0000 mgr.y (mgr.14520) 1082 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:52 vm01 bash[20728]: cluster 2026-03-09T16:22:51.065169+0000 mgr.y (mgr.14520) 1082 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 
1 op/s 2026-03-09T16:22:53.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:22:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:22:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:22:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:54 vm09 bash[22983]: cluster 2026-03-09T16:22:53.065445+0000 mgr.y (mgr.14520) 1083 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:54 vm09 bash[22983]: cluster 2026-03-09T16:22:53.065445+0000 mgr.y (mgr.14520) 1083 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:54 vm01 bash[28152]: cluster 2026-03-09T16:22:53.065445+0000 mgr.y (mgr.14520) 1083 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:54 vm01 bash[28152]: cluster 2026-03-09T16:22:53.065445+0000 mgr.y (mgr.14520) 1083 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:54 vm01 bash[20728]: cluster 2026-03-09T16:22:53.065445+0000 mgr.y (mgr.14520) 1083 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:54 vm01 bash[20728]: cluster 2026-03-09T16:22:53.065445+0000 mgr.y (mgr.14520) 1083 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:56 vm09 bash[22983]: cluster 2026-03-09T16:22:55.066149+0000 mgr.y (mgr.14520) 1084 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:56 vm09 bash[22983]: cluster 2026-03-09T16:22:55.066149+0000 mgr.y (mgr.14520) 1084 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:56.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:56 vm01 bash[28152]: cluster 2026-03-09T16:22:55.066149+0000 mgr.y (mgr.14520) 1084 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:56.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:56 vm01 bash[28152]: cluster 2026-03-09T16:22:55.066149+0000 mgr.y (mgr.14520) 1084 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:56.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:56 vm01 bash[20728]: cluster 2026-03-09T16:22:55.066149+0000 mgr.y (mgr.14520) 1084 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:56.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:56 vm01 bash[20728]: cluster 2026-03-09T16:22:55.066149+0000 mgr.y (mgr.14520) 1084 : 
cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:22:57.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:22:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:22:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:58 vm09 bash[22983]: cluster 2026-03-09T16:22:57.066574+0000 mgr.y (mgr.14520) 1085 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:58 vm09 bash[22983]: cluster 2026-03-09T16:22:57.066574+0000 mgr.y (mgr.14520) 1085 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:58 vm01 bash[28152]: cluster 2026-03-09T16:22:57.066574+0000 mgr.y (mgr.14520) 1085 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:58 vm01 bash[28152]: cluster 2026-03-09T16:22:57.066574+0000 mgr.y (mgr.14520) 1085 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:58 vm01 bash[20728]: cluster 2026-03-09T16:22:57.066574+0000 mgr.y (mgr.14520) 1085 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:58 vm01 bash[20728]: cluster 2026-03-09T16:22:57.066574+0000 mgr.y (mgr.14520) 1085 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:22:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:59 vm09 bash[22983]: audit 2026-03-09T16:22:57.503632+0000 mgr.y (mgr.14520) 1086 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:22:59 vm09 bash[22983]: audit 2026-03-09T16:22:57.503632+0000 mgr.y (mgr.14520) 1086 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:59.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:59 vm01 bash[28152]: audit 2026-03-09T16:22:57.503632+0000 mgr.y (mgr.14520) 1086 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:59.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:22:59 vm01 bash[28152]: audit 2026-03-09T16:22:57.503632+0000 mgr.y (mgr.14520) 1086 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:59.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:59 vm01 bash[20728]: audit 2026-03-09T16:22:57.503632+0000 mgr.y (mgr.14520) 1086 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:22:59.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:22:59 vm01 bash[20728]: audit 
2026-03-09T16:22:57.503632+0000 mgr.y (mgr.14520) 1086 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:00 vm09 bash[22983]: cluster 2026-03-09T16:22:59.067349+0000 mgr.y (mgr.14520) 1087 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:00 vm09 bash[22983]: cluster 2026-03-09T16:22:59.067349+0000 mgr.y (mgr.14520) 1087 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:00 vm09 bash[22983]: audit 2026-03-09T16:22:59.890319+0000 mon.a (mon.0) 3886 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:00 vm09 bash[22983]: audit 2026-03-09T16:22:59.890319+0000 mon.a (mon.0) 3886 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:00 vm09 bash[22983]: audit 2026-03-09T16:23:00.157779+0000 mon.a (mon.0) 3887 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:23:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:00 vm09 bash[22983]: audit 2026-03-09T16:23:00.157779+0000 mon.a (mon.0) 3887 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:00 vm01 bash[28152]: cluster 2026-03-09T16:22:59.067349+0000 mgr.y (mgr.14520) 1087 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:00 vm01 bash[28152]: cluster 2026-03-09T16:22:59.067349+0000 mgr.y (mgr.14520) 1087 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:00 vm01 bash[28152]: audit 2026-03-09T16:22:59.890319+0000 mon.a (mon.0) 3886 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:00 vm01 bash[28152]: audit 2026-03-09T16:22:59.890319+0000 mon.a (mon.0) 3886 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:00 vm01 bash[28152]: audit 2026-03-09T16:23:00.157779+0000 mon.a (mon.0) 3887 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:00 vm01 bash[28152]: audit 2026-03-09T16:23:00.157779+0000 mon.a (mon.0) 3887 : audit [DBG] 
from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:00 vm01 bash[20728]: cluster 2026-03-09T16:22:59.067349+0000 mgr.y (mgr.14520) 1087 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:00 vm01 bash[20728]: cluster 2026-03-09T16:22:59.067349+0000 mgr.y (mgr.14520) 1087 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:00 vm01 bash[20728]: audit 2026-03-09T16:22:59.890319+0000 mon.a (mon.0) 3886 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:00 vm01 bash[20728]: audit 2026-03-09T16:22:59.890319+0000 mon.a (mon.0) 3886 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:00 vm01 bash[20728]: audit 2026-03-09T16:23:00.157779+0000 mon.a (mon.0) 3887 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:23:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:00 vm01 bash[20728]: audit 2026-03-09T16:23:00.157779+0000 mon.a (mon.0) 3887 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:23:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:01 vm09 bash[22983]: audit 2026-03-09T16:23:00.464036+0000 mon.a (mon.0) 3888 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:23:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:01 vm09 bash[22983]: audit 2026-03-09T16:23:00.464036+0000 mon.a (mon.0) 3888 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:23:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:01 vm09 bash[22983]: audit 2026-03-09T16:23:00.464554+0000 mon.a (mon.0) 3889 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:23:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:01 vm09 bash[22983]: audit 2026-03-09T16:23:00.464554+0000 mon.a (mon.0) 3889 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:23:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:01 vm09 bash[22983]: audit 2026-03-09T16:23:00.469017+0000 mon.a (mon.0) 3890 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:23:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:01 vm09 bash[22983]: audit 2026-03-09T16:23:00.469017+0000 mon.a (mon.0) 3890 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
16:23:01 vm01 bash[28152]: audit 2026-03-09T16:23:00.464036+0000 mon.a (mon.0) 3888 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:01 vm01 bash[28152]: audit 2026-03-09T16:23:00.464036+0000 mon.a (mon.0) 3888 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:01 vm01 bash[28152]: audit 2026-03-09T16:23:00.464554+0000 mon.a (mon.0) 3889 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:01 vm01 bash[28152]: audit 2026-03-09T16:23:00.464554+0000 mon.a (mon.0) 3889 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:01 vm01 bash[28152]: audit 2026-03-09T16:23:00.469017+0000 mon.a (mon.0) 3890 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:01 vm01 bash[28152]: audit 2026-03-09T16:23:00.469017+0000 mon.a (mon.0) 3890 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:01 vm01 bash[20728]: audit 2026-03-09T16:23:00.464036+0000 mon.a (mon.0) 3888 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:01 vm01 bash[20728]: audit 2026-03-09T16:23:00.464036+0000 mon.a (mon.0) 3888 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:01 vm01 bash[20728]: audit 2026-03-09T16:23:00.464554+0000 mon.a (mon.0) 3889 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:01 vm01 bash[20728]: audit 2026-03-09T16:23:00.464554+0000 mon.a (mon.0) 3889 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:01 vm01 bash[20728]: audit 2026-03-09T16:23:00.469017+0000 mon.a (mon.0) 3890 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:23:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:01 vm01 bash[20728]: audit 2026-03-09T16:23:00.469017+0000 mon.a (mon.0) 3890 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:23:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:02 vm09 bash[22983]: cluster 2026-03-09T16:23:01.068164+0000 mgr.y (mgr.14520) 1088 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:02 vm09 
bash[22983]: cluster 2026-03-09T16:23:01.068164+0000 mgr.y (mgr.14520) 1088 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:02.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:02 vm01 bash[28152]: cluster 2026-03-09T16:23:01.068164+0000 mgr.y (mgr.14520) 1088 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:02.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:02 vm01 bash[28152]: cluster 2026-03-09T16:23:01.068164+0000 mgr.y (mgr.14520) 1088 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:02.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:02 vm01 bash[20728]: cluster 2026-03-09T16:23:01.068164+0000 mgr.y (mgr.14520) 1088 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:02.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:02 vm01 bash[20728]: cluster 2026-03-09T16:23:01.068164+0000 mgr.y (mgr.14520) 1088 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:03.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:23:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:23:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:23:03.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:03 vm09 bash[22983]: cluster 2026-03-09T16:23:03.068460+0000 mgr.y (mgr.14520) 1089 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:03.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:03 vm09 bash[22983]: cluster 2026-03-09T16:23:03.068460+0000 mgr.y (mgr.14520) 1089 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:03.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:03 vm01 bash[28152]: cluster 2026-03-09T16:23:03.068460+0000 mgr.y (mgr.14520) 1089 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:03.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:03 vm01 bash[28152]: cluster 2026-03-09T16:23:03.068460+0000 mgr.y (mgr.14520) 1089 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:03.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:03 vm01 bash[20728]: cluster 2026-03-09T16:23:03.068460+0000 mgr.y (mgr.14520) 1089 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:03.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:03 vm01 bash[20728]: cluster 2026-03-09T16:23:03.068460+0000 mgr.y (mgr.14520) 1089 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:06 vm09 bash[22983]: cluster 2026-03-09T16:23:05.069040+0000 mgr.y (mgr.14520) 1090 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:06.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:06 vm09 bash[22983]: cluster 2026-03-09T16:23:05.069040+0000 mgr.y (mgr.14520) 1090 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:06.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:06 vm01 bash[28152]: cluster 2026-03-09T16:23:05.069040+0000 mgr.y (mgr.14520) 1090 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:06.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:06 vm01 bash[28152]: cluster 2026-03-09T16:23:05.069040+0000 mgr.y (mgr.14520) 1090 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:06.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:06 vm01 bash[20728]: cluster 2026-03-09T16:23:05.069040+0000 mgr.y (mgr.14520) 1090 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:06.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:06 vm01 bash[20728]: cluster 2026-03-09T16:23:05.069040+0000 mgr.y (mgr.14520) 1090 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:07.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:23:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:23:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:08 vm09 bash[22983]: cluster 2026-03-09T16:23:07.069363+0000 mgr.y (mgr.14520) 1091 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:08.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:08 vm09 bash[22983]: cluster 2026-03-09T16:23:07.069363+0000 mgr.y (mgr.14520) 1091 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:08.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:08 vm01 bash[28152]: cluster 2026-03-09T16:23:07.069363+0000 mgr.y (mgr.14520) 1091 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:08.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:08 vm01 bash[28152]: cluster 2026-03-09T16:23:07.069363+0000 mgr.y (mgr.14520) 1091 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:08.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:08 vm01 bash[20728]: cluster 2026-03-09T16:23:07.069363+0000 mgr.y (mgr.14520) 1091 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:08.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:08 vm01 bash[20728]: cluster 2026-03-09T16:23:07.069363+0000 mgr.y (mgr.14520) 1091 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:09.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:09 vm01 bash[28152]: audit 2026-03-09T16:23:07.510768+0000 mgr.y (mgr.14520) 1092 : audit [DBG] from='client.14496 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:09.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:09 vm01 bash[28152]: audit 2026-03-09T16:23:07.510768+0000 mgr.y (mgr.14520) 1092 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:09.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:09 vm01 bash[20728]: audit 2026-03-09T16:23:07.510768+0000 mgr.y (mgr.14520) 1092 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:09.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:09 vm01 bash[20728]: audit 2026-03-09T16:23:07.510768+0000 mgr.y (mgr.14520) 1092 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:09 vm09 bash[22983]: audit 2026-03-09T16:23:07.510768+0000 mgr.y (mgr.14520) 1092 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:09 vm09 bash[22983]: audit 2026-03-09T16:23:07.510768+0000 mgr.y (mgr.14520) 1092 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:10.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:10 vm01 bash[28152]: cluster 2026-03-09T16:23:09.069899+0000 mgr.y (mgr.14520) 1093 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:10.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:10 vm01 bash[28152]: cluster 2026-03-09T16:23:09.069899+0000 mgr.y (mgr.14520) 1093 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:10.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:10 vm01 bash[20728]: cluster 2026-03-09T16:23:09.069899+0000 mgr.y (mgr.14520) 1093 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:10.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:10 vm01 bash[20728]: cluster 2026-03-09T16:23:09.069899+0000 mgr.y (mgr.14520) 1093 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:10 vm09 bash[22983]: cluster 2026-03-09T16:23:09.069899+0000 mgr.y (mgr.14520) 1093 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:10 vm09 bash[22983]: cluster 2026-03-09T16:23:09.069899+0000 mgr.y (mgr.14520) 1093 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:12.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:12 vm01 bash[28152]: cluster 2026-03-09T16:23:11.070370+0000 mgr.y (mgr.14520) 1094 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T16:23:12.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:12 vm01 bash[28152]: cluster 2026-03-09T16:23:11.070370+0000 mgr.y (mgr.14520) 1094 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:12.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:12 vm01 bash[20728]: cluster 2026-03-09T16:23:11.070370+0000 mgr.y (mgr.14520) 1094 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:12.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:12 vm01 bash[20728]: cluster 2026-03-09T16:23:11.070370+0000 mgr.y (mgr.14520) 1094 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:12 vm09 bash[22983]: cluster 2026-03-09T16:23:11.070370+0000 mgr.y (mgr.14520) 1094 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:12 vm09 bash[22983]: cluster 2026-03-09T16:23:11.070370+0000 mgr.y (mgr.14520) 1094 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:13.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:23:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:23:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:23:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:14 vm01 bash[28152]: cluster 2026-03-09T16:23:13.070736+0000 mgr.y (mgr.14520) 1095 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:14 vm01 bash[28152]: cluster 2026-03-09T16:23:13.070736+0000 mgr.y (mgr.14520) 1095 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:14 vm01 bash[20728]: cluster 2026-03-09T16:23:13.070736+0000 mgr.y (mgr.14520) 1095 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:14 vm01 bash[20728]: cluster 2026-03-09T16:23:13.070736+0000 mgr.y (mgr.14520) 1095 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:14 vm09 bash[22983]: cluster 2026-03-09T16:23:13.070736+0000 mgr.y (mgr.14520) 1095 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:14 vm09 bash[22983]: cluster 2026-03-09T16:23:13.070736+0000 mgr.y (mgr.14520) 1095 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:15 vm01 bash[28152]: audit 2026-03-09T16:23:14.896034+0000 mon.a (mon.0) 3891 : audit [DBG] 
from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:15 vm01 bash[28152]: audit 2026-03-09T16:23:14.896034+0000 mon.a (mon.0) 3891 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:15.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:15 vm01 bash[20728]: audit 2026-03-09T16:23:14.896034+0000 mon.a (mon.0) 3891 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:15.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:15 vm01 bash[20728]: audit 2026-03-09T16:23:14.896034+0000 mon.a (mon.0) 3891 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:15 vm09 bash[22983]: audit 2026-03-09T16:23:14.896034+0000 mon.a (mon.0) 3891 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:15 vm09 bash[22983]: audit 2026-03-09T16:23:14.896034+0000 mon.a (mon.0) 3891 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:16 vm01 bash[28152]: cluster 2026-03-09T16:23:15.071419+0000 mgr.y (mgr.14520) 1096 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:16 vm01 bash[28152]: cluster 2026-03-09T16:23:15.071419+0000 mgr.y (mgr.14520) 1096 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:16 vm01 bash[20728]: cluster 2026-03-09T16:23:15.071419+0000 mgr.y (mgr.14520) 1096 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:16 vm01 bash[20728]: cluster 2026-03-09T16:23:15.071419+0000 mgr.y (mgr.14520) 1096 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:16 vm09 bash[22983]: cluster 2026-03-09T16:23:15.071419+0000 mgr.y (mgr.14520) 1096 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:16 vm09 bash[22983]: cluster 2026-03-09T16:23:15.071419+0000 mgr.y (mgr.14520) 1096 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:17.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:23:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:23:18.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 
09 16:23:18 vm01 bash[28152]: cluster 2026-03-09T16:23:17.071710+0000 mgr.y (mgr.14520) 1097 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:18.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:18 vm01 bash[28152]: cluster 2026-03-09T16:23:17.071710+0000 mgr.y (mgr.14520) 1097 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:18.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:18 vm01 bash[20728]: cluster 2026-03-09T16:23:17.071710+0000 mgr.y (mgr.14520) 1097 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:18.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:18 vm01 bash[20728]: cluster 2026-03-09T16:23:17.071710+0000 mgr.y (mgr.14520) 1097 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:18 vm09 bash[22983]: cluster 2026-03-09T16:23:17.071710+0000 mgr.y (mgr.14520) 1097 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:18 vm09 bash[22983]: cluster 2026-03-09T16:23:17.071710+0000 mgr.y (mgr.14520) 1097 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:19 vm09 bash[22983]: audit 2026-03-09T16:23:17.518450+0000 mgr.y (mgr.14520) 1098 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:19 vm09 bash[22983]: audit 2026-03-09T16:23:17.518450+0000 mgr.y (mgr.14520) 1098 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:19.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:19 vm01 bash[28152]: audit 2026-03-09T16:23:17.518450+0000 mgr.y (mgr.14520) 1098 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:19 vm01 bash[28152]: audit 2026-03-09T16:23:17.518450+0000 mgr.y (mgr.14520) 1098 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:19 vm01 bash[20728]: audit 2026-03-09T16:23:17.518450+0000 mgr.y (mgr.14520) 1098 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:19 vm01 bash[20728]: audit 2026-03-09T16:23:17.518450+0000 mgr.y (mgr.14520) 1098 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:20 vm09 bash[22983]: cluster 2026-03-09T16:23:19.072340+0000 mgr.y (mgr.14520) 1099 : cluster 
[DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:20 vm09 bash[22983]: cluster 2026-03-09T16:23:19.072340+0000 mgr.y (mgr.14520) 1099 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:20.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:20 vm01 bash[28152]: cluster 2026-03-09T16:23:19.072340+0000 mgr.y (mgr.14520) 1099 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:20.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:20 vm01 bash[28152]: cluster 2026-03-09T16:23:19.072340+0000 mgr.y (mgr.14520) 1099 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:20.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:20 vm01 bash[20728]: cluster 2026-03-09T16:23:19.072340+0000 mgr.y (mgr.14520) 1099 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:20.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:20 vm01 bash[20728]: cluster 2026-03-09T16:23:19.072340+0000 mgr.y (mgr.14520) 1099 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:21.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:21 vm01 bash[28152]: cluster 2026-03-09T16:23:21.072976+0000 mgr.y (mgr.14520) 1100 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:21.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:21 vm01 bash[28152]: cluster 2026-03-09T16:23:21.072976+0000 mgr.y (mgr.14520) 1100 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:21.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:21 vm01 bash[20728]: cluster 2026-03-09T16:23:21.072976+0000 mgr.y (mgr.14520) 1100 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:21.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:21 vm01 bash[20728]: cluster 2026-03-09T16:23:21.072976+0000 mgr.y (mgr.14520) 1100 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:21.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:21 vm09 bash[22983]: cluster 2026-03-09T16:23:21.072976+0000 mgr.y (mgr.14520) 1100 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:21.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:21 vm09 bash[22983]: cluster 2026-03-09T16:23:21.072976+0000 mgr.y (mgr.14520) 1100 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:23.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:23:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:23:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:23:24.382 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:24 vm09 bash[22983]: cluster 2026-03-09T16:23:23.073331+0000 mgr.y (mgr.14520) 1101 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:24.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:24 vm09 bash[22983]: cluster 2026-03-09T16:23:23.073331+0000 mgr.y (mgr.14520) 1101 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:24.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:24 vm01 bash[28152]: cluster 2026-03-09T16:23:23.073331+0000 mgr.y (mgr.14520) 1101 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:24.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:24 vm01 bash[28152]: cluster 2026-03-09T16:23:23.073331+0000 mgr.y (mgr.14520) 1101 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:24.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:24 vm01 bash[20728]: cluster 2026-03-09T16:23:23.073331+0000 mgr.y (mgr.14520) 1101 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:24.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:24 vm01 bash[20728]: cluster 2026-03-09T16:23:23.073331+0000 mgr.y (mgr.14520) 1101 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:26 vm09 bash[22983]: cluster 2026-03-09T16:23:25.074172+0000 mgr.y (mgr.14520) 1102 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:26.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:26 vm09 bash[22983]: cluster 2026-03-09T16:23:25.074172+0000 mgr.y (mgr.14520) 1102 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:26.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:26 vm01 bash[28152]: cluster 2026-03-09T16:23:25.074172+0000 mgr.y (mgr.14520) 1102 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:26.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:26 vm01 bash[28152]: cluster 2026-03-09T16:23:25.074172+0000 mgr.y (mgr.14520) 1102 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:26.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:26 vm01 bash[20728]: cluster 2026-03-09T16:23:25.074172+0000 mgr.y (mgr.14520) 1102 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:26.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:26 vm01 bash[20728]: cluster 2026-03-09T16:23:25.074172+0000 mgr.y (mgr.14520) 1102 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:27.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:23:27 vm09 bash[48403]: debug 
there is no tcmu-runner data available 2026-03-09T16:23:28.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:28 vm01 bash[28152]: cluster 2026-03-09T16:23:27.074567+0000 mgr.y (mgr.14520) 1103 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:28.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:28 vm01 bash[28152]: cluster 2026-03-09T16:23:27.074567+0000 mgr.y (mgr.14520) 1103 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:28.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:28 vm01 bash[20728]: cluster 2026-03-09T16:23:27.074567+0000 mgr.y (mgr.14520) 1103 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:28.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:28 vm01 bash[20728]: cluster 2026-03-09T16:23:27.074567+0000 mgr.y (mgr.14520) 1103 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:28 vm09 bash[22983]: cluster 2026-03-09T16:23:27.074567+0000 mgr.y (mgr.14520) 1103 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:28 vm09 bash[22983]: cluster 2026-03-09T16:23:27.074567+0000 mgr.y (mgr.14520) 1103 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:29.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:29 vm01 bash[28152]: audit 2026-03-09T16:23:27.529138+0000 mgr.y (mgr.14520) 1104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:29.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:29 vm01 bash[28152]: audit 2026-03-09T16:23:27.529138+0000 mgr.y (mgr.14520) 1104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:29.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:29 vm01 bash[20728]: audit 2026-03-09T16:23:27.529138+0000 mgr.y (mgr.14520) 1104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:29.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:29 vm01 bash[20728]: audit 2026-03-09T16:23:27.529138+0000 mgr.y (mgr.14520) 1104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:29 vm09 bash[22983]: audit 2026-03-09T16:23:27.529138+0000 mgr.y (mgr.14520) 1104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:29 vm09 bash[22983]: audit 2026-03-09T16:23:27.529138+0000 mgr.y (mgr.14520) 1104 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:30.423 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:30 vm01 bash[28152]: cluster 2026-03-09T16:23:29.075066+0000 mgr.y (mgr.14520) 1105 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:30.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:30 vm01 bash[28152]: cluster 2026-03-09T16:23:29.075066+0000 mgr.y (mgr.14520) 1105 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:30.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:30 vm01 bash[28152]: audit 2026-03-09T16:23:29.901640+0000 mon.a (mon.0) 3892 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:30.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:30 vm01 bash[28152]: audit 2026-03-09T16:23:29.901640+0000 mon.a (mon.0) 3892 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:30.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:30 vm01 bash[20728]: cluster 2026-03-09T16:23:29.075066+0000 mgr.y (mgr.14520) 1105 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:30.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:30 vm01 bash[20728]: cluster 2026-03-09T16:23:29.075066+0000 mgr.y (mgr.14520) 1105 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:30.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:30 vm01 bash[20728]: audit 2026-03-09T16:23:29.901640+0000 mon.a (mon.0) 3892 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:30.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:30 vm01 bash[20728]: audit 2026-03-09T16:23:29.901640+0000 mon.a (mon.0) 3892 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:30 vm09 bash[22983]: cluster 2026-03-09T16:23:29.075066+0000 mgr.y (mgr.14520) 1105 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:30 vm09 bash[22983]: cluster 2026-03-09T16:23:29.075066+0000 mgr.y (mgr.14520) 1105 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:30 vm09 bash[22983]: audit 2026-03-09T16:23:29.901640+0000 mon.a (mon.0) 3892 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:30 vm09 bash[22983]: audit 2026-03-09T16:23:29.901640+0000 mon.a (mon.0) 3892 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:32.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:32 vm01 
bash[28152]: cluster 2026-03-09T16:23:31.075655+0000 mgr.y (mgr.14520) 1106 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:32.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:32 vm01 bash[28152]: cluster 2026-03-09T16:23:31.075655+0000 mgr.y (mgr.14520) 1106 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:32.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:32 vm01 bash[20728]: cluster 2026-03-09T16:23:31.075655+0000 mgr.y (mgr.14520) 1106 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:32.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:32 vm01 bash[20728]: cluster 2026-03-09T16:23:31.075655+0000 mgr.y (mgr.14520) 1106 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:32 vm09 bash[22983]: cluster 2026-03-09T16:23:31.075655+0000 mgr.y (mgr.14520) 1106 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:32 vm09 bash[22983]: cluster 2026-03-09T16:23:31.075655+0000 mgr.y (mgr.14520) 1106 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:33.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:23:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:23:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:23:34.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:34 vm01 bash[20728]: cluster 2026-03-09T16:23:33.075986+0000 mgr.y (mgr.14520) 1107 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:34.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:34 vm01 bash[20728]: cluster 2026-03-09T16:23:33.075986+0000 mgr.y (mgr.14520) 1107 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:34.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:34 vm01 bash[28152]: cluster 2026-03-09T16:23:33.075986+0000 mgr.y (mgr.14520) 1107 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:34.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:34 vm01 bash[28152]: cluster 2026-03-09T16:23:33.075986+0000 mgr.y (mgr.14520) 1107 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:34 vm09 bash[22983]: cluster 2026-03-09T16:23:33.075986+0000 mgr.y (mgr.14520) 1107 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:34 vm09 bash[22983]: cluster 2026-03-09T16:23:33.075986+0000 mgr.y (mgr.14520) 1107 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:36 vm01 bash[20728]: cluster 2026-03-09T16:23:35.077632+0000 mgr.y (mgr.14520) 1108 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:36.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:36 vm01 bash[20728]: cluster 2026-03-09T16:23:35.077632+0000 mgr.y (mgr.14520) 1108 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:36.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:36 vm01 bash[28152]: cluster 2026-03-09T16:23:35.077632+0000 mgr.y (mgr.14520) 1108 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:36.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:36 vm01 bash[28152]: cluster 2026-03-09T16:23:35.077632+0000 mgr.y (mgr.14520) 1108 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:36 vm09 bash[22983]: cluster 2026-03-09T16:23:35.077632+0000 mgr.y (mgr.14520) 1108 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:36 vm09 bash[22983]: cluster 2026-03-09T16:23:35.077632+0000 mgr.y (mgr.14520) 1108 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:37.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:23:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:23:38.422 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:38 vm01 bash[20728]: cluster 2026-03-09T16:23:37.078019+0000 mgr.y (mgr.14520) 1109 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:38.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:38 vm01 bash[20728]: cluster 2026-03-09T16:23:37.078019+0000 mgr.y (mgr.14520) 1109 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:38.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:38 vm01 bash[28152]: cluster 2026-03-09T16:23:37.078019+0000 mgr.y (mgr.14520) 1109 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:38.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:38 vm01 bash[28152]: cluster 2026-03-09T16:23:37.078019+0000 mgr.y (mgr.14520) 1109 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:38 vm09 bash[22983]: cluster 2026-03-09T16:23:37.078019+0000 mgr.y (mgr.14520) 1109 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:38 vm09 bash[22983]: cluster 2026-03-09T16:23:37.078019+0000 mgr.y (mgr.14520) 1109 : cluster [DBG] pgmap v1543: 
228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:39 vm09 bash[22983]: audit 2026-03-09T16:23:37.539179+0000 mgr.y (mgr.14520) 1110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:39 vm09 bash[22983]: audit 2026-03-09T16:23:37.539179+0000 mgr.y (mgr.14520) 1110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:39 vm01 bash[20728]: audit 2026-03-09T16:23:37.539179+0000 mgr.y (mgr.14520) 1110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:39 vm01 bash[20728]: audit 2026-03-09T16:23:37.539179+0000 mgr.y (mgr.14520) 1110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:39.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:39 vm01 bash[28152]: audit 2026-03-09T16:23:37.539179+0000 mgr.y (mgr.14520) 1110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:39.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:39 vm01 bash[28152]: audit 2026-03-09T16:23:37.539179+0000 mgr.y (mgr.14520) 1110 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:40 vm09 bash[22983]: cluster 2026-03-09T16:23:39.078577+0000 mgr.y (mgr.14520) 1111 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:40 vm09 bash[22983]: cluster 2026-03-09T16:23:39.078577+0000 mgr.y (mgr.14520) 1111 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:40.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:40 vm01 bash[20728]: cluster 2026-03-09T16:23:39.078577+0000 mgr.y (mgr.14520) 1111 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:40.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:40 vm01 bash[20728]: cluster 2026-03-09T16:23:39.078577+0000 mgr.y (mgr.14520) 1111 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:40 vm01 bash[28152]: cluster 2026-03-09T16:23:39.078577+0000 mgr.y (mgr.14520) 1111 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:40 vm01 bash[28152]: cluster 2026-03-09T16:23:39.078577+0000 mgr.y (mgr.14520) 1111 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T16:23:42.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:42 vm01 bash[28152]: cluster 2026-03-09T16:23:41.079550+0000 mgr.y (mgr.14520) 1112 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:42.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:42 vm01 bash[28152]: cluster 2026-03-09T16:23:41.079550+0000 mgr.y (mgr.14520) 1112 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:42.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:42 vm01 bash[20728]: cluster 2026-03-09T16:23:41.079550+0000 mgr.y (mgr.14520) 1112 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:42.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:42 vm01 bash[20728]: cluster 2026-03-09T16:23:41.079550+0000 mgr.y (mgr.14520) 1112 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:42.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:42 vm09 bash[22983]: cluster 2026-03-09T16:23:41.079550+0000 mgr.y (mgr.14520) 1112 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:42.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:42 vm09 bash[22983]: cluster 2026-03-09T16:23:41.079550+0000 mgr.y (mgr.14520) 1112 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:43.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:23:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:23:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:23:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:43 vm09 bash[22983]: cluster 2026-03-09T16:23:43.079843+0000 mgr.y (mgr.14520) 1113 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:43 vm09 bash[22983]: cluster 2026-03-09T16:23:43.079843+0000 mgr.y (mgr.14520) 1113 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:43.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:43 vm01 bash[28152]: cluster 2026-03-09T16:23:43.079843+0000 mgr.y (mgr.14520) 1113 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:43.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:43 vm01 bash[28152]: cluster 2026-03-09T16:23:43.079843+0000 mgr.y (mgr.14520) 1113 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:43.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:43 vm01 bash[20728]: cluster 2026-03-09T16:23:43.079843+0000 mgr.y (mgr.14520) 1113 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:43.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:43 vm01 bash[20728]: cluster 2026-03-09T16:23:43.079843+0000 mgr.y (mgr.14520) 1113 : cluster 
[DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:44 vm09 bash[22983]: audit 2026-03-09T16:23:44.907337+0000 mon.a (mon.0) 3893 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:45.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:44 vm09 bash[22983]: audit 2026-03-09T16:23:44.907337+0000 mon.a (mon.0) 3893 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:45.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:44 vm01 bash[28152]: audit 2026-03-09T16:23:44.907337+0000 mon.a (mon.0) 3893 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:45.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:44 vm01 bash[28152]: audit 2026-03-09T16:23:44.907337+0000 mon.a (mon.0) 3893 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:45.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:44 vm01 bash[20728]: audit 2026-03-09T16:23:44.907337+0000 mon.a (mon.0) 3893 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:45.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:44 vm01 bash[20728]: audit 2026-03-09T16:23:44.907337+0000 mon.a (mon.0) 3893 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:23:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:45 vm09 bash[22983]: cluster 2026-03-09T16:23:45.082371+0000 mgr.y (mgr.14520) 1114 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:46.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:45 vm09 bash[22983]: cluster 2026-03-09T16:23:45.082371+0000 mgr.y (mgr.14520) 1114 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:46.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:45 vm01 bash[28152]: cluster 2026-03-09T16:23:45.082371+0000 mgr.y (mgr.14520) 1114 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:46.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:45 vm01 bash[28152]: cluster 2026-03-09T16:23:45.082371+0000 mgr.y (mgr.14520) 1114 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:46.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:45 vm01 bash[20728]: cluster 2026-03-09T16:23:45.082371+0000 mgr.y (mgr.14520) 1114 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:46.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:45 vm01 bash[20728]: cluster 2026-03-09T16:23:45.082371+0000 mgr.y (mgr.14520) 1114 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 
KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:47.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:23:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:23:48.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:48 vm01 bash[28152]: cluster 2026-03-09T16:23:47.082685+0000 mgr.y (mgr.14520) 1115 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:48.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:48 vm01 bash[28152]: cluster 2026-03-09T16:23:47.082685+0000 mgr.y (mgr.14520) 1115 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:48.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:48 vm01 bash[20728]: cluster 2026-03-09T16:23:47.082685+0000 mgr.y (mgr.14520) 1115 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:48.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:48 vm01 bash[20728]: cluster 2026-03-09T16:23:47.082685+0000 mgr.y (mgr.14520) 1115 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:48 vm09 bash[22983]: cluster 2026-03-09T16:23:47.082685+0000 mgr.y (mgr.14520) 1115 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:48 vm09 bash[22983]: cluster 2026-03-09T16:23:47.082685+0000 mgr.y (mgr.14520) 1115 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:49.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:49 vm01 bash[28152]: audit 2026-03-09T16:23:47.540134+0000 mgr.y (mgr.14520) 1116 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:49.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:49 vm01 bash[28152]: audit 2026-03-09T16:23:47.540134+0000 mgr.y (mgr.14520) 1116 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:49.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:49 vm01 bash[20728]: audit 2026-03-09T16:23:47.540134+0000 mgr.y (mgr.14520) 1116 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:49.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:49 vm01 bash[20728]: audit 2026-03-09T16:23:47.540134+0000 mgr.y (mgr.14520) 1116 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:49 vm09 bash[22983]: audit 2026-03-09T16:23:47.540134+0000 mgr.y (mgr.14520) 1116 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:49 vm09 bash[22983]: audit 2026-03-09T16:23:47.540134+0000 mgr.y (mgr.14520) 1116 : audit [DBG] 
from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:50.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:50 vm01 bash[28152]: cluster 2026-03-09T16:23:49.083154+0000 mgr.y (mgr.14520) 1117 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:50.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:50 vm01 bash[28152]: cluster 2026-03-09T16:23:49.083154+0000 mgr.y (mgr.14520) 1117 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:50.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:50 vm01 bash[20728]: cluster 2026-03-09T16:23:49.083154+0000 mgr.y (mgr.14520) 1117 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:50.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:50 vm01 bash[20728]: cluster 2026-03-09T16:23:49.083154+0000 mgr.y (mgr.14520) 1117 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:50 vm09 bash[22983]: cluster 2026-03-09T16:23:49.083154+0000 mgr.y (mgr.14520) 1117 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:50 vm09 bash[22983]: cluster 2026-03-09T16:23:49.083154+0000 mgr.y (mgr.14520) 1117 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:52.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:52 vm01 bash[28152]: cluster 2026-03-09T16:23:51.083703+0000 mgr.y (mgr.14520) 1118 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:52.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:52 vm01 bash[28152]: cluster 2026-03-09T16:23:51.083703+0000 mgr.y (mgr.14520) 1118 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:52.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:52 vm01 bash[20728]: cluster 2026-03-09T16:23:51.083703+0000 mgr.y (mgr.14520) 1118 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:52.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:52 vm01 bash[20728]: cluster 2026-03-09T16:23:51.083703+0000 mgr.y (mgr.14520) 1118 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:52 vm09 bash[22983]: cluster 2026-03-09T16:23:51.083703+0000 mgr.y (mgr.14520) 1118 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:52 vm09 bash[22983]: cluster 2026-03-09T16:23:51.083703+0000 mgr.y (mgr.14520) 1118 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:53.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:23:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:23:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:23:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:54 vm09 bash[22983]: cluster 2026-03-09T16:23:53.083997+0000 mgr.y (mgr.14520) 1119 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:54 vm09 bash[22983]: cluster 2026-03-09T16:23:53.083997+0000 mgr.y (mgr.14520) 1119 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:54.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:54 vm01 bash[28152]: cluster 2026-03-09T16:23:53.083997+0000 mgr.y (mgr.14520) 1119 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:54.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:54 vm01 bash[28152]: cluster 2026-03-09T16:23:53.083997+0000 mgr.y (mgr.14520) 1119 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:54 vm01 bash[20728]: cluster 2026-03-09T16:23:53.083997+0000 mgr.y (mgr.14520) 1119 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:54 vm01 bash[20728]: cluster 2026-03-09T16:23:53.083997+0000 mgr.y (mgr.14520) 1119 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T16:23:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:56 vm09 bash[22983]: cluster 2026-03-09T16:23:55.084654+0000 mgr.y (mgr.14520) 1120 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:56 vm09 bash[22983]: cluster 2026-03-09T16:23:55.084654+0000 mgr.y (mgr.14520) 1120 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:56.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:56 vm01 bash[28152]: cluster 2026-03-09T16:23:55.084654+0000 mgr.y (mgr.14520) 1120 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:56.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:56 vm01 bash[28152]: cluster 2026-03-09T16:23:55.084654+0000 mgr.y (mgr.14520) 1120 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:56.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:56 vm01 bash[20728]: cluster 2026-03-09T16:23:55.084654+0000 mgr.y (mgr.14520) 1120 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:56.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:56 vm01 bash[20728]: cluster 2026-03-09T16:23:55.084654+0000 mgr.y 
(mgr.14520) 1120 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:23:57.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:23:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:23:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:58 vm09 bash[22983]: cluster 2026-03-09T16:23:57.084941+0000 mgr.y (mgr.14520) 1121 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:58 vm09 bash[22983]: cluster 2026-03-09T16:23:57.084941+0000 mgr.y (mgr.14520) 1121 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:58.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:58 vm01 bash[28152]: cluster 2026-03-09T16:23:57.084941+0000 mgr.y (mgr.14520) 1121 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:58.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:58 vm01 bash[28152]: cluster 2026-03-09T16:23:57.084941+0000 mgr.y (mgr.14520) 1121 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:58 vm01 bash[20728]: cluster 2026-03-09T16:23:57.084941+0000 mgr.y (mgr.14520) 1121 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:58 vm01 bash[20728]: cluster 2026-03-09T16:23:57.084941+0000 mgr.y (mgr.14520) 1121 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:23:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:59 vm09 bash[22983]: audit 2026-03-09T16:23:57.541071+0000 mgr.y (mgr.14520) 1122 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:23:59 vm09 bash[22983]: audit 2026-03-09T16:23:57.541071+0000 mgr.y (mgr.14520) 1122 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:59.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:59 vm01 bash[28152]: audit 2026-03-09T16:23:57.541071+0000 mgr.y (mgr.14520) 1122 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:59.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:23:59 vm01 bash[28152]: audit 2026-03-09T16:23:57.541071+0000 mgr.y (mgr.14520) 1122 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:59.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:59 vm01 bash[20728]: audit 2026-03-09T16:23:57.541071+0000 mgr.y (mgr.14520) 1122 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:23:59.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:23:59 vm01 
bash[20728]: audit 2026-03-09T16:23:57.541071+0000 mgr.y (mgr.14520) 1122 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:00 vm09 bash[22983]: cluster 2026-03-09T16:23:59.085470+0000 mgr.y (mgr.14520) 1123 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:00 vm09 bash[22983]: cluster 2026-03-09T16:23:59.085470+0000 mgr.y (mgr.14520) 1123 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:00 vm09 bash[22983]: audit 2026-03-09T16:23:59.913165+0000 mon.a (mon.0) 3894 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:00 vm09 bash[22983]: audit 2026-03-09T16:23:59.913165+0000 mon.a (mon.0) 3894 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:00 vm01 bash[28152]: cluster 2026-03-09T16:23:59.085470+0000 mgr.y (mgr.14520) 1123 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:00 vm01 bash[28152]: cluster 2026-03-09T16:23:59.085470+0000 mgr.y (mgr.14520) 1123 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:00 vm01 bash[28152]: audit 2026-03-09T16:23:59.913165+0000 mon.a (mon.0) 3894 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:00 vm01 bash[28152]: audit 2026-03-09T16:23:59.913165+0000 mon.a (mon.0) 3894 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:00 vm01 bash[20728]: cluster 2026-03-09T16:23:59.085470+0000 mgr.y (mgr.14520) 1123 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:00 vm01 bash[20728]: cluster 2026-03-09T16:23:59.085470+0000 mgr.y (mgr.14520) 1123 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:00 vm01 bash[20728]: audit 2026-03-09T16:23:59.913165+0000 mon.a (mon.0) 3894 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:00 vm01 bash[20728]: audit 2026-03-09T16:23:59.913165+0000 mon.a (mon.0) 3894 
: audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:01 vm09 bash[22983]: audit 2026-03-09T16:24:00.509644+0000 mon.a (mon.0) 3895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:24:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:01 vm09 bash[22983]: audit 2026-03-09T16:24:00.509644+0000 mon.a (mon.0) 3895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:24:01.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:01 vm01 bash[28152]: audit 2026-03-09T16:24:00.509644+0000 mon.a (mon.0) 3895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:24:01.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:01 vm01 bash[28152]: audit 2026-03-09T16:24:00.509644+0000 mon.a (mon.0) 3895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:24:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:01 vm01 bash[20728]: audit 2026-03-09T16:24:00.509644+0000 mon.a (mon.0) 3895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:24:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:01 vm01 bash[20728]: audit 2026-03-09T16:24:00.509644+0000 mon.a (mon.0) 3895 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:24:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:02 vm09 bash[22983]: cluster 2026-03-09T16:24:01.085964+0000 mgr.y (mgr.14520) 1124 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:02.647 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:02 vm09 bash[22983]: cluster 2026-03-09T16:24:01.085964+0000 mgr.y (mgr.14520) 1124 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:02.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:02 vm01 bash[28152]: cluster 2026-03-09T16:24:01.085964+0000 mgr.y (mgr.14520) 1124 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:02.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:02 vm01 bash[28152]: cluster 2026-03-09T16:24:01.085964+0000 mgr.y (mgr.14520) 1124 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:02.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:02 vm01 bash[20728]: cluster 2026-03-09T16:24:01.085964+0000 mgr.y (mgr.14520) 1124 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:02.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:02 vm01 bash[20728]: cluster 2026-03-09T16:24:01.085964+0000 mgr.y (mgr.14520) 1124 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 
1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:03.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:24:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:24:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:24:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:04 vm09 bash[22983]: cluster 2026-03-09T16:24:03.086266+0000 mgr.y (mgr.14520) 1125 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:04 vm09 bash[22983]: cluster 2026-03-09T16:24:03.086266+0000 mgr.y (mgr.14520) 1125 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:04.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:04 vm01 bash[28152]: cluster 2026-03-09T16:24:03.086266+0000 mgr.y (mgr.14520) 1125 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:04.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:04 vm01 bash[28152]: cluster 2026-03-09T16:24:03.086266+0000 mgr.y (mgr.14520) 1125 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:04.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:04 vm01 bash[20728]: cluster 2026-03-09T16:24:03.086266+0000 mgr.y (mgr.14520) 1125 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:04.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:04 vm01 bash[20728]: cluster 2026-03-09T16:24:03.086266+0000 mgr.y (mgr.14520) 1125 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:05.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:05 vm09 bash[22983]: cluster 2026-03-09T16:24:05.086879+0000 mgr.y (mgr.14520) 1126 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:05.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:05 vm09 bash[22983]: cluster 2026-03-09T16:24:05.086879+0000 mgr.y (mgr.14520) 1126 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:05.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:05 vm01 bash[28152]: cluster 2026-03-09T16:24:05.086879+0000 mgr.y (mgr.14520) 1126 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:05.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:05 vm01 bash[28152]: cluster 2026-03-09T16:24:05.086879+0000 mgr.y (mgr.14520) 1126 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:05.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:05 vm01 bash[20728]: cluster 2026-03-09T16:24:05.086879+0000 mgr.y (mgr.14520) 1126 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:05.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:05 vm01 bash[20728]: cluster 
2026-03-09T16:24:05.086879+0000 mgr.y (mgr.14520) 1126 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: audit 2026-03-09T16:24:06.425450+0000 mon.a (mon.0) 3896 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: audit 2026-03-09T16:24:06.425450+0000 mon.a (mon.0) 3896 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: audit 2026-03-09T16:24:06.430070+0000 mon.a (mon.0) 3897 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: audit 2026-03-09T16:24:06.430070+0000 mon.a (mon.0) 3897 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: audit 2026-03-09T16:24:06.430895+0000 mon.a (mon.0) 3898 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: audit 2026-03-09T16:24:06.430895+0000 mon.a (mon.0) 3898 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: audit 2026-03-09T16:24:06.431308+0000 mon.a (mon.0) 3899 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: audit 2026-03-09T16:24:06.431308+0000 mon.a (mon.0) 3899 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: audit 2026-03-09T16:24:06.435089+0000 mon.a (mon.0) 3900 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: audit 2026-03-09T16:24:06.435089+0000 mon.a (mon.0) 3900 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: cluster 2026-03-09T16:24:07.087139+0000 mgr.y (mgr.14520) 1127 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:07 vm01 bash[20728]: cluster 2026-03-09T16:24:07.087139+0000 mgr.y (mgr.14520) 1127 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: audit 2026-03-09T16:24:06.425450+0000 mon.a (mon.0) 3896 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.673 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: audit 2026-03-09T16:24:06.425450+0000 mon.a (mon.0) 3896 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: audit 2026-03-09T16:24:06.430070+0000 mon.a (mon.0) 3897 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: audit 2026-03-09T16:24:06.430070+0000 mon.a (mon.0) 3897 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: audit 2026-03-09T16:24:06.430895+0000 mon.a (mon.0) 3898 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: audit 2026-03-09T16:24:06.430895+0000 mon.a (mon.0) 3898 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: audit 2026-03-09T16:24:06.431308+0000 mon.a (mon.0) 3899 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: audit 2026-03-09T16:24:06.431308+0000 mon.a (mon.0) 3899 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:24:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: audit 2026-03-09T16:24:06.435089+0000 mon.a (mon.0) 3900 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: audit 2026-03-09T16:24:06.435089+0000 mon.a (mon.0) 3900 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: cluster 2026-03-09T16:24:07.087139+0000 mgr.y (mgr.14520) 1127 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:07.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:07 vm01 bash[28152]: cluster 2026-03-09T16:24:07.087139+0000 mgr.y (mgr.14520) 1127 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:07.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:24:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 bash[22983]: audit 2026-03-09T16:24:06.425450+0000 mon.a (mon.0) 3896 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 bash[22983]: audit 2026-03-09T16:24:06.425450+0000 mon.a (mon.0) 3896 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 
bash[22983]: audit 2026-03-09T16:24:06.430070+0000 mon.a (mon.0) 3897 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 bash[22983]: audit 2026-03-09T16:24:06.430070+0000 mon.a (mon.0) 3897 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 bash[22983]: audit 2026-03-09T16:24:06.430895+0000 mon.a (mon.0) 3898 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 bash[22983]: audit 2026-03-09T16:24:06.430895+0000 mon.a (mon.0) 3898 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 bash[22983]: audit 2026-03-09T16:24:06.431308+0000 mon.a (mon.0) 3899 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 bash[22983]: audit 2026-03-09T16:24:06.431308+0000 mon.a (mon.0) 3899 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 bash[22983]: audit 2026-03-09T16:24:06.435089+0000 mon.a (mon.0) 3900 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 bash[22983]: audit 2026-03-09T16:24:06.435089+0000 mon.a (mon.0) 3900 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 bash[22983]: cluster 2026-03-09T16:24:07.087139+0000 mgr.y (mgr.14520) 1127 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:07.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:07 vm09 bash[22983]: cluster 2026-03-09T16:24:07.087139+0000 mgr.y (mgr.14520) 1127 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:08.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:08 vm09 bash[22983]: audit 2026-03-09T16:24:07.551896+0000 mgr.y (mgr.14520) 1128 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:08.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:08 vm09 bash[22983]: audit 2026-03-09T16:24:07.551896+0000 mgr.y (mgr.14520) 1128 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:08.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:08 vm01 bash[28152]: audit 2026-03-09T16:24:07.551896+0000 mgr.y (mgr.14520) 1128 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:08.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:08 vm01 bash[28152]: audit 
2026-03-09T16:24:07.551896+0000 mgr.y (mgr.14520) 1128 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:08.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:08 vm01 bash[20728]: audit 2026-03-09T16:24:07.551896+0000 mgr.y (mgr.14520) 1128 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:08.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:08 vm01 bash[20728]: audit 2026-03-09T16:24:07.551896+0000 mgr.y (mgr.14520) 1128 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:09.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:09 vm09 bash[22983]: cluster 2026-03-09T16:24:09.087587+0000 mgr.y (mgr.14520) 1129 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:09.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:09 vm09 bash[22983]: cluster 2026-03-09T16:24:09.087587+0000 mgr.y (mgr.14520) 1129 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:09.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:09 vm01 bash[28152]: cluster 2026-03-09T16:24:09.087587+0000 mgr.y (mgr.14520) 1129 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:09.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:09 vm01 bash[28152]: cluster 2026-03-09T16:24:09.087587+0000 mgr.y (mgr.14520) 1129 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:09.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:09 vm01 bash[20728]: cluster 2026-03-09T16:24:09.087587+0000 mgr.y (mgr.14520) 1129 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:09.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:09 vm01 bash[20728]: cluster 2026-03-09T16:24:09.087587+0000 mgr.y (mgr.14520) 1129 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:12.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:12 vm01 bash[28152]: cluster 2026-03-09T16:24:11.088059+0000 mgr.y (mgr.14520) 1130 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:12.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:12 vm01 bash[28152]: cluster 2026-03-09T16:24:11.088059+0000 mgr.y (mgr.14520) 1130 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:12.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:12 vm01 bash[20728]: cluster 2026-03-09T16:24:11.088059+0000 mgr.y (mgr.14520) 1130 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:12.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:12 vm01 bash[20728]: cluster 2026-03-09T16:24:11.088059+0000 mgr.y (mgr.14520) 1130 : cluster [DBG] pgmap v1560: 228 pgs: 
228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:12 vm09 bash[22983]: cluster 2026-03-09T16:24:11.088059+0000 mgr.y (mgr.14520) 1130 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:12 vm09 bash[22983]: cluster 2026-03-09T16:24:11.088059+0000 mgr.y (mgr.14520) 1130 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:13.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:24:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:24:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:24:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:14 vm01 bash[28152]: cluster 2026-03-09T16:24:13.088336+0000 mgr.y (mgr.14520) 1131 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:14 vm01 bash[28152]: cluster 2026-03-09T16:24:13.088336+0000 mgr.y (mgr.14520) 1131 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:14 vm01 bash[20728]: cluster 2026-03-09T16:24:13.088336+0000 mgr.y (mgr.14520) 1131 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:14 vm01 bash[20728]: cluster 2026-03-09T16:24:13.088336+0000 mgr.y (mgr.14520) 1131 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:14 vm09 bash[22983]: cluster 2026-03-09T16:24:13.088336+0000 mgr.y (mgr.14520) 1131 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:14 vm09 bash[22983]: cluster 2026-03-09T16:24:13.088336+0000 mgr.y (mgr.14520) 1131 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:15 vm01 bash[28152]: audit 2026-03-09T16:24:14.919254+0000 mon.a (mon.0) 3901 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:15 vm01 bash[28152]: audit 2026-03-09T16:24:14.919254+0000 mon.a (mon.0) 3901 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:15.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:15 vm01 bash[20728]: audit 2026-03-09T16:24:14.919254+0000 mon.a (mon.0) 3901 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:15.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
16:24:15 vm01 bash[20728]: audit 2026-03-09T16:24:14.919254+0000 mon.a (mon.0) 3901 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:15 vm09 bash[22983]: audit 2026-03-09T16:24:14.919254+0000 mon.a (mon.0) 3901 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:15 vm09 bash[22983]: audit 2026-03-09T16:24:14.919254+0000 mon.a (mon.0) 3901 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:16 vm01 bash[28152]: cluster 2026-03-09T16:24:15.088978+0000 mgr.y (mgr.14520) 1132 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:16 vm01 bash[28152]: cluster 2026-03-09T16:24:15.088978+0000 mgr.y (mgr.14520) 1132 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:16 vm01 bash[20728]: cluster 2026-03-09T16:24:15.088978+0000 mgr.y (mgr.14520) 1132 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:16 vm01 bash[20728]: cluster 2026-03-09T16:24:15.088978+0000 mgr.y (mgr.14520) 1132 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:16 vm09 bash[22983]: cluster 2026-03-09T16:24:15.088978+0000 mgr.y (mgr.14520) 1132 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:16 vm09 bash[22983]: cluster 2026-03-09T16:24:15.088978+0000 mgr.y (mgr.14520) 1132 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:17.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:24:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:24:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:18 vm09 bash[22983]: cluster 2026-03-09T16:24:17.089270+0000 mgr.y (mgr.14520) 1133 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:18 vm09 bash[22983]: cluster 2026-03-09T16:24:17.089270+0000 mgr.y (mgr.14520) 1133 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:18.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:18 vm01 bash[28152]: cluster 2026-03-09T16:24:17.089270+0000 mgr.y (mgr.14520) 1133 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s 
rd, 0 op/s 2026-03-09T16:24:18.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:18 vm01 bash[28152]: cluster 2026-03-09T16:24:17.089270+0000 mgr.y (mgr.14520) 1133 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:18 vm01 bash[20728]: cluster 2026-03-09T16:24:17.089270+0000 mgr.y (mgr.14520) 1133 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:18 vm01 bash[20728]: cluster 2026-03-09T16:24:17.089270+0000 mgr.y (mgr.14520) 1133 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:19 vm09 bash[22983]: audit 2026-03-09T16:24:17.554073+0000 mgr.y (mgr.14520) 1134 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:19 vm09 bash[22983]: audit 2026-03-09T16:24:17.554073+0000 mgr.y (mgr.14520) 1134 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:19 vm09 bash[22983]: cluster 2026-03-09T16:24:19.089773+0000 mgr.y (mgr.14520) 1135 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:19 vm09 bash[22983]: cluster 2026-03-09T16:24:19.089773+0000 mgr.y (mgr.14520) 1135 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:19 vm01 bash[20728]: audit 2026-03-09T16:24:17.554073+0000 mgr.y (mgr.14520) 1134 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:19 vm01 bash[20728]: audit 2026-03-09T16:24:17.554073+0000 mgr.y (mgr.14520) 1134 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:19 vm01 bash[20728]: cluster 2026-03-09T16:24:19.089773+0000 mgr.y (mgr.14520) 1135 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:19 vm01 bash[20728]: cluster 2026-03-09T16:24:19.089773+0000 mgr.y (mgr.14520) 1135 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:19 vm01 bash[28152]: audit 2026-03-09T16:24:17.554073+0000 mgr.y (mgr.14520) 1134 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:19 vm01 
bash[28152]: audit 2026-03-09T16:24:17.554073+0000 mgr.y (mgr.14520) 1134 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:19 vm01 bash[28152]: cluster 2026-03-09T16:24:19.089773+0000 mgr.y (mgr.14520) 1135 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:19 vm01 bash[28152]: cluster 2026-03-09T16:24:19.089773+0000 mgr.y (mgr.14520) 1135 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:22.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:22 vm01 bash[28152]: cluster 2026-03-09T16:24:21.090284+0000 mgr.y (mgr.14520) 1136 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:24:22.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:22 vm01 bash[28152]: cluster 2026-03-09T16:24:21.090284+0000 mgr.y (mgr.14520) 1136 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:24:22.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:22 vm01 bash[20728]: cluster 2026-03-09T16:24:21.090284+0000 mgr.y (mgr.14520) 1136 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:24:22.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:22 vm01 bash[20728]: cluster 2026-03-09T16:24:21.090284+0000 mgr.y (mgr.14520) 1136 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:24:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:22 vm09 bash[22983]: cluster 2026-03-09T16:24:21.090284+0000 mgr.y (mgr.14520) 1136 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:24:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:22 vm09 bash[22983]: cluster 2026-03-09T16:24:21.090284+0000 mgr.y (mgr.14520) 1136 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T16:24:23.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:24:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:24:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:24:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:24 vm09 bash[22983]: cluster 2026-03-09T16:24:23.090564+0000 mgr.y (mgr.14520) 1137 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T16:24:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:24 vm09 bash[22983]: cluster 2026-03-09T16:24:23.090564+0000 mgr.y (mgr.14520) 1137 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T16:24:24.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:24 vm01 bash[28152]: cluster 2026-03-09T16:24:23.090564+0000 mgr.y (mgr.14520) 1137 : cluster 
[DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T16:24:24.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:24 vm01 bash[28152]: cluster 2026-03-09T16:24:23.090564+0000 mgr.y (mgr.14520) 1137 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T16:24:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:24 vm01 bash[20728]: cluster 2026-03-09T16:24:23.090564+0000 mgr.y (mgr.14520) 1137 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T16:24:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:24 vm01 bash[20728]: cluster 2026-03-09T16:24:23.090564+0000 mgr.y (mgr.14520) 1137 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-09T16:24:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:26 vm09 bash[22983]: cluster 2026-03-09T16:24:25.091168+0000 mgr.y (mgr.14520) 1138 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:26 vm09 bash[22983]: cluster 2026-03-09T16:24:25.091168+0000 mgr.y (mgr.14520) 1138 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:26.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:26 vm01 bash[20728]: cluster 2026-03-09T16:24:25.091168+0000 mgr.y (mgr.14520) 1138 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:26 vm01 bash[20728]: cluster 2026-03-09T16:24:25.091168+0000 mgr.y (mgr.14520) 1138 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:26.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:26 vm01 bash[28152]: cluster 2026-03-09T16:24:25.091168+0000 mgr.y (mgr.14520) 1138 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:26.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:26 vm01 bash[28152]: cluster 2026-03-09T16:24:25.091168+0000 mgr.y (mgr.14520) 1138 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:27.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:24:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:24:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:28 vm09 bash[22983]: cluster 2026-03-09T16:24:27.091454+0000 mgr.y (mgr.14520) 1139 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:28 vm09 bash[22983]: cluster 2026-03-09T16:24:27.091454+0000 mgr.y (mgr.14520) 1139 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 
KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:28.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:28 vm01 bash[20728]: cluster 2026-03-09T16:24:27.091454+0000 mgr.y (mgr.14520) 1139 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:28.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:28 vm01 bash[20728]: cluster 2026-03-09T16:24:27.091454+0000 mgr.y (mgr.14520) 1139 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:28.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:28 vm01 bash[28152]: cluster 2026-03-09T16:24:27.091454+0000 mgr.y (mgr.14520) 1139 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:28.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:28 vm01 bash[28152]: cluster 2026-03-09T16:24:27.091454+0000 mgr.y (mgr.14520) 1139 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:29 vm09 bash[22983]: audit 2026-03-09T16:24:27.558284+0000 mgr.y (mgr.14520) 1140 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:29 vm09 bash[22983]: audit 2026-03-09T16:24:27.558284+0000 mgr.y (mgr.14520) 1140 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:29.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:29 vm01 bash[28152]: audit 2026-03-09T16:24:27.558284+0000 mgr.y (mgr.14520) 1140 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:29.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:29 vm01 bash[28152]: audit 2026-03-09T16:24:27.558284+0000 mgr.y (mgr.14520) 1140 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:29 vm01 bash[20728]: audit 2026-03-09T16:24:27.558284+0000 mgr.y (mgr.14520) 1140 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:29 vm01 bash[20728]: audit 2026-03-09T16:24:27.558284+0000 mgr.y (mgr.14520) 1140 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:30 vm09 bash[22983]: cluster 2026-03-09T16:24:29.091967+0000 mgr.y (mgr.14520) 1141 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:30 vm09 bash[22983]: cluster 2026-03-09T16:24:29.091967+0000 mgr.y (mgr.14520) 1141 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 
2026-03-09T16:24:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:30 vm09 bash[22983]: audit 2026-03-09T16:24:29.925658+0000 mon.a (mon.0) 3902 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:30 vm09 bash[22983]: audit 2026-03-09T16:24:29.925658+0000 mon.a (mon.0) 3902 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:30 vm01 bash[28152]: cluster 2026-03-09T16:24:29.091967+0000 mgr.y (mgr.14520) 1141 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:30 vm01 bash[28152]: cluster 2026-03-09T16:24:29.091967+0000 mgr.y (mgr.14520) 1141 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:30 vm01 bash[28152]: audit 2026-03-09T16:24:29.925658+0000 mon.a (mon.0) 3902 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:30 vm01 bash[28152]: audit 2026-03-09T16:24:29.925658+0000 mon.a (mon.0) 3902 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:30 vm01 bash[20728]: cluster 2026-03-09T16:24:29.091967+0000 mgr.y (mgr.14520) 1141 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:30 vm01 bash[20728]: cluster 2026-03-09T16:24:29.091967+0000 mgr.y (mgr.14520) 1141 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:30 vm01 bash[20728]: audit 2026-03-09T16:24:29.925658+0000 mon.a (mon.0) 3902 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:30 vm01 bash[20728]: audit 2026-03-09T16:24:29.925658+0000 mon.a (mon.0) 3902 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:32 vm09 bash[22983]: cluster 2026-03-09T16:24:31.092624+0000 mgr.y (mgr.14520) 1142 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:32 vm09 bash[22983]: cluster 2026-03-09T16:24:31.092624+0000 mgr.y (mgr.14520) 1142 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s 
wr, 60 op/s 2026-03-09T16:24:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:32 vm01 bash[28152]: cluster 2026-03-09T16:24:31.092624+0000 mgr.y (mgr.14520) 1142 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:32 vm01 bash[28152]: cluster 2026-03-09T16:24:31.092624+0000 mgr.y (mgr.14520) 1142 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:32.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:32 vm01 bash[20728]: cluster 2026-03-09T16:24:31.092624+0000 mgr.y (mgr.14520) 1142 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:32.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:32 vm01 bash[20728]: cluster 2026-03-09T16:24:31.092624+0000 mgr.y (mgr.14520) 1142 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T16:24:33.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:24:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:24:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:24:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:34 vm09 bash[22983]: cluster 2026-03-09T16:24:33.092922+0000 mgr.y (mgr.14520) 1143 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:34 vm09 bash[22983]: cluster 2026-03-09T16:24:33.092922+0000 mgr.y (mgr.14520) 1143 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:34.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:34 vm01 bash[28152]: cluster 2026-03-09T16:24:33.092922+0000 mgr.y (mgr.14520) 1143 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:34.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:34 vm01 bash[28152]: cluster 2026-03-09T16:24:33.092922+0000 mgr.y (mgr.14520) 1143 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:34.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:34 vm01 bash[20728]: cluster 2026-03-09T16:24:33.092922+0000 mgr.y (mgr.14520) 1143 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:34.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:34 vm01 bash[20728]: cluster 2026-03-09T16:24:33.092922+0000 mgr.y (mgr.14520) 1143 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:36 vm09 bash[22983]: cluster 2026-03-09T16:24:35.093566+0000 mgr.y (mgr.14520) 1144 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:36.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:36 vm09 bash[22983]: cluster 2026-03-09T16:24:35.093566+0000 mgr.y (mgr.14520) 1144 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:36.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:36 vm01 bash[28152]: cluster 2026-03-09T16:24:35.093566+0000 mgr.y (mgr.14520) 1144 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:36.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:36 vm01 bash[28152]: cluster 2026-03-09T16:24:35.093566+0000 mgr.y (mgr.14520) 1144 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:36.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:36 vm01 bash[20728]: cluster 2026-03-09T16:24:35.093566+0000 mgr.y (mgr.14520) 1144 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:36.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:36 vm01 bash[20728]: cluster 2026-03-09T16:24:35.093566+0000 mgr.y (mgr.14520) 1144 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s 2026-03-09T16:24:37.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:24:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:24:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:38 vm09 bash[22983]: cluster 2026-03-09T16:24:37.093831+0000 mgr.y (mgr.14520) 1145 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:38 vm09 bash[22983]: cluster 2026-03-09T16:24:37.093831+0000 mgr.y (mgr.14520) 1145 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:38.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:38 vm01 bash[28152]: cluster 2026-03-09T16:24:37.093831+0000 mgr.y (mgr.14520) 1145 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:38.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:38 vm01 bash[28152]: cluster 2026-03-09T16:24:37.093831+0000 mgr.y (mgr.14520) 1145 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:38 vm01 bash[20728]: cluster 2026-03-09T16:24:37.093831+0000 mgr.y (mgr.14520) 1145 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:38 vm01 bash[20728]: cluster 2026-03-09T16:24:37.093831+0000 mgr.y (mgr.14520) 1145 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:39 vm09 bash[22983]: audit 2026-03-09T16:24:37.569210+0000 mgr.y (mgr.14520) 1146 : audit [DBG] from='client.14496 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:39 vm09 bash[22983]: audit 2026-03-09T16:24:37.569210+0000 mgr.y (mgr.14520) 1146 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:39 vm09 bash[22983]: cluster 2026-03-09T16:24:39.094354+0000 mgr.y (mgr.14520) 1147 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:39 vm09 bash[22983]: cluster 2026-03-09T16:24:39.094354+0000 mgr.y (mgr.14520) 1147 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:39.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:39 vm01 bash[28152]: audit 2026-03-09T16:24:37.569210+0000 mgr.y (mgr.14520) 1146 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:39.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:39 vm01 bash[28152]: audit 2026-03-09T16:24:37.569210+0000 mgr.y (mgr.14520) 1146 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:39.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:39 vm01 bash[28152]: cluster 2026-03-09T16:24:39.094354+0000 mgr.y (mgr.14520) 1147 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:39.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:39 vm01 bash[28152]: cluster 2026-03-09T16:24:39.094354+0000 mgr.y (mgr.14520) 1147 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:39 vm01 bash[20728]: audit 2026-03-09T16:24:37.569210+0000 mgr.y (mgr.14520) 1146 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:39 vm01 bash[20728]: audit 2026-03-09T16:24:37.569210+0000 mgr.y (mgr.14520) 1146 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:39 vm01 bash[20728]: cluster 2026-03-09T16:24:39.094354+0000 mgr.y (mgr.14520) 1147 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:39 vm01 bash[20728]: cluster 2026-03-09T16:24:39.094354+0000 mgr.y (mgr.14520) 1147 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:42.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:42 vm01 bash[28152]: cluster 2026-03-09T16:24:41.095124+0000 mgr.y (mgr.14520) 1148 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T16:24:42.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:42 vm01 bash[28152]: cluster 2026-03-09T16:24:41.095124+0000 mgr.y (mgr.14520) 1148 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:42.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:42 vm01 bash[20728]: cluster 2026-03-09T16:24:41.095124+0000 mgr.y (mgr.14520) 1148 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:42.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:42 vm01 bash[20728]: cluster 2026-03-09T16:24:41.095124+0000 mgr.y (mgr.14520) 1148 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:42 vm09 bash[22983]: cluster 2026-03-09T16:24:41.095124+0000 mgr.y (mgr.14520) 1148 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:42 vm09 bash[22983]: cluster 2026-03-09T16:24:41.095124+0000 mgr.y (mgr.14520) 1148 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:43.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:24:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:24:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:24:44.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:44 vm01 bash[28152]: cluster 2026-03-09T16:24:43.095403+0000 mgr.y (mgr.14520) 1149 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:44.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:44 vm01 bash[28152]: cluster 2026-03-09T16:24:43.095403+0000 mgr.y (mgr.14520) 1149 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:44 vm01 bash[20728]: cluster 2026-03-09T16:24:43.095403+0000 mgr.y (mgr.14520) 1149 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:44 vm01 bash[20728]: cluster 2026-03-09T16:24:43.095403+0000 mgr.y (mgr.14520) 1149 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:44 vm09 bash[22983]: cluster 2026-03-09T16:24:43.095403+0000 mgr.y (mgr.14520) 1149 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:44 vm09 bash[22983]: cluster 2026-03-09T16:24:43.095403+0000 mgr.y (mgr.14520) 1149 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:45.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:45 vm01 bash[28152]: audit 2026-03-09T16:24:44.931707+0000 mon.a (mon.0) 3903 : audit [DBG] 
from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:45.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:45 vm01 bash[28152]: audit 2026-03-09T16:24:44.931707+0000 mon.a (mon.0) 3903 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:45.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:45 vm01 bash[20728]: audit 2026-03-09T16:24:44.931707+0000 mon.a (mon.0) 3903 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:45.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:45 vm01 bash[20728]: audit 2026-03-09T16:24:44.931707+0000 mon.a (mon.0) 3903 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:45 vm09 bash[22983]: audit 2026-03-09T16:24:44.931707+0000 mon.a (mon.0) 3903 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:45 vm09 bash[22983]: audit 2026-03-09T16:24:44.931707+0000 mon.a (mon.0) 3903 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:24:46.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:46 vm01 bash[28152]: cluster 2026-03-09T16:24:45.096057+0000 mgr.y (mgr.14520) 1150 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:46.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:46 vm01 bash[28152]: cluster 2026-03-09T16:24:45.096057+0000 mgr.y (mgr.14520) 1150 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:46.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:46 vm01 bash[20728]: cluster 2026-03-09T16:24:45.096057+0000 mgr.y (mgr.14520) 1150 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:46.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:46 vm01 bash[20728]: cluster 2026-03-09T16:24:45.096057+0000 mgr.y (mgr.14520) 1150 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:46 vm09 bash[22983]: cluster 2026-03-09T16:24:45.096057+0000 mgr.y (mgr.14520) 1150 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:46 vm09 bash[22983]: cluster 2026-03-09T16:24:45.096057+0000 mgr.y (mgr.14520) 1150 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:47.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:24:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:24:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 
09 16:24:48 vm09 bash[22983]: cluster 2026-03-09T16:24:47.096475+0000 mgr.y (mgr.14520) 1151 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:48 vm09 bash[22983]: cluster 2026-03-09T16:24:47.096475+0000 mgr.y (mgr.14520) 1151 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:48 vm01 bash[28152]: cluster 2026-03-09T16:24:47.096475+0000 mgr.y (mgr.14520) 1151 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:48 vm01 bash[28152]: cluster 2026-03-09T16:24:47.096475+0000 mgr.y (mgr.14520) 1151 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:48.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:48 vm01 bash[20728]: cluster 2026-03-09T16:24:47.096475+0000 mgr.y (mgr.14520) 1151 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:48 vm01 bash[20728]: cluster 2026-03-09T16:24:47.096475+0000 mgr.y (mgr.14520) 1151 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:49 vm09 bash[22983]: audit 2026-03-09T16:24:47.575027+0000 mgr.y (mgr.14520) 1152 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:49 vm09 bash[22983]: audit 2026-03-09T16:24:47.575027+0000 mgr.y (mgr.14520) 1152 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:49 vm01 bash[28152]: audit 2026-03-09T16:24:47.575027+0000 mgr.y (mgr.14520) 1152 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:49 vm01 bash[28152]: audit 2026-03-09T16:24:47.575027+0000 mgr.y (mgr.14520) 1152 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:49.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:49 vm01 bash[20728]: audit 2026-03-09T16:24:47.575027+0000 mgr.y (mgr.14520) 1152 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:49.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:49 vm01 bash[20728]: audit 2026-03-09T16:24:47.575027+0000 mgr.y (mgr.14520) 1152 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:50 vm09 bash[22983]: cluster 2026-03-09T16:24:49.097199+0000 mgr.y (mgr.14520) 1153 : cluster 
[DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:50 vm09 bash[22983]: cluster 2026-03-09T16:24:49.097199+0000 mgr.y (mgr.14520) 1153 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:50 vm01 bash[28152]: cluster 2026-03-09T16:24:49.097199+0000 mgr.y (mgr.14520) 1153 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:50 vm01 bash[28152]: cluster 2026-03-09T16:24:49.097199+0000 mgr.y (mgr.14520) 1153 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:50.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:50 vm01 bash[20728]: cluster 2026-03-09T16:24:49.097199+0000 mgr.y (mgr.14520) 1153 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:50 vm01 bash[20728]: cluster 2026-03-09T16:24:49.097199+0000 mgr.y (mgr.14520) 1153 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:52 vm09 bash[22983]: cluster 2026-03-09T16:24:51.097491+0000 mgr.y (mgr.14520) 1154 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:52 vm09 bash[22983]: cluster 2026-03-09T16:24:51.097491+0000 mgr.y (mgr.14520) 1154 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:52 vm01 bash[28152]: cluster 2026-03-09T16:24:51.097491+0000 mgr.y (mgr.14520) 1154 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:52.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:52 vm01 bash[28152]: cluster 2026-03-09T16:24:51.097491+0000 mgr.y (mgr.14520) 1154 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:52 vm01 bash[20728]: cluster 2026-03-09T16:24:51.097491+0000 mgr.y (mgr.14520) 1154 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:52 vm01 bash[20728]: cluster 2026-03-09T16:24:51.097491+0000 mgr.y (mgr.14520) 1154 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:53.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:24:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:24:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:24:54.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:54 vm09 bash[22983]: cluster 2026-03-09T16:24:53.097845+0000 mgr.y (mgr.14520) 1155 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:54 vm09 bash[22983]: cluster 2026-03-09T16:24:53.097845+0000 mgr.y (mgr.14520) 1155 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:54 vm01 bash[28152]: cluster 2026-03-09T16:24:53.097845+0000 mgr.y (mgr.14520) 1155 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:54 vm01 bash[28152]: cluster 2026-03-09T16:24:53.097845+0000 mgr.y (mgr.14520) 1155 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:54.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:54 vm01 bash[20728]: cluster 2026-03-09T16:24:53.097845+0000 mgr.y (mgr.14520) 1155 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:54.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:54 vm01 bash[20728]: cluster 2026-03-09T16:24:53.097845+0000 mgr.y (mgr.14520) 1155 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T16:24:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:56 vm09 bash[22983]: cluster 2026-03-09T16:24:55.098533+0000 mgr.y (mgr.14520) 1156 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:56 vm09 bash[22983]: cluster 2026-03-09T16:24:55.098533+0000 mgr.y (mgr.14520) 1156 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:56.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:56 vm01 bash[28152]: cluster 2026-03-09T16:24:55.098533+0000 mgr.y (mgr.14520) 1156 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:56.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:56 vm01 bash[28152]: cluster 2026-03-09T16:24:55.098533+0000 mgr.y (mgr.14520) 1156 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:56.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:56 vm01 bash[20728]: cluster 2026-03-09T16:24:55.098533+0000 mgr.y (mgr.14520) 1156 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:56.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:56 vm01 bash[20728]: cluster 2026-03-09T16:24:55.098533+0000 mgr.y (mgr.14520) 1156 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:24:57.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:24:57 vm09 bash[48403]: debug 
there is no tcmu-runner data available 2026-03-09T16:24:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:58 vm09 bash[22983]: cluster 2026-03-09T16:24:57.098848+0000 mgr.y (mgr.14520) 1157 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:58 vm09 bash[22983]: cluster 2026-03-09T16:24:57.098848+0000 mgr.y (mgr.14520) 1157 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:58 vm01 bash[28152]: cluster 2026-03-09T16:24:57.098848+0000 mgr.y (mgr.14520) 1157 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:58 vm01 bash[28152]: cluster 2026-03-09T16:24:57.098848+0000 mgr.y (mgr.14520) 1157 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:58.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:58 vm01 bash[20728]: cluster 2026-03-09T16:24:57.098848+0000 mgr.y (mgr.14520) 1157 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:58.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:58 vm01 bash[20728]: cluster 2026-03-09T16:24:57.098848+0000 mgr.y (mgr.14520) 1157 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:24:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:59 vm09 bash[22983]: audit 2026-03-09T16:24:57.580048+0000 mgr.y (mgr.14520) 1158 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:24:59 vm09 bash[22983]: audit 2026-03-09T16:24:57.580048+0000 mgr.y (mgr.14520) 1158 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:59.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:59 vm01 bash[28152]: audit 2026-03-09T16:24:57.580048+0000 mgr.y (mgr.14520) 1158 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:59.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:24:59 vm01 bash[28152]: audit 2026-03-09T16:24:57.580048+0000 mgr.y (mgr.14520) 1158 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:59.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:59 vm01 bash[20728]: audit 2026-03-09T16:24:57.580048+0000 mgr.y (mgr.14520) 1158 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:24:59.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:24:59 vm01 bash[20728]: audit 2026-03-09T16:24:57.580048+0000 mgr.y (mgr.14520) 1158 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:00.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:00 vm09 bash[22983]: cluster 2026-03-09T16:24:59.099509+0000 mgr.y (mgr.14520) 1159 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:00 vm09 bash[22983]: cluster 2026-03-09T16:24:59.099509+0000 mgr.y (mgr.14520) 1159 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:00 vm09 bash[22983]: audit 2026-03-09T16:24:59.937719+0000 mon.a (mon.0) 3904 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:00 vm09 bash[22983]: audit 2026-03-09T16:24:59.937719+0000 mon.a (mon.0) 3904 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:00 vm01 bash[28152]: cluster 2026-03-09T16:24:59.099509+0000 mgr.y (mgr.14520) 1159 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:00 vm01 bash[28152]: cluster 2026-03-09T16:24:59.099509+0000 mgr.y (mgr.14520) 1159 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:00 vm01 bash[28152]: audit 2026-03-09T16:24:59.937719+0000 mon.a (mon.0) 3904 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:00 vm01 bash[28152]: audit 2026-03-09T16:24:59.937719+0000 mon.a (mon.0) 3904 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:00.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:00 vm01 bash[20728]: cluster 2026-03-09T16:24:59.099509+0000 mgr.y (mgr.14520) 1159 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:00.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:00 vm01 bash[20728]: cluster 2026-03-09T16:24:59.099509+0000 mgr.y (mgr.14520) 1159 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:00.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:00 vm01 bash[20728]: audit 2026-03-09T16:24:59.937719+0000 mon.a (mon.0) 3904 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:00.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:00 vm01 bash[20728]: audit 2026-03-09T16:24:59.937719+0000 mon.a (mon.0) 3904 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
16:25:02 vm09 bash[22983]: cluster 2026-03-09T16:25:01.099831+0000 mgr.y (mgr.14520) 1160 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:02 vm09 bash[22983]: cluster 2026-03-09T16:25:01.099831+0000 mgr.y (mgr.14520) 1160 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:02.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:02 vm01 bash[28152]: cluster 2026-03-09T16:25:01.099831+0000 mgr.y (mgr.14520) 1160 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:02.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:02 vm01 bash[28152]: cluster 2026-03-09T16:25:01.099831+0000 mgr.y (mgr.14520) 1160 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:02.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:02 vm01 bash[20728]: cluster 2026-03-09T16:25:01.099831+0000 mgr.y (mgr.14520) 1160 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:02.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:02 vm01 bash[20728]: cluster 2026-03-09T16:25:01.099831+0000 mgr.y (mgr.14520) 1160 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:03.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:25:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:25:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:25:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:04 vm09 bash[22983]: cluster 2026-03-09T16:25:03.100121+0000 mgr.y (mgr.14520) 1161 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:04 vm09 bash[22983]: cluster 2026-03-09T16:25:03.100121+0000 mgr.y (mgr.14520) 1161 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:04.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:04 vm01 bash[28152]: cluster 2026-03-09T16:25:03.100121+0000 mgr.y (mgr.14520) 1161 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:04.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:04 vm01 bash[28152]: cluster 2026-03-09T16:25:03.100121+0000 mgr.y (mgr.14520) 1161 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:04.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:04 vm01 bash[20728]: cluster 2026-03-09T16:25:03.100121+0000 mgr.y (mgr.14520) 1161 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:04.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:04 vm01 bash[20728]: cluster 2026-03-09T16:25:03.100121+0000 mgr.y (mgr.14520) 1161 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:06 vm09 bash[22983]: cluster 2026-03-09T16:25:05.100817+0000 mgr.y (mgr.14520) 1162 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:06 vm09 bash[22983]: cluster 2026-03-09T16:25:05.100817+0000 mgr.y (mgr.14520) 1162 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:06.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:06 vm01 bash[28152]: cluster 2026-03-09T16:25:05.100817+0000 mgr.y (mgr.14520) 1162 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:06.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:06 vm01 bash[28152]: cluster 2026-03-09T16:25:05.100817+0000 mgr.y (mgr.14520) 1162 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:06.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:06 vm01 bash[20728]: cluster 2026-03-09T16:25:05.100817+0000 mgr.y (mgr.14520) 1162 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:06.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:06 vm01 bash[20728]: cluster 2026-03-09T16:25:05.100817+0000 mgr.y (mgr.14520) 1162 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:06.476484+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:06.476484+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.065573+0000 mon.a (mon.0) 3906 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.065573+0000 mon.a (mon.0) 3906 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.072321+0000 mon.a (mon.0) 3907 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.072321+0000 mon.a (mon.0) 3907 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.088636+0000 mon.a (mon.0) 3908 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.583 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.088636+0000 mon.a (mon.0) 3908 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.095103+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.095103+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.096270+0000 mon.a (mon.0) 3910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.096270+0000 mon.a (mon.0) 3910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.096966+0000 mon.a (mon.0) 3911 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.096966+0000 mon.a (mon.0) 3911 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.102903+0000 mon.a (mon.0) 3912 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.583 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:07 vm09 bash[22983]: audit 2026-03-09T16:25:07.102903+0000 mon.a (mon.0) 3912 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:06.476484+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:06.476484+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.065573+0000 mon.a (mon.0) 3906 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.065573+0000 mon.a (mon.0) 3906 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.072321+0000 mon.a (mon.0) 3907 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 
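The audit entries above show mgr.y dispatching the same small set of mon commands on each cephadm refresh: "config dump", "config generate-minimal-conf", "auth get client.admin", and periodically "osd blocklist ls". A minimal sketch of replaying those exact commands by hand with the ceph CLI, assuming a node that has the CLI and a client.admin keyring (the ceph() helper is illustrative, not part of this run):

import subprocess

def ceph(*args):
    # Run one "ceph ..." command and echo its output; check=True raises on failure.
    out = subprocess.run(('ceph',) + args, check=True,
                         capture_output=True, text=True).stdout
    print('$ ceph', ' '.join(args))
    print(out)
    return out

ceph('config', 'dump', '--format', 'json')          # audit cmd: "config dump"
ceph('config', 'generate-minimal-conf')             # audit cmd: "config generate-minimal-conf"
ceph('auth', 'get', 'client.admin')                 # audit cmd: "auth get", entity client.admin
ceph('osd', 'blocklist', 'ls', '--format', 'json')  # audit cmd: "osd blocklist ls"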
2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.072321+0000 mon.a (mon.0) 3907 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.088636+0000 mon.a (mon.0) 3908 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.088636+0000 mon.a (mon.0) 3908 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.095103+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.095103+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.096270+0000 mon.a (mon.0) 3910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.096270+0000 mon.a (mon.0) 3910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.096966+0000 mon.a (mon.0) 3911 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.096966+0000 mon.a (mon.0) 3911 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.102903+0000 mon.a (mon.0) 3912 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:07 vm01 bash[28152]: audit 2026-03-09T16:25:07.102903+0000 mon.a (mon.0) 3912 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:06.476484+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:06.476484+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.065573+0000 mon.a (mon.0) 3906 : audit [INF] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.065573+0000 mon.a (mon.0) 3906 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.072321+0000 mon.a (mon.0) 3907 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.072321+0000 mon.a (mon.0) 3907 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.088636+0000 mon.a (mon.0) 3908 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.088636+0000 mon.a (mon.0) 3908 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.095103+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.095103+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.096270+0000 mon.a (mon.0) 3910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.096270+0000 mon.a (mon.0) 3910 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.096966+0000 mon.a (mon.0) 3911 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.096966+0000 mon.a (mon.0) 3911 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.102903+0000 mon.a (mon.0) 3912 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:07 vm01 bash[20728]: audit 2026-03-09T16:25:07.102903+0000 mon.a (mon.0) 3912 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:25:07.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:25:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:25:07.882 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:25:07 vm09 bash[50619]: logger=cleanup 
t=2026-03-09T16:25:07.745926541Z level=info msg="Completed cleanup jobs" duration=1.283413ms 2026-03-09T16:25:08.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:25:07 vm09 bash[50619]: logger=plugins.update.checker t=2026-03-09T16:25:07.902667875Z level=info msg="Update check succeeded" duration=51.378246ms 2026-03-09T16:25:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:08 vm09 bash[22983]: cluster 2026-03-09T16:25:07.101068+0000 mgr.y (mgr.14520) 1163 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:08 vm09 bash[22983]: cluster 2026-03-09T16:25:07.101068+0000 mgr.y (mgr.14520) 1163 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:08.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:08 vm01 bash[28152]: cluster 2026-03-09T16:25:07.101068+0000 mgr.y (mgr.14520) 1163 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:08 vm01 bash[28152]: cluster 2026-03-09T16:25:07.101068+0000 mgr.y (mgr.14520) 1163 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:08 vm01 bash[20728]: cluster 2026-03-09T16:25:07.101068+0000 mgr.y (mgr.14520) 1163 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:08 vm01 bash[20728]: cluster 2026-03-09T16:25:07.101068+0000 mgr.y (mgr.14520) 1163 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:09 vm09 bash[22983]: audit 2026-03-09T16:25:07.590312+0000 mgr.y (mgr.14520) 1164 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:09 vm09 bash[22983]: audit 2026-03-09T16:25:07.590312+0000 mgr.y (mgr.14520) 1164 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:09.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:09 vm01 bash[28152]: audit 2026-03-09T16:25:07.590312+0000 mgr.y (mgr.14520) 1164 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:09.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:09 vm01 bash[28152]: audit 2026-03-09T16:25:07.590312+0000 mgr.y (mgr.14520) 1164 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:09.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:09 vm01 bash[20728]: audit 2026-03-09T16:25:07.590312+0000 mgr.y (mgr.14520) 1164 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:09.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
16:25:09 vm01 bash[20728]: audit 2026-03-09T16:25:07.590312+0000 mgr.y (mgr.14520) 1164 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:10 vm09 bash[22983]: cluster 2026-03-09T16:25:09.101766+0000 mgr.y (mgr.14520) 1165 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:10 vm09 bash[22983]: cluster 2026-03-09T16:25:09.101766+0000 mgr.y (mgr.14520) 1165 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:10.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:10 vm01 bash[28152]: cluster 2026-03-09T16:25:09.101766+0000 mgr.y (mgr.14520) 1165 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:10.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:10 vm01 bash[28152]: cluster 2026-03-09T16:25:09.101766+0000 mgr.y (mgr.14520) 1165 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:10.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:10 vm01 bash[20728]: cluster 2026-03-09T16:25:09.101766+0000 mgr.y (mgr.14520) 1165 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:10.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:10 vm01 bash[20728]: cluster 2026-03-09T16:25:09.101766+0000 mgr.y (mgr.14520) 1165 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:11 vm09 bash[22983]: cluster 2026-03-09T16:25:11.102099+0000 mgr.y (mgr.14520) 1166 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:11.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:11 vm09 bash[22983]: cluster 2026-03-09T16:25:11.102099+0000 mgr.y (mgr.14520) 1166 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:11.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:11 vm01 bash[28152]: cluster 2026-03-09T16:25:11.102099+0000 mgr.y (mgr.14520) 1166 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:11.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:11 vm01 bash[28152]: cluster 2026-03-09T16:25:11.102099+0000 mgr.y (mgr.14520) 1166 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:11.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:11 vm01 bash[20728]: cluster 2026-03-09T16:25:11.102099+0000 mgr.y (mgr.14520) 1166 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:11.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:11 vm01 bash[20728]: cluster 2026-03-09T16:25:11.102099+0000 mgr.y (mgr.14520) 
1166 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:13.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:25:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:25:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:25:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:14 vm01 bash[28152]: cluster 2026-03-09T16:25:13.102475+0000 mgr.y (mgr.14520) 1167 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:14.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:14 vm01 bash[28152]: cluster 2026-03-09T16:25:13.102475+0000 mgr.y (mgr.14520) 1167 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:14 vm01 bash[20728]: cluster 2026-03-09T16:25:13.102475+0000 mgr.y (mgr.14520) 1167 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:14.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:14 vm01 bash[20728]: cluster 2026-03-09T16:25:13.102475+0000 mgr.y (mgr.14520) 1167 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:14 vm09 bash[22983]: cluster 2026-03-09T16:25:13.102475+0000 mgr.y (mgr.14520) 1167 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:14 vm09 bash[22983]: cluster 2026-03-09T16:25:13.102475+0000 mgr.y (mgr.14520) 1167 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:15 vm01 bash[28152]: audit 2026-03-09T16:25:14.943886+0000 mon.a (mon.0) 3913 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:15.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:15 vm01 bash[28152]: audit 2026-03-09T16:25:14.943886+0000 mon.a (mon.0) 3913 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:15.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:15 vm01 bash[20728]: audit 2026-03-09T16:25:14.943886+0000 mon.a (mon.0) 3913 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:15.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:15 vm01 bash[20728]: audit 2026-03-09T16:25:14.943886+0000 mon.a (mon.0) 3913 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:15 vm09 bash[22983]: audit 2026-03-09T16:25:14.943886+0000 mon.a (mon.0) 3913 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
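Throughout this window the mgr's pgmap samples stay at "228 pgs: 228 active+clean" with roughly 455 KiB of data and 1.1 GiB used. A minimal sketch, assuming the wrapped log format seen here, of pulling those pgmap summaries out of an archived run log and flagging any sample that is not fully active+clean (the file path and helper name are illustrative):

import re
import sys

# PGMAP_RE matches summaries like:
#   "pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, ..."
PGMAP_RE = re.compile(
    r'pgmap v(?P<ver>\d+): (?P<total>\d+) pgs: (?P<states>.+?); (?P<data>\S+ \S+) data')

def pgmap_samples(path):
    """Yield (version, total_pgs, state_summary, data) for every pgmap line in the log."""
    with open(path) as fh:
        for line in fh:
            for m in PGMAP_RE.finditer(line):   # wrapped lines can hold several entries
                yield int(m['ver']), int(m['total']), m['states'], m['data']

if __name__ == '__main__':
    dirty = [s for s in pgmap_samples(sys.argv[1])
             if s[2] != f'{s[1]} active+clean']
    print('all sampled pgmaps active+clean' if not dirty
          else f'{len(dirty)} samples not fully clean, e.g. {dirty[0]}')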
2026-03-09T16:25:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:15 vm09 bash[22983]: audit 2026-03-09T16:25:14.943886+0000 mon.a (mon.0) 3913 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:16 vm01 bash[28152]: cluster 2026-03-09T16:25:15.103266+0000 mgr.y (mgr.14520) 1168 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:16 vm01 bash[28152]: cluster 2026-03-09T16:25:15.103266+0000 mgr.y (mgr.14520) 1168 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:16 vm01 bash[20728]: cluster 2026-03-09T16:25:15.103266+0000 mgr.y (mgr.14520) 1168 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:16 vm01 bash[20728]: cluster 2026-03-09T16:25:15.103266+0000 mgr.y (mgr.14520) 1168 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:16 vm09 bash[22983]: cluster 2026-03-09T16:25:15.103266+0000 mgr.y (mgr.14520) 1168 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:16 vm09 bash[22983]: cluster 2026-03-09T16:25:15.103266+0000 mgr.y (mgr.14520) 1168 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:17.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:25:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:25:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:18 vm09 bash[22983]: cluster 2026-03-09T16:25:17.103559+0000 mgr.y (mgr.14520) 1169 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:18 vm09 bash[22983]: cluster 2026-03-09T16:25:17.103559+0000 mgr.y (mgr.14520) 1169 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:18.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:18 vm01 bash[28152]: cluster 2026-03-09T16:25:17.103559+0000 mgr.y (mgr.14520) 1169 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:18.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:18 vm01 bash[28152]: cluster 2026-03-09T16:25:17.103559+0000 mgr.y (mgr.14520) 1169 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:18 vm01 bash[20728]: cluster 2026-03-09T16:25:17.103559+0000 mgr.y (mgr.14520) 1169 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 
455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:18 vm01 bash[20728]: cluster 2026-03-09T16:25:17.103559+0000 mgr.y (mgr.14520) 1169 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:19 vm09 bash[22983]: audit 2026-03-09T16:25:17.601057+0000 mgr.y (mgr.14520) 1170 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:19 vm09 bash[22983]: audit 2026-03-09T16:25:17.601057+0000 mgr.y (mgr.14520) 1170 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:19.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:19 vm01 bash[28152]: audit 2026-03-09T16:25:17.601057+0000 mgr.y (mgr.14520) 1170 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:19.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:19 vm01 bash[28152]: audit 2026-03-09T16:25:17.601057+0000 mgr.y (mgr.14520) 1170 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:19.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:19 vm01 bash[20728]: audit 2026-03-09T16:25:17.601057+0000 mgr.y (mgr.14520) 1170 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:19.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:19 vm01 bash[20728]: audit 2026-03-09T16:25:17.601057+0000 mgr.y (mgr.14520) 1170 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:20 vm09 bash[22983]: cluster 2026-03-09T16:25:19.105071+0000 mgr.y (mgr.14520) 1171 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:20 vm09 bash[22983]: cluster 2026-03-09T16:25:19.105071+0000 mgr.y (mgr.14520) 1171 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:20.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:20 vm01 bash[28152]: cluster 2026-03-09T16:25:19.105071+0000 mgr.y (mgr.14520) 1171 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:20.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:20 vm01 bash[28152]: cluster 2026-03-09T16:25:19.105071+0000 mgr.y (mgr.14520) 1171 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:20.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:20 vm01 bash[20728]: cluster 2026-03-09T16:25:19.105071+0000 mgr.y (mgr.14520) 1171 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
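Most of the volume here is the three mons echoing the same cluster/audit records, plus the iscsi gateway repeating "there is no tcmu-runner data available" roughly every ten seconds and Prometheus receiving 503s from the mgr /metrics endpoint. A minimal sketch, assuming the journalctl@<unit>.stdout prefixes used in this log, of counting entries per daemon to see which units dominate (the helper name is illustrative):

import re
import sys
from collections import Counter

# UNIT_RE pulls the unit name out of prefixes like
#   "INFO:journalctl@ceph.mon.a.vm01.stdout:"
UNIT_RE = re.compile(r'INFO:journalctl@(?P<unit>[\w.\-]+)\.stdout:')

def unit_histogram(path):
    counts = Counter()
    with open(path) as fh:
        for line in fh:
            for m in UNIT_RE.finditer(line):    # wrapped lines carry several prefixes
                counts[m['unit']] += 1
    return counts

if __name__ == '__main__':
    for unit, n in unit_histogram(sys.argv[1]).most_common():
        print(f'{n:8d}  {unit}')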
2026-03-09T16:25:20.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:20 vm01 bash[20728]: cluster 2026-03-09T16:25:19.105071+0000 mgr.y (mgr.14520) 1171 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:22 vm09 bash[22983]: cluster 2026-03-09T16:25:21.105363+0000 mgr.y (mgr.14520) 1172 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:22 vm09 bash[22983]: cluster 2026-03-09T16:25:21.105363+0000 mgr.y (mgr.14520) 1172 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:22.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:22 vm01 bash[28152]: cluster 2026-03-09T16:25:21.105363+0000 mgr.y (mgr.14520) 1172 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:22.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:22 vm01 bash[28152]: cluster 2026-03-09T16:25:21.105363+0000 mgr.y (mgr.14520) 1172 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:22 vm01 bash[20728]: cluster 2026-03-09T16:25:21.105363+0000 mgr.y (mgr.14520) 1172 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:22 vm01 bash[20728]: cluster 2026-03-09T16:25:21.105363+0000 mgr.y (mgr.14520) 1172 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:23.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:25:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:25:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:25:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:24 vm09 bash[22983]: cluster 2026-03-09T16:25:23.105636+0000 mgr.y (mgr.14520) 1173 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:24 vm09 bash[22983]: cluster 2026-03-09T16:25:23.105636+0000 mgr.y (mgr.14520) 1173 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:24.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:24 vm01 bash[28152]: cluster 2026-03-09T16:25:23.105636+0000 mgr.y (mgr.14520) 1173 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:24.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:24 vm01 bash[28152]: cluster 2026-03-09T16:25:23.105636+0000 mgr.y (mgr.14520) 1173 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:24 vm01 bash[20728]: cluster 2026-03-09T16:25:23.105636+0000 mgr.y (mgr.14520) 1173 : cluster [DBG] pgmap 
v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:24 vm01 bash[20728]: cluster 2026-03-09T16:25:23.105636+0000 mgr.y (mgr.14520) 1173 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:26 vm09 bash[22983]: cluster 2026-03-09T16:25:25.106397+0000 mgr.y (mgr.14520) 1174 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:26 vm09 bash[22983]: cluster 2026-03-09T16:25:25.106397+0000 mgr.y (mgr.14520) 1174 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:26.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:26 vm01 bash[28152]: cluster 2026-03-09T16:25:25.106397+0000 mgr.y (mgr.14520) 1174 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:26.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:26 vm01 bash[28152]: cluster 2026-03-09T16:25:25.106397+0000 mgr.y (mgr.14520) 1174 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:26 vm01 bash[20728]: cluster 2026-03-09T16:25:25.106397+0000 mgr.y (mgr.14520) 1174 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:26 vm01 bash[20728]: cluster 2026-03-09T16:25:25.106397+0000 mgr.y (mgr.14520) 1174 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:27.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:25:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:25:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:28 vm09 bash[22983]: cluster 2026-03-09T16:25:27.106674+0000 mgr.y (mgr.14520) 1175 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:28 vm09 bash[22983]: cluster 2026-03-09T16:25:27.106674+0000 mgr.y (mgr.14520) 1175 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:28 vm01 bash[28152]: cluster 2026-03-09T16:25:27.106674+0000 mgr.y (mgr.14520) 1175 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:28 vm01 bash[28152]: cluster 2026-03-09T16:25:27.106674+0000 mgr.y (mgr.14520) 1175 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:28.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:28 vm01 bash[20728]: cluster 
2026-03-09T16:25:27.106674+0000 mgr.y (mgr.14520) 1175 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:28.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:28 vm01 bash[20728]: cluster 2026-03-09T16:25:27.106674+0000 mgr.y (mgr.14520) 1175 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:29 vm09 bash[22983]: audit 2026-03-09T16:25:27.611838+0000 mgr.y (mgr.14520) 1176 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:29 vm09 bash[22983]: audit 2026-03-09T16:25:27.611838+0000 mgr.y (mgr.14520) 1176 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:29 vm01 bash[28152]: audit 2026-03-09T16:25:27.611838+0000 mgr.y (mgr.14520) 1176 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:29 vm01 bash[28152]: audit 2026-03-09T16:25:27.611838+0000 mgr.y (mgr.14520) 1176 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:29 vm01 bash[20728]: audit 2026-03-09T16:25:27.611838+0000 mgr.y (mgr.14520) 1176 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:29 vm01 bash[20728]: audit 2026-03-09T16:25:27.611838+0000 mgr.y (mgr.14520) 1176 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:30 vm09 bash[22983]: cluster 2026-03-09T16:25:29.107416+0000 mgr.y (mgr.14520) 1177 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:30 vm09 bash[22983]: cluster 2026-03-09T16:25:29.107416+0000 mgr.y (mgr.14520) 1177 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:30 vm09 bash[22983]: audit 2026-03-09T16:25:29.949823+0000 mon.a (mon.0) 3914 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:30 vm09 bash[22983]: audit 2026-03-09T16:25:29.949823+0000 mon.a (mon.0) 3914 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:30 vm01 bash[28152]: cluster 2026-03-09T16:25:29.107416+0000 mgr.y (mgr.14520) 1177 : cluster [DBG] pgmap v1599: 228 
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:30 vm01 bash[28152]: cluster 2026-03-09T16:25:29.107416+0000 mgr.y (mgr.14520) 1177 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:30 vm01 bash[28152]: audit 2026-03-09T16:25:29.949823+0000 mon.a (mon.0) 3914 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:30 vm01 bash[28152]: audit 2026-03-09T16:25:29.949823+0000 mon.a (mon.0) 3914 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:30 vm01 bash[20728]: cluster 2026-03-09T16:25:29.107416+0000 mgr.y (mgr.14520) 1177 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:30 vm01 bash[20728]: cluster 2026-03-09T16:25:29.107416+0000 mgr.y (mgr.14520) 1177 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:30 vm01 bash[20728]: audit 2026-03-09T16:25:29.949823+0000 mon.a (mon.0) 3914 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:30 vm01 bash[20728]: audit 2026-03-09T16:25:29.949823+0000 mon.a (mon.0) 3914 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:32 vm09 bash[22983]: cluster 2026-03-09T16:25:31.107804+0000 mgr.y (mgr.14520) 1178 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:32 vm09 bash[22983]: cluster 2026-03-09T16:25:31.107804+0000 mgr.y (mgr.14520) 1178 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:32 vm01 bash[28152]: cluster 2026-03-09T16:25:31.107804+0000 mgr.y (mgr.14520) 1178 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:32 vm01 bash[28152]: cluster 2026-03-09T16:25:31.107804+0000 mgr.y (mgr.14520) 1178 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:32.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:32 vm01 bash[20728]: cluster 2026-03-09T16:25:31.107804+0000 mgr.y (mgr.14520) 1178 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:32.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:32 vm01 bash[20728]: cluster 2026-03-09T16:25:31.107804+0000 mgr.y (mgr.14520) 1178 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:33.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:25:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:25:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:25:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:34 vm09 bash[22983]: cluster 2026-03-09T16:25:33.108291+0000 mgr.y (mgr.14520) 1179 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:34 vm09 bash[22983]: cluster 2026-03-09T16:25:33.108291+0000 mgr.y (mgr.14520) 1179 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:34.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:34 vm01 bash[28152]: cluster 2026-03-09T16:25:33.108291+0000 mgr.y (mgr.14520) 1179 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:34.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:34 vm01 bash[28152]: cluster 2026-03-09T16:25:33.108291+0000 mgr.y (mgr.14520) 1179 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:34.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:34 vm01 bash[20728]: cluster 2026-03-09T16:25:33.108291+0000 mgr.y (mgr.14520) 1179 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:34.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:34 vm01 bash[20728]: cluster 2026-03-09T16:25:33.108291+0000 mgr.y (mgr.14520) 1179 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:36 vm09 bash[22983]: cluster 2026-03-09T16:25:35.108918+0000 mgr.y (mgr.14520) 1180 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:36 vm09 bash[22983]: cluster 2026-03-09T16:25:35.108918+0000 mgr.y (mgr.14520) 1180 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:36.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:36 vm01 bash[28152]: cluster 2026-03-09T16:25:35.108918+0000 mgr.y (mgr.14520) 1180 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:36.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:36 vm01 bash[28152]: cluster 2026-03-09T16:25:35.108918+0000 mgr.y (mgr.14520) 1180 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:36.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:36 vm01 bash[20728]: cluster 2026-03-09T16:25:35.108918+0000 mgr.y 
(mgr.14520) 1180 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:36.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:36 vm01 bash[20728]: cluster 2026-03-09T16:25:35.108918+0000 mgr.y (mgr.14520) 1180 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:37.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:25:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:25:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:38 vm09 bash[22983]: cluster 2026-03-09T16:25:37.109216+0000 mgr.y (mgr.14520) 1181 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:38 vm09 bash[22983]: cluster 2026-03-09T16:25:37.109216+0000 mgr.y (mgr.14520) 1181 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:38.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:38 vm01 bash[28152]: cluster 2026-03-09T16:25:37.109216+0000 mgr.y (mgr.14520) 1181 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:38.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:38 vm01 bash[28152]: cluster 2026-03-09T16:25:37.109216+0000 mgr.y (mgr.14520) 1181 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:38 vm01 bash[20728]: cluster 2026-03-09T16:25:37.109216+0000 mgr.y (mgr.14520) 1181 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:38 vm01 bash[20728]: cluster 2026-03-09T16:25:37.109216+0000 mgr.y (mgr.14520) 1181 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:39 vm09 bash[22983]: audit 2026-03-09T16:25:37.622561+0000 mgr.y (mgr.14520) 1182 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:39 vm09 bash[22983]: audit 2026-03-09T16:25:37.622561+0000 mgr.y (mgr.14520) 1182 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:39.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:39 vm01 bash[28152]: audit 2026-03-09T16:25:37.622561+0000 mgr.y (mgr.14520) 1182 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:39.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:39 vm01 bash[28152]: audit 2026-03-09T16:25:37.622561+0000 mgr.y (mgr.14520) 1182 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:39 vm01 
bash[20728]: audit 2026-03-09T16:25:37.622561+0000 mgr.y (mgr.14520) 1182 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:39 vm01 bash[20728]: audit 2026-03-09T16:25:37.622561+0000 mgr.y (mgr.14520) 1182 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:40 vm09 bash[22983]: cluster 2026-03-09T16:25:39.109957+0000 mgr.y (mgr.14520) 1183 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:40 vm09 bash[22983]: cluster 2026-03-09T16:25:39.109957+0000 mgr.y (mgr.14520) 1183 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:40 vm01 bash[28152]: cluster 2026-03-09T16:25:39.109957+0000 mgr.y (mgr.14520) 1183 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:40 vm01 bash[28152]: cluster 2026-03-09T16:25:39.109957+0000 mgr.y (mgr.14520) 1183 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:40.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:40 vm01 bash[20728]: cluster 2026-03-09T16:25:39.109957+0000 mgr.y (mgr.14520) 1183 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:40.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:40 vm01 bash[20728]: cluster 2026-03-09T16:25:39.109957+0000 mgr.y (mgr.14520) 1183 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:41.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:41 vm09 bash[22983]: cluster 2026-03-09T16:25:41.110286+0000 mgr.y (mgr.14520) 1184 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:41.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:41 vm09 bash[22983]: cluster 2026-03-09T16:25:41.110286+0000 mgr.y (mgr.14520) 1184 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:41.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:41 vm01 bash[28152]: cluster 2026-03-09T16:25:41.110286+0000 mgr.y (mgr.14520) 1184 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:41.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:41 vm01 bash[28152]: cluster 2026-03-09T16:25:41.110286+0000 mgr.y (mgr.14520) 1184 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:41.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:41 vm01 bash[20728]: cluster 2026-03-09T16:25:41.110286+0000 mgr.y (mgr.14520) 1184 : cluster 
[DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:41.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:41 vm01 bash[20728]: cluster 2026-03-09T16:25:41.110286+0000 mgr.y (mgr.14520) 1184 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:43.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:25:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:25:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:25:44.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:44 vm01 bash[28152]: cluster 2026-03-09T16:25:43.110763+0000 mgr.y (mgr.14520) 1185 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:44.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:44 vm01 bash[28152]: cluster 2026-03-09T16:25:43.110763+0000 mgr.y (mgr.14520) 1185 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:44 vm01 bash[20728]: cluster 2026-03-09T16:25:43.110763+0000 mgr.y (mgr.14520) 1185 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:44 vm01 bash[20728]: cluster 2026-03-09T16:25:43.110763+0000 mgr.y (mgr.14520) 1185 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:44 vm09 bash[22983]: cluster 2026-03-09T16:25:43.110763+0000 mgr.y (mgr.14520) 1185 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:44 vm09 bash[22983]: cluster 2026-03-09T16:25:43.110763+0000 mgr.y (mgr.14520) 1185 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:45 vm09 bash[22983]: audit 2026-03-09T16:25:44.956324+0000 mon.a (mon.0) 3915 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:45 vm09 bash[22983]: audit 2026-03-09T16:25:44.956324+0000 mon.a (mon.0) 3915 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:45 vm01 bash[28152]: audit 2026-03-09T16:25:44.956324+0000 mon.a (mon.0) 3915 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:45 vm01 bash[28152]: audit 2026-03-09T16:25:44.956324+0000 mon.a (mon.0) 3915 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:45.672 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:45 vm01 bash[20728]: audit 2026-03-09T16:25:44.956324+0000 mon.a (mon.0) 3915 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:45.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:45 vm01 bash[20728]: audit 2026-03-09T16:25:44.956324+0000 mon.a (mon.0) 3915 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:25:46.132 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:25:45 vm09 bash[50619]: logger=infra.usagestats t=2026-03-09T16:25:45.774549638Z level=info msg="Usage stats are ready to report" 2026-03-09T16:25:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:46 vm09 bash[22983]: cluster 2026-03-09T16:25:45.111457+0000 mgr.y (mgr.14520) 1186 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:46 vm09 bash[22983]: cluster 2026-03-09T16:25:45.111457+0000 mgr.y (mgr.14520) 1186 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:46.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:46 vm01 bash[28152]: cluster 2026-03-09T16:25:45.111457+0000 mgr.y (mgr.14520) 1186 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:46.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:46 vm01 bash[28152]: cluster 2026-03-09T16:25:45.111457+0000 mgr.y (mgr.14520) 1186 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:46 vm01 bash[20728]: cluster 2026-03-09T16:25:45.111457+0000 mgr.y (mgr.14520) 1186 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:46 vm01 bash[20728]: cluster 2026-03-09T16:25:45.111457+0000 mgr.y (mgr.14520) 1186 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:47.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:25:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:25:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:48 vm09 bash[22983]: cluster 2026-03-09T16:25:47.111735+0000 mgr.y (mgr.14520) 1187 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:48 vm09 bash[22983]: cluster 2026-03-09T16:25:47.111735+0000 mgr.y (mgr.14520) 1187 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:48 vm01 bash[28152]: cluster 2026-03-09T16:25:47.111735+0000 mgr.y (mgr.14520) 1187 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:48.672 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:48 vm01 bash[28152]: cluster 2026-03-09T16:25:47.111735+0000 mgr.y (mgr.14520) 1187 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:48 vm01 bash[20728]: cluster 2026-03-09T16:25:47.111735+0000 mgr.y (mgr.14520) 1187 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:48 vm01 bash[20728]: cluster 2026-03-09T16:25:47.111735+0000 mgr.y (mgr.14520) 1187 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:49 vm09 bash[22983]: audit 2026-03-09T16:25:47.630265+0000 mgr.y (mgr.14520) 1188 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:49 vm09 bash[22983]: audit 2026-03-09T16:25:47.630265+0000 mgr.y (mgr.14520) 1188 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:49 vm01 bash[28152]: audit 2026-03-09T16:25:47.630265+0000 mgr.y (mgr.14520) 1188 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:49 vm01 bash[28152]: audit 2026-03-09T16:25:47.630265+0000 mgr.y (mgr.14520) 1188 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:49.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:49 vm01 bash[20728]: audit 2026-03-09T16:25:47.630265+0000 mgr.y (mgr.14520) 1188 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:49 vm01 bash[20728]: audit 2026-03-09T16:25:47.630265+0000 mgr.y (mgr.14520) 1188 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:50 vm09 bash[22983]: cluster 2026-03-09T16:25:49.112601+0000 mgr.y (mgr.14520) 1189 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:50 vm09 bash[22983]: cluster 2026-03-09T16:25:49.112601+0000 mgr.y (mgr.14520) 1189 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:50 vm01 bash[28152]: cluster 2026-03-09T16:25:49.112601+0000 mgr.y (mgr.14520) 1189 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:50 vm01 bash[28152]: cluster 
2026-03-09T16:25:49.112601+0000 mgr.y (mgr.14520) 1189 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:50 vm01 bash[20728]: cluster 2026-03-09T16:25:49.112601+0000 mgr.y (mgr.14520) 1189 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:50 vm01 bash[20728]: cluster 2026-03-09T16:25:49.112601+0000 mgr.y (mgr.14520) 1189 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:52 vm09 bash[22983]: cluster 2026-03-09T16:25:51.112895+0000 mgr.y (mgr.14520) 1190 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:52 vm09 bash[22983]: cluster 2026-03-09T16:25:51.112895+0000 mgr.y (mgr.14520) 1190 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:52 vm01 bash[28152]: cluster 2026-03-09T16:25:51.112895+0000 mgr.y (mgr.14520) 1190 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:52.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:52 vm01 bash[28152]: cluster 2026-03-09T16:25:51.112895+0000 mgr.y (mgr.14520) 1190 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:52 vm01 bash[20728]: cluster 2026-03-09T16:25:51.112895+0000 mgr.y (mgr.14520) 1190 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:52 vm01 bash[20728]: cluster 2026-03-09T16:25:51.112895+0000 mgr.y (mgr.14520) 1190 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:53.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:25:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:25:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:25:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:54 vm09 bash[22983]: cluster 2026-03-09T16:25:53.113235+0000 mgr.y (mgr.14520) 1191 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:54 vm09 bash[22983]: cluster 2026-03-09T16:25:53.113235+0000 mgr.y (mgr.14520) 1191 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:54 vm01 bash[28152]: cluster 2026-03-09T16:25:53.113235+0000 mgr.y (mgr.14520) 1191 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-09T16:25:54.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:54 vm01 bash[28152]: cluster 2026-03-09T16:25:53.113235+0000 mgr.y (mgr.14520) 1191 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:54 vm01 bash[20728]: cluster 2026-03-09T16:25:53.113235+0000 mgr.y (mgr.14520) 1191 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:54 vm01 bash[20728]: cluster 2026-03-09T16:25:53.113235+0000 mgr.y (mgr.14520) 1191 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:56 vm09 bash[22983]: cluster 2026-03-09T16:25:55.114185+0000 mgr.y (mgr.14520) 1192 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:56 vm09 bash[22983]: cluster 2026-03-09T16:25:55.114185+0000 mgr.y (mgr.14520) 1192 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:56.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:56 vm01 bash[28152]: cluster 2026-03-09T16:25:55.114185+0000 mgr.y (mgr.14520) 1192 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:56.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:56 vm01 bash[28152]: cluster 2026-03-09T16:25:55.114185+0000 mgr.y (mgr.14520) 1192 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:56.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:56 vm01 bash[20728]: cluster 2026-03-09T16:25:55.114185+0000 mgr.y (mgr.14520) 1192 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:56.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:56 vm01 bash[20728]: cluster 2026-03-09T16:25:55.114185+0000 mgr.y (mgr.14520) 1192 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:25:58.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:25:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:25:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:58 vm09 bash[22983]: cluster 2026-03-09T16:25:57.114487+0000 mgr.y (mgr.14520) 1193 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:58 vm09 bash[22983]: cluster 2026-03-09T16:25:57.114487+0000 mgr.y (mgr.14520) 1193 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:58 vm01 bash[28152]: cluster 2026-03-09T16:25:57.114487+0000 mgr.y (mgr.14520) 1193 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 
KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:58.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:58 vm01 bash[28152]: cluster 2026-03-09T16:25:57.114487+0000 mgr.y (mgr.14520) 1193 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:58 vm01 bash[20728]: cluster 2026-03-09T16:25:57.114487+0000 mgr.y (mgr.14520) 1193 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:58 vm01 bash[20728]: cluster 2026-03-09T16:25:57.114487+0000 mgr.y (mgr.14520) 1193 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:25:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:59 vm09 bash[22983]: audit 2026-03-09T16:25:57.641004+0000 mgr.y (mgr.14520) 1194 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:25:59 vm09 bash[22983]: audit 2026-03-09T16:25:57.641004+0000 mgr.y (mgr.14520) 1194 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:59.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:59 vm01 bash[28152]: audit 2026-03-09T16:25:57.641004+0000 mgr.y (mgr.14520) 1194 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:59.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:25:59 vm01 bash[28152]: audit 2026-03-09T16:25:57.641004+0000 mgr.y (mgr.14520) 1194 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:59.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:59 vm01 bash[20728]: audit 2026-03-09T16:25:57.641004+0000 mgr.y (mgr.14520) 1194 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:25:59.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:25:59 vm01 bash[20728]: audit 2026-03-09T16:25:57.641004+0000 mgr.y (mgr.14520) 1194 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:00 vm09 bash[22983]: cluster 2026-03-09T16:25:59.115169+0000 mgr.y (mgr.14520) 1195 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:00 vm09 bash[22983]: cluster 2026-03-09T16:25:59.115169+0000 mgr.y (mgr.14520) 1195 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:00 vm09 bash[22983]: audit 2026-03-09T16:25:59.962136+0000 mon.a (mon.0) 3916 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:00.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:00 vm09 bash[22983]: audit 2026-03-09T16:25:59.962136+0000 mon.a (mon.0) 3916 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:00 vm01 bash[28152]: cluster 2026-03-09T16:25:59.115169+0000 mgr.y (mgr.14520) 1195 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:00 vm01 bash[28152]: cluster 2026-03-09T16:25:59.115169+0000 mgr.y (mgr.14520) 1195 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:00 vm01 bash[28152]: audit 2026-03-09T16:25:59.962136+0000 mon.a (mon.0) 3916 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:00 vm01 bash[28152]: audit 2026-03-09T16:25:59.962136+0000 mon.a (mon.0) 3916 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:00 vm01 bash[20728]: cluster 2026-03-09T16:25:59.115169+0000 mgr.y (mgr.14520) 1195 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:00 vm01 bash[20728]: cluster 2026-03-09T16:25:59.115169+0000 mgr.y (mgr.14520) 1195 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:00 vm01 bash[20728]: audit 2026-03-09T16:25:59.962136+0000 mon.a (mon.0) 3916 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:00 vm01 bash[20728]: audit 2026-03-09T16:25:59.962136+0000 mon.a (mon.0) 3916 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:02 vm09 bash[22983]: cluster 2026-03-09T16:26:01.115503+0000 mgr.y (mgr.14520) 1196 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:02.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:02 vm09 bash[22983]: cluster 2026-03-09T16:26:01.115503+0000 mgr.y (mgr.14520) 1196 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:02.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:02 vm01 bash[28152]: cluster 2026-03-09T16:26:01.115503+0000 mgr.y (mgr.14520) 1196 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:02.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:02 vm01 
bash[28152]: cluster 2026-03-09T16:26:01.115503+0000 mgr.y (mgr.14520) 1196 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:02.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:02 vm01 bash[20728]: cluster 2026-03-09T16:26:01.115503+0000 mgr.y (mgr.14520) 1196 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:02.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:02 vm01 bash[20728]: cluster 2026-03-09T16:26:01.115503+0000 mgr.y (mgr.14520) 1196 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:03.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:26:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:26:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:26:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:04 vm09 bash[22983]: cluster 2026-03-09T16:26:03.115809+0000 mgr.y (mgr.14520) 1197 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:04 vm09 bash[22983]: cluster 2026-03-09T16:26:03.115809+0000 mgr.y (mgr.14520) 1197 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:04.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:04 vm01 bash[28152]: cluster 2026-03-09T16:26:03.115809+0000 mgr.y (mgr.14520) 1197 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:04.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:04 vm01 bash[28152]: cluster 2026-03-09T16:26:03.115809+0000 mgr.y (mgr.14520) 1197 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:04.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:04 vm01 bash[20728]: cluster 2026-03-09T16:26:03.115809+0000 mgr.y (mgr.14520) 1197 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:04.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:04 vm01 bash[20728]: cluster 2026-03-09T16:26:03.115809+0000 mgr.y (mgr.14520) 1197 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:06 vm09 bash[22983]: cluster 2026-03-09T16:26:05.116497+0000 mgr.y (mgr.14520) 1198 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:06 vm09 bash[22983]: cluster 2026-03-09T16:26:05.116497+0000 mgr.y (mgr.14520) 1198 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:06.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:06 vm01 bash[28152]: cluster 2026-03-09T16:26:05.116497+0000 mgr.y (mgr.14520) 1198 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:06.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:06 vm01 bash[28152]: cluster 2026-03-09T16:26:05.116497+0000 mgr.y (mgr.14520) 1198 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:06.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:06 vm01 bash[20728]: cluster 2026-03-09T16:26:05.116497+0000 mgr.y (mgr.14520) 1198 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:06.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:06 vm01 bash[20728]: cluster 2026-03-09T16:26:05.116497+0000 mgr.y (mgr.14520) 1198 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:07 vm09 bash[22983]: audit 2026-03-09T16:26:07.141633+0000 mon.a (mon.0) 3917 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:26:07.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:07 vm09 bash[22983]: audit 2026-03-09T16:26:07.141633+0000 mon.a (mon.0) 3917 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:26:07.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:07 vm01 bash[28152]: audit 2026-03-09T16:26:07.141633+0000 mon.a (mon.0) 3917 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:26:07.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:07 vm01 bash[28152]: audit 2026-03-09T16:26:07.141633+0000 mon.a (mon.0) 3917 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:26:07.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:07 vm01 bash[20728]: audit 2026-03-09T16:26:07.141633+0000 mon.a (mon.0) 3917 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:26:07.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:07 vm01 bash[20728]: audit 2026-03-09T16:26:07.141633+0000 mon.a (mon.0) 3917 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:26:08.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:26:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: cluster 2026-03-09T16:26:07.116777+0000 mgr.y (mgr.14520) 1199 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: cluster 2026-03-09T16:26:07.116777+0000 mgr.y (mgr.14520) 1199 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.476845+0000 mon.a (mon.0) 3918 : audit [INF] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.476845+0000 mon.a (mon.0) 3918 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.486179+0000 mon.a (mon.0) 3919 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.486179+0000 mon.a (mon.0) 3919 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.487041+0000 mon.a (mon.0) 3920 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.487041+0000 mon.a (mon.0) 3920 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.487528+0000 mon.a (mon.0) 3921 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.487528+0000 mon.a (mon.0) 3921 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.495203+0000 mon.a (mon.0) 3922 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.495203+0000 mon.a (mon.0) 3922 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.650341+0000 mgr.y (mgr.14520) 1200 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:08 vm09 bash[22983]: audit 2026-03-09T16:26:07.650341+0000 mgr.y (mgr.14520) 1200 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: cluster 2026-03-09T16:26:07.116777+0000 mgr.y (mgr.14520) 1199 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: cluster 2026-03-09T16:26:07.116777+0000 mgr.y (mgr.14520) 1199 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.476845+0000 mon.a (mon.0) 3918 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.476845+0000 mon.a (mon.0) 3918 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.486179+0000 mon.a (mon.0) 3919 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.486179+0000 mon.a (mon.0) 3919 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.487041+0000 mon.a (mon.0) 3920 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.487041+0000 mon.a (mon.0) 3920 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.487528+0000 mon.a (mon.0) 3921 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.487528+0000 mon.a (mon.0) 3921 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.495203+0000 mon.a (mon.0) 3922 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.495203+0000 mon.a (mon.0) 3922 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.650341+0000 mgr.y (mgr.14520) 1200 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:08.673 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:08 vm01 bash[28152]: audit 2026-03-09T16:26:07.650341+0000 mgr.y (mgr.14520) 1200 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: cluster 2026-03-09T16:26:07.116777+0000 mgr.y (mgr.14520) 1199 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: cluster 2026-03-09T16:26:07.116777+0000 mgr.y (mgr.14520) 1199 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.476845+0000 mon.a (mon.0) 3918 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.476845+0000 mon.a (mon.0) 3918 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.486179+0000 mon.a (mon.0) 3919 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.486179+0000 mon.a (mon.0) 3919 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.487041+0000 mon.a (mon.0) 3920 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.487041+0000 mon.a (mon.0) 3920 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.487528+0000 mon.a (mon.0) 3921 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.487528+0000 mon.a (mon.0) 3921 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.495203+0000 mon.a (mon.0) 3922 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:26:08.673 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.495203+0000 mon.a (mon.0) 3922 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.650341+0000 mgr.y (mgr.14520) 1200 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:08 vm01 bash[20728]: audit 2026-03-09T16:26:07.650341+0000 mgr.y (mgr.14520) 1200 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:09 vm09 bash[22983]: cluster 2026-03-09T16:26:09.117560+0000 mgr.y (mgr.14520) 1201 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:09 vm09 bash[22983]: cluster 2026-03-09T16:26:09.117560+0000 mgr.y (mgr.14520) 1201 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:09.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:09 vm01 bash[28152]: cluster 2026-03-09T16:26:09.117560+0000 mgr.y (mgr.14520) 1201 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:09.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:09 vm01 bash[28152]: cluster 2026-03-09T16:26:09.117560+0000 mgr.y (mgr.14520) 1201 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:09.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:09 vm01 bash[20728]: cluster 2026-03-09T16:26:09.117560+0000 mgr.y (mgr.14520) 1201 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:09.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:09 vm01 bash[20728]: cluster 2026-03-09T16:26:09.117560+0000 mgr.y (mgr.14520) 1201 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:12 vm09 bash[22983]: cluster 2026-03-09T16:26:11.117923+0000 mgr.y (mgr.14520) 1202 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:12 vm09 bash[22983]: cluster 2026-03-09T16:26:11.117923+0000 mgr.y (mgr.14520) 1202 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:12.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:12 vm01 bash[28152]: cluster 2026-03-09T16:26:11.117923+0000 mgr.y (mgr.14520) 1202 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:12.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:12 vm01 bash[28152]: cluster 2026-03-09T16:26:11.117923+0000 mgr.y (mgr.14520) 1202 : cluster 
[DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:12.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:12 vm01 bash[20728]: cluster 2026-03-09T16:26:11.117923+0000 mgr.y (mgr.14520) 1202 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:12.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:12 vm01 bash[20728]: cluster 2026-03-09T16:26:11.117923+0000 mgr.y (mgr.14520) 1202 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:13.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:26:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:26:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:26:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:14 vm09 bash[22983]: cluster 2026-03-09T16:26:13.118234+0000 mgr.y (mgr.14520) 1203 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:14 vm09 bash[22983]: cluster 2026-03-09T16:26:13.118234+0000 mgr.y (mgr.14520) 1203 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:14.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:14 vm01 bash[28152]: cluster 2026-03-09T16:26:13.118234+0000 mgr.y (mgr.14520) 1203 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:14.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:14 vm01 bash[28152]: cluster 2026-03-09T16:26:13.118234+0000 mgr.y (mgr.14520) 1203 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:14.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:14 vm01 bash[20728]: cluster 2026-03-09T16:26:13.118234+0000 mgr.y (mgr.14520) 1203 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:14.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:14 vm01 bash[20728]: cluster 2026-03-09T16:26:13.118234+0000 mgr.y (mgr.14520) 1203 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:15 vm09 bash[22983]: audit 2026-03-09T16:26:14.967826+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:15 vm09 bash[22983]: audit 2026-03-09T16:26:14.967826+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:15.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:15 vm01 bash[28152]: audit 2026-03-09T16:26:14.967826+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:15.672 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:15 vm01 bash[28152]: audit 2026-03-09T16:26:14.967826+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:15.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:15 vm01 bash[20728]: audit 2026-03-09T16:26:14.967826+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:15.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:15 vm01 bash[20728]: audit 2026-03-09T16:26:14.967826+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:16 vm09 bash[22983]: cluster 2026-03-09T16:26:15.118840+0000 mgr.y (mgr.14520) 1204 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:16 vm09 bash[22983]: cluster 2026-03-09T16:26:15.118840+0000 mgr.y (mgr.14520) 1204 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:16.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:16 vm01 bash[28152]: cluster 2026-03-09T16:26:15.118840+0000 mgr.y (mgr.14520) 1204 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:16.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:16 vm01 bash[28152]: cluster 2026-03-09T16:26:15.118840+0000 mgr.y (mgr.14520) 1204 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:16.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:16 vm01 bash[20728]: cluster 2026-03-09T16:26:15.118840+0000 mgr.y (mgr.14520) 1204 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:16 vm01 bash[20728]: cluster 2026-03-09T16:26:15.118840+0000 mgr.y (mgr.14520) 1204 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:18.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:26:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:26:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:18 vm09 bash[22983]: cluster 2026-03-09T16:26:17.119235+0000 mgr.y (mgr.14520) 1205 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:18 vm09 bash[22983]: cluster 2026-03-09T16:26:17.119235+0000 mgr.y (mgr.14520) 1205 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:18.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:18 vm01 bash[28152]: cluster 2026-03-09T16:26:17.119235+0000 mgr.y (mgr.14520) 1205 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 
1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:18.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:18 vm01 bash[28152]: cluster 2026-03-09T16:26:17.119235+0000 mgr.y (mgr.14520) 1205 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:18 vm01 bash[20728]: cluster 2026-03-09T16:26:17.119235+0000 mgr.y (mgr.14520) 1205 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:18 vm01 bash[20728]: cluster 2026-03-09T16:26:17.119235+0000 mgr.y (mgr.14520) 1205 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:19 vm09 bash[22983]: audit 2026-03-09T16:26:17.658323+0000 mgr.y (mgr.14520) 1206 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:19 vm09 bash[22983]: audit 2026-03-09T16:26:17.658323+0000 mgr.y (mgr.14520) 1206 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:19.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:19 vm01 bash[28152]: audit 2026-03-09T16:26:17.658323+0000 mgr.y (mgr.14520) 1206 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:19.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:19 vm01 bash[28152]: audit 2026-03-09T16:26:17.658323+0000 mgr.y (mgr.14520) 1206 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:19 vm01 bash[20728]: audit 2026-03-09T16:26:17.658323+0000 mgr.y (mgr.14520) 1206 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:19 vm01 bash[20728]: audit 2026-03-09T16:26:17.658323+0000 mgr.y (mgr.14520) 1206 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:20 vm09 bash[22983]: cluster 2026-03-09T16:26:19.119943+0000 mgr.y (mgr.14520) 1207 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:20 vm09 bash[22983]: cluster 2026-03-09T16:26:19.119943+0000 mgr.y (mgr.14520) 1207 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:20.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:20 vm01 bash[28152]: cluster 2026-03-09T16:26:19.119943+0000 mgr.y (mgr.14520) 1207 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:20.673 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:20 vm01 bash[28152]: cluster 2026-03-09T16:26:19.119943+0000 mgr.y (mgr.14520) 1207 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:20.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:20 vm01 bash[20728]: cluster 2026-03-09T16:26:19.119943+0000 mgr.y (mgr.14520) 1207 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:20.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:20 vm01 bash[20728]: cluster 2026-03-09T16:26:19.119943+0000 mgr.y (mgr.14520) 1207 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:22 vm09 bash[22983]: cluster 2026-03-09T16:26:21.120268+0000 mgr.y (mgr.14520) 1208 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:22 vm09 bash[22983]: cluster 2026-03-09T16:26:21.120268+0000 mgr.y (mgr.14520) 1208 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:22.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:22 vm01 bash[28152]: cluster 2026-03-09T16:26:21.120268+0000 mgr.y (mgr.14520) 1208 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:22.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:22 vm01 bash[28152]: cluster 2026-03-09T16:26:21.120268+0000 mgr.y (mgr.14520) 1208 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:22 vm01 bash[20728]: cluster 2026-03-09T16:26:21.120268+0000 mgr.y (mgr.14520) 1208 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:22 vm01 bash[20728]: cluster 2026-03-09T16:26:21.120268+0000 mgr.y (mgr.14520) 1208 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:23.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:26:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:26:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:26:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:24 vm09 bash[22983]: cluster 2026-03-09T16:26:23.120632+0000 mgr.y (mgr.14520) 1209 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:24 vm09 bash[22983]: cluster 2026-03-09T16:26:23.120632+0000 mgr.y (mgr.14520) 1209 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:24.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:24 vm01 bash[20728]: cluster 2026-03-09T16:26:23.120632+0000 mgr.y (mgr.14520) 1209 : cluster [DBG] pgmap v1626: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:24.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:24 vm01 bash[20728]: cluster 2026-03-09T16:26:23.120632+0000 mgr.y (mgr.14520) 1209 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:24.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:24 vm01 bash[28152]: cluster 2026-03-09T16:26:23.120632+0000 mgr.y (mgr.14520) 1209 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:24.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:24 vm01 bash[28152]: cluster 2026-03-09T16:26:23.120632+0000 mgr.y (mgr.14520) 1209 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:26 vm09 bash[22983]: cluster 2026-03-09T16:26:25.121294+0000 mgr.y (mgr.14520) 1210 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:26 vm09 bash[22983]: cluster 2026-03-09T16:26:25.121294+0000 mgr.y (mgr.14520) 1210 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:26.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:26 vm01 bash[28152]: cluster 2026-03-09T16:26:25.121294+0000 mgr.y (mgr.14520) 1210 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:26.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:26 vm01 bash[28152]: cluster 2026-03-09T16:26:25.121294+0000 mgr.y (mgr.14520) 1210 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:26 vm01 bash[20728]: cluster 2026-03-09T16:26:25.121294+0000 mgr.y (mgr.14520) 1210 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:26 vm01 bash[20728]: cluster 2026-03-09T16:26:25.121294+0000 mgr.y (mgr.14520) 1210 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:27 vm09 bash[22983]: cluster 2026-03-09T16:26:27.121602+0000 mgr.y (mgr.14520) 1211 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:27.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:27 vm09 bash[22983]: cluster 2026-03-09T16:26:27.121602+0000 mgr.y (mgr.14520) 1211 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:27.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:27 vm01 bash[28152]: cluster 2026-03-09T16:26:27.121602+0000 mgr.y (mgr.14520) 1211 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T16:26:27.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:27 vm01 bash[28152]: cluster 2026-03-09T16:26:27.121602+0000 mgr.y (mgr.14520) 1211 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:27.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:27 vm01 bash[20728]: cluster 2026-03-09T16:26:27.121602+0000 mgr.y (mgr.14520) 1211 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:27.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:27 vm01 bash[20728]: cluster 2026-03-09T16:26:27.121602+0000 mgr.y (mgr.14520) 1211 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:28.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:26:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:26:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:28 vm09 bash[22983]: audit 2026-03-09T16:26:27.664321+0000 mgr.y (mgr.14520) 1212 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:28 vm09 bash[22983]: audit 2026-03-09T16:26:27.664321+0000 mgr.y (mgr.14520) 1212 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:28 vm01 bash[28152]: audit 2026-03-09T16:26:27.664321+0000 mgr.y (mgr.14520) 1212 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:28 vm01 bash[28152]: audit 2026-03-09T16:26:27.664321+0000 mgr.y (mgr.14520) 1212 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:28.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:28 vm01 bash[20728]: audit 2026-03-09T16:26:27.664321+0000 mgr.y (mgr.14520) 1212 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:28.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:28 vm01 bash[20728]: audit 2026-03-09T16:26:27.664321+0000 mgr.y (mgr.14520) 1212 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:29 vm09 bash[22983]: cluster 2026-03-09T16:26:29.122552+0000 mgr.y (mgr.14520) 1213 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:29 vm09 bash[22983]: cluster 2026-03-09T16:26:29.122552+0000 mgr.y (mgr.14520) 1213 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:29 vm01 bash[28152]: cluster 2026-03-09T16:26:29.122552+0000 mgr.y (mgr.14520) 1213 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB 
used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:29 vm01 bash[28152]: cluster 2026-03-09T16:26:29.122552+0000 mgr.y (mgr.14520) 1213 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:29.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:29 vm01 bash[20728]: cluster 2026-03-09T16:26:29.122552+0000 mgr.y (mgr.14520) 1213 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:29 vm01 bash[20728]: cluster 2026-03-09T16:26:29.122552+0000 mgr.y (mgr.14520) 1213 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:30 vm09 bash[22983]: audit 2026-03-09T16:26:29.973733+0000 mon.a (mon.0) 3924 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:30 vm09 bash[22983]: audit 2026-03-09T16:26:29.973733+0000 mon.a (mon.0) 3924 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:30 vm01 bash[28152]: audit 2026-03-09T16:26:29.973733+0000 mon.a (mon.0) 3924 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:30 vm01 bash[28152]: audit 2026-03-09T16:26:29.973733+0000 mon.a (mon.0) 3924 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:30 vm01 bash[20728]: audit 2026-03-09T16:26:29.973733+0000 mon.a (mon.0) 3924 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:30 vm01 bash[20728]: audit 2026-03-09T16:26:29.973733+0000 mon.a (mon.0) 3924 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:31 vm09 bash[22983]: cluster 2026-03-09T16:26:31.122856+0000 mgr.y (mgr.14520) 1214 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:31 vm09 bash[22983]: cluster 2026-03-09T16:26:31.122856+0000 mgr.y (mgr.14520) 1214 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:31.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:31 vm01 bash[28152]: cluster 2026-03-09T16:26:31.122856+0000 mgr.y (mgr.14520) 1214 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T16:26:31.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:31 vm01 bash[28152]: cluster 2026-03-09T16:26:31.122856+0000 mgr.y (mgr.14520) 1214 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:31.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:31 vm01 bash[20728]: cluster 2026-03-09T16:26:31.122856+0000 mgr.y (mgr.14520) 1214 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:31.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:31 vm01 bash[20728]: cluster 2026-03-09T16:26:31.122856+0000 mgr.y (mgr.14520) 1214 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:33.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:26:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:26:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:26:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:34 vm09 bash[22983]: cluster 2026-03-09T16:26:33.123156+0000 mgr.y (mgr.14520) 1215 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:34 vm09 bash[22983]: cluster 2026-03-09T16:26:33.123156+0000 mgr.y (mgr.14520) 1215 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:34.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:34 vm01 bash[28152]: cluster 2026-03-09T16:26:33.123156+0000 mgr.y (mgr.14520) 1215 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:34.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:34 vm01 bash[28152]: cluster 2026-03-09T16:26:33.123156+0000 mgr.y (mgr.14520) 1215 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:34.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:34 vm01 bash[20728]: cluster 2026-03-09T16:26:33.123156+0000 mgr.y (mgr.14520) 1215 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:34.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:34 vm01 bash[20728]: cluster 2026-03-09T16:26:33.123156+0000 mgr.y (mgr.14520) 1215 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:26:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:36 vm09 bash[22983]: cluster 2026-03-09T16:26:35.123879+0000 mgr.y (mgr.14520) 1216 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:36 vm09 bash[22983]: cluster 2026-03-09T16:26:35.123879+0000 mgr.y (mgr.14520) 1216 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:36 vm09 bash[22983]: cluster 2026-03-09T16:26:35.191990+0000 mon.a (mon.0) 3925 : cluster [DBG] osdmap 
e738: 8 total, 8 up, 8 in 2026-03-09T16:26:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:36 vm09 bash[22983]: cluster 2026-03-09T16:26:35.191990+0000 mon.a (mon.0) 3925 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in 2026-03-09T16:26:36.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:36 vm01 bash[28152]: cluster 2026-03-09T16:26:35.123879+0000 mgr.y (mgr.14520) 1216 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:36.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:36 vm01 bash[28152]: cluster 2026-03-09T16:26:35.123879+0000 mgr.y (mgr.14520) 1216 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:36.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:36 vm01 bash[28152]: cluster 2026-03-09T16:26:35.191990+0000 mon.a (mon.0) 3925 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in 2026-03-09T16:26:36.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:36 vm01 bash[28152]: cluster 2026-03-09T16:26:35.191990+0000 mon.a (mon.0) 3925 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in 2026-03-09T16:26:36.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:36 vm01 bash[20728]: cluster 2026-03-09T16:26:35.123879+0000 mgr.y (mgr.14520) 1216 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:36.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:36 vm01 bash[20728]: cluster 2026-03-09T16:26:35.123879+0000 mgr.y (mgr.14520) 1216 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:36.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:36 vm01 bash[20728]: cluster 2026-03-09T16:26:35.191990+0000 mon.a (mon.0) 3925 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in 2026-03-09T16:26:36.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:36 vm01 bash[20728]: cluster 2026-03-09T16:26:35.191990+0000 mon.a (mon.0) 3925 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in 2026-03-09T16:26:37.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:37 vm09 bash[22983]: cluster 2026-03-09T16:26:36.212353+0000 mon.a (mon.0) 3926 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in 2026-03-09T16:26:37.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:37 vm09 bash[22983]: cluster 2026-03-09T16:26:36.212353+0000 mon.a (mon.0) 3926 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in 2026-03-09T16:26:37.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:37 vm01 bash[28152]: cluster 2026-03-09T16:26:36.212353+0000 mon.a (mon.0) 3926 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in 2026-03-09T16:26:37.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:37 vm01 bash[28152]: cluster 2026-03-09T16:26:36.212353+0000 mon.a (mon.0) 3926 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in 2026-03-09T16:26:37.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:37 vm01 bash[20728]: cluster 2026-03-09T16:26:36.212353+0000 mon.a (mon.0) 3926 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in 2026-03-09T16:26:37.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:37 vm01 bash[20728]: cluster 2026-03-09T16:26:36.212353+0000 mon.a (mon.0) 3926 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in 2026-03-09T16:26:38.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:26:37 vm09 bash[48403]: debug 
there is no tcmu-runner data available 2026-03-09T16:26:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:38 vm09 bash[22983]: cluster 2026-03-09T16:26:37.124193+0000 mgr.y (mgr.14520) 1217 : cluster [DBG] pgmap v1635: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:26:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:38 vm09 bash[22983]: cluster 2026-03-09T16:26:37.124193+0000 mgr.y (mgr.14520) 1217 : cluster [DBG] pgmap v1635: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:26:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:38 vm09 bash[22983]: cluster 2026-03-09T16:26:37.218622+0000 mon.a (mon.0) 3927 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-09T16:26:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:38 vm09 bash[22983]: cluster 2026-03-09T16:26:37.218622+0000 mon.a (mon.0) 3927 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-09T16:26:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:38 vm09 bash[22983]: cluster 2026-03-09T16:26:38.215705+0000 mon.a (mon.0) 3928 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T16:26:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:38 vm09 bash[22983]: cluster 2026-03-09T16:26:38.215705+0000 mon.a (mon.0) 3928 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T16:26:38.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:38 vm01 bash[28152]: cluster 2026-03-09T16:26:37.124193+0000 mgr.y (mgr.14520) 1217 : cluster [DBG] pgmap v1635: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:26:38.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:38 vm01 bash[28152]: cluster 2026-03-09T16:26:37.124193+0000 mgr.y (mgr.14520) 1217 : cluster [DBG] pgmap v1635: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:26:38.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:38 vm01 bash[28152]: cluster 2026-03-09T16:26:37.218622+0000 mon.a (mon.0) 3927 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-09T16:26:38.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:38 vm01 bash[28152]: cluster 2026-03-09T16:26:37.218622+0000 mon.a (mon.0) 3927 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-09T16:26:38.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:38 vm01 bash[28152]: cluster 2026-03-09T16:26:38.215705+0000 mon.a (mon.0) 3928 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T16:26:38.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:38 vm01 bash[28152]: cluster 2026-03-09T16:26:38.215705+0000 mon.a (mon.0) 3928 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T16:26:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:38 vm01 bash[20728]: cluster 2026-03-09T16:26:37.124193+0000 mgr.y (mgr.14520) 1217 : cluster [DBG] pgmap v1635: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:26:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:38 vm01 bash[20728]: cluster 2026-03-09T16:26:37.124193+0000 mgr.y (mgr.14520) 1217 : cluster [DBG] pgmap v1635: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:26:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
16:26:38 vm01 bash[20728]: cluster 2026-03-09T16:26:37.218622+0000 mon.a (mon.0) 3927 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-09T16:26:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:38 vm01 bash[20728]: cluster 2026-03-09T16:26:37.218622+0000 mon.a (mon.0) 3927 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-09T16:26:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:38 vm01 bash[20728]: cluster 2026-03-09T16:26:38.215705+0000 mon.a (mon.0) 3928 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T16:26:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:38 vm01 bash[20728]: cluster 2026-03-09T16:26:38.215705+0000 mon.a (mon.0) 3928 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-09T16:26:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:39 vm09 bash[22983]: audit 2026-03-09T16:26:37.665862+0000 mgr.y (mgr.14520) 1218 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:39 vm09 bash[22983]: audit 2026-03-09T16:26:37.665862+0000 mgr.y (mgr.14520) 1218 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:39 vm09 bash[22983]: cluster 2026-03-09T16:26:39.242209+0000 mon.a (mon.0) 3929 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T16:26:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:39 vm09 bash[22983]: cluster 2026-03-09T16:26:39.242209+0000 mon.a (mon.0) 3929 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T16:26:39.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:39 vm01 bash[28152]: audit 2026-03-09T16:26:37.665862+0000 mgr.y (mgr.14520) 1218 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:39.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:39 vm01 bash[28152]: audit 2026-03-09T16:26:37.665862+0000 mgr.y (mgr.14520) 1218 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:39.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:39 vm01 bash[28152]: cluster 2026-03-09T16:26:39.242209+0000 mon.a (mon.0) 3929 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T16:26:39.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:39 vm01 bash[28152]: cluster 2026-03-09T16:26:39.242209+0000 mon.a (mon.0) 3929 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T16:26:39.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:39 vm01 bash[20728]: audit 2026-03-09T16:26:37.665862+0000 mgr.y (mgr.14520) 1218 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:39 vm01 bash[20728]: audit 2026-03-09T16:26:37.665862+0000 mgr.y (mgr.14520) 1218 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:39 vm01 bash[20728]: cluster 2026-03-09T16:26:39.242209+0000 mon.a (mon.0) 3929 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T16:26:39.673 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:39 vm01 bash[20728]: cluster 2026-03-09T16:26:39.242209+0000 mon.a (mon.0) 3929 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-09T16:26:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:40 vm09 bash[22983]: cluster 2026-03-09T16:26:39.124842+0000 mgr.y (mgr.14520) 1219 : cluster [DBG] pgmap v1638: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:40 vm09 bash[22983]: cluster 2026-03-09T16:26:39.124842+0000 mgr.y (mgr.14520) 1219 : cluster [DBG] pgmap v1638: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:40 vm09 bash[22983]: cluster 2026-03-09T16:26:39.258705+0000 mon.a (mon.0) 3930 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:40 vm09 bash[22983]: cluster 2026-03-09T16:26:39.258705+0000 mon.a (mon.0) 3930 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:40 vm09 bash[22983]: cluster 2026-03-09T16:26:40.255210+0000 mon.a (mon.0) 3931 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T16:26:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:40 vm09 bash[22983]: cluster 2026-03-09T16:26:40.255210+0000 mon.a (mon.0) 3931 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T16:26:40.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:40 vm01 bash[28152]: cluster 2026-03-09T16:26:39.124842+0000 mgr.y (mgr.14520) 1219 : cluster [DBG] pgmap v1638: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:40 vm01 bash[28152]: cluster 2026-03-09T16:26:39.124842+0000 mgr.y (mgr.14520) 1219 : cluster [DBG] pgmap v1638: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:40 vm01 bash[28152]: cluster 2026-03-09T16:26:39.258705+0000 mon.a (mon.0) 3930 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:40 vm01 bash[28152]: cluster 2026-03-09T16:26:39.258705+0000 mon.a (mon.0) 3930 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:40 vm01 bash[28152]: cluster 2026-03-09T16:26:40.255210+0000 mon.a (mon.0) 3931 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T16:26:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:40 vm01 bash[28152]: cluster 2026-03-09T16:26:40.255210+0000 mon.a (mon.0) 3931 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T16:26:40.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:40 vm01 bash[20728]: cluster 2026-03-09T16:26:39.124842+0000 mgr.y (mgr.14520) 1219 : cluster [DBG] pgmap v1638: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:40.673 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:40 vm01 bash[20728]: cluster 2026-03-09T16:26:39.124842+0000 mgr.y (mgr.14520) 1219 : cluster [DBG] pgmap v1638: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:40.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:40 vm01 bash[20728]: cluster 2026-03-09T16:26:39.258705+0000 mon.a (mon.0) 3930 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:40.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:40 vm01 bash[20728]: cluster 2026-03-09T16:26:39.258705+0000 mon.a (mon.0) 3930 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:40.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:40 vm01 bash[20728]: cluster 2026-03-09T16:26:40.255210+0000 mon.a (mon.0) 3931 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T16:26:40.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:40 vm01 bash[20728]: cluster 2026-03-09T16:26:40.255210+0000 mon.a (mon.0) 3931 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-09T16:26:42.290 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: Running main() from gmock_main.cc 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: [==========] Running 2 tests from 1 test suite. 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: [----------] Global test environment set-up. 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotify 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: handle_notify cookie 94101106178864 notify_id 3165390897158 notifier_gid 24706 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotify (1800445 ms) 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotifyTimeout 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: Trying... 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: handle_notify cookie 94101119058096 notify_id 3178275799044 notifier_gid 45664 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: Waiting for 3.000000000s 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: Timed out. 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: Flushing... 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: Flushed... 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotifyTimeout (7093 ms) 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify (1807538 ms total) 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: [----------] Global test environment tear-down 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: [==========] 2 tests from 1 test suite ran. 
(1807538 ms total) 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stdout: watch_notify: [ PASSED ] 2 tests. 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59946 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59946 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60264 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60264 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60501 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60501 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60357 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60357 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.291 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60550 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60550 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60188 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60188 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59833 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59833 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60585 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60585 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60079 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60079 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59597 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59597 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59675 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59675 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60022 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60022 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=59721 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 59721 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60118 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60118 
2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60150 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60150 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60475 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60475 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ for t in "${!pids[@]}" 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=60637 2026-03-09T16:26:42.292 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 60637 2026-03-09T16:26:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:42 vm09 bash[22983]: cluster 2026-03-09T16:26:41.125209+0000 mgr.y (mgr.14520) 1220 : cluster [DBG] pgmap v1641: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:42 vm09 bash[22983]: cluster 2026-03-09T16:26:41.125209+0000 mgr.y (mgr.14520) 1220 : cluster [DBG] pgmap v1641: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:42 vm09 bash[22983]: cluster 2026-03-09T16:26:41.276406+0000 mon.a (mon.0) 3932 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-09T16:26:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:42 vm09 bash[22983]: cluster 2026-03-09T16:26:41.276406+0000 mon.a (mon.0) 3932 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-09T16:26:42.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:42 vm01 bash[28152]: cluster 2026-03-09T16:26:41.125209+0000 mgr.y (mgr.14520) 1220 : cluster [DBG] pgmap v1641: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:42.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:42 vm01 bash[28152]: cluster 2026-03-09T16:26:41.125209+0000 mgr.y (mgr.14520) 1220 : cluster [DBG] pgmap v1641: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:42.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:42 vm01 bash[28152]: cluster 2026-03-09T16:26:41.276406+0000 mon.a (mon.0) 3932 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-09T16:26:42.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:42 vm01 bash[28152]: cluster 2026-03-09T16:26:41.276406+0000 mon.a (mon.0) 3932 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-09T16:26:42.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:42 vm01 bash[20728]: cluster 2026-03-09T16:26:41.125209+0000 mgr.y (mgr.14520) 1220 : cluster [DBG] pgmap v1641: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:42.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:42 vm01 bash[20728]: cluster 2026-03-09T16:26:41.125209+0000 mgr.y (mgr.14520) 1220 : cluster [DBG] pgmap v1641: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:42.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:42 vm01 bash[20728]: cluster 2026-03-09T16:26:41.276406+0000 mon.a (mon.0) 3932 : cluster [DBG] osdmap e744: 8 
total, 8 up, 8 in 2026-03-09T16:26:42.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:42 vm01 bash[20728]: cluster 2026-03-09T16:26:41.276406+0000 mon.a (mon.0) 3932 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-09T16:26:43.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:26:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:26:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:26:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:43 vm09 bash[22983]: cluster 2026-03-09T16:26:42.294997+0000 mon.a (mon.0) 3933 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T16:26:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:43 vm09 bash[22983]: cluster 2026-03-09T16:26:42.294997+0000 mon.a (mon.0) 3933 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T16:26:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:43 vm09 bash[22983]: cluster 2026-03-09T16:26:43.125597+0000 mgr.y (mgr.14520) 1221 : cluster [DBG] pgmap v1644: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:43 vm09 bash[22983]: cluster 2026-03-09T16:26:43.125597+0000 mgr.y (mgr.14520) 1221 : cluster [DBG] pgmap v1644: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:43 vm09 bash[22983]: cluster 2026-03-09T16:26:43.299585+0000 mon.a (mon.0) 3934 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T16:26:43.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:43 vm09 bash[22983]: cluster 2026-03-09T16:26:43.299585+0000 mon.a (mon.0) 3934 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T16:26:43.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:43 vm01 bash[28152]: cluster 2026-03-09T16:26:42.294997+0000 mon.a (mon.0) 3933 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T16:26:43.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:43 vm01 bash[28152]: cluster 2026-03-09T16:26:42.294997+0000 mon.a (mon.0) 3933 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T16:26:43.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:43 vm01 bash[28152]: cluster 2026-03-09T16:26:43.125597+0000 mgr.y (mgr.14520) 1221 : cluster [DBG] pgmap v1644: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:43.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:43 vm01 bash[28152]: cluster 2026-03-09T16:26:43.125597+0000 mgr.y (mgr.14520) 1221 : cluster [DBG] pgmap v1644: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:43.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:43 vm01 bash[28152]: cluster 2026-03-09T16:26:43.299585+0000 mon.a (mon.0) 3934 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T16:26:43.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:43 vm01 bash[28152]: cluster 2026-03-09T16:26:43.299585+0000 mon.a (mon.0) 3934 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T16:26:43.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:43 vm01 bash[20728]: cluster 2026-03-09T16:26:42.294997+0000 mon.a (mon.0) 3933 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T16:26:43.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:43 vm01 bash[20728]: cluster 2026-03-09T16:26:42.294997+0000 mon.a (mon.0) 3933 : 
cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-09T16:26:43.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:43 vm01 bash[20728]: cluster 2026-03-09T16:26:43.125597+0000 mgr.y (mgr.14520) 1221 : cluster [DBG] pgmap v1644: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:43.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:43 vm01 bash[20728]: cluster 2026-03-09T16:26:43.125597+0000 mgr.y (mgr.14520) 1221 : cluster [DBG] pgmap v1644: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:43.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:43 vm01 bash[20728]: cluster 2026-03-09T16:26:43.299585+0000 mon.a (mon.0) 3934 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T16:26:43.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:43 vm01 bash[20728]: cluster 2026-03-09T16:26:43.299585+0000 mon.a (mon.0) 3934 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-09T16:26:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:45 vm09 bash[22983]: cluster 2026-03-09T16:26:44.302424+0000 mon.a (mon.0) 3935 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T16:26:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:45 vm09 bash[22983]: cluster 2026-03-09T16:26:44.302424+0000 mon.a (mon.0) 3935 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T16:26:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:45 vm09 bash[22983]: audit 2026-03-09T16:26:44.980551+0000 mon.a (mon.0) 3936 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:45 vm09 bash[22983]: audit 2026-03-09T16:26:44.980551+0000 mon.a (mon.0) 3936 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:45 vm09 bash[22983]: cluster 2026-03-09T16:26:45.125993+0000 mgr.y (mgr.14520) 1222 : cluster [DBG] pgmap v1647: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:26:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:45 vm09 bash[22983]: cluster 2026-03-09T16:26:45.125993+0000 mgr.y (mgr.14520) 1222 : cluster [DBG] pgmap v1647: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:26:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:45 vm01 bash[28152]: cluster 2026-03-09T16:26:44.302424+0000 mon.a (mon.0) 3935 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T16:26:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:45 vm01 bash[28152]: cluster 2026-03-09T16:26:44.302424+0000 mon.a (mon.0) 3935 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T16:26:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:45 vm01 bash[28152]: audit 2026-03-09T16:26:44.980551+0000 mon.a (mon.0) 3936 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:45 vm01 bash[28152]: audit 2026-03-09T16:26:44.980551+0000 mon.a (mon.0) 3936 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:45 vm01 bash[28152]: cluster 2026-03-09T16:26:45.125993+0000 mgr.y (mgr.14520) 1222 : cluster [DBG] pgmap v1647: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:26:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:45 vm01 bash[28152]: cluster 2026-03-09T16:26:45.125993+0000 mgr.y (mgr.14520) 1222 : cluster [DBG] pgmap v1647: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:26:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:45 vm01 bash[20728]: cluster 2026-03-09T16:26:44.302424+0000 mon.a (mon.0) 3935 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T16:26:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:45 vm01 bash[20728]: cluster 2026-03-09T16:26:44.302424+0000 mon.a (mon.0) 3935 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-09T16:26:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:45 vm01 bash[20728]: audit 2026-03-09T16:26:44.980551+0000 mon.a (mon.0) 3936 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:45 vm01 bash[20728]: audit 2026-03-09T16:26:44.980551+0000 mon.a (mon.0) 3936 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:26:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:45 vm01 bash[20728]: cluster 2026-03-09T16:26:45.125993+0000 mgr.y (mgr.14520) 1222 : cluster [DBG] pgmap v1647: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:26:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:45 vm01 bash[20728]: cluster 2026-03-09T16:26:45.125993+0000 mgr.y (mgr.14520) 1222 : cluster [DBG] pgmap v1647: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T16:26:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:46 vm09 bash[22983]: cluster 2026-03-09T16:26:45.298350+0000 mon.a (mon.0) 3937 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:46 vm09 bash[22983]: cluster 2026-03-09T16:26:45.298350+0000 mon.a (mon.0) 3937 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:46 vm09 bash[22983]: cluster 2026-03-09T16:26:45.322335+0000 mon.a (mon.0) 3938 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T16:26:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:46 vm09 bash[22983]: cluster 2026-03-09T16:26:45.322335+0000 mon.a (mon.0) 3938 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T16:26:46.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:46 vm01 bash[28152]: cluster 2026-03-09T16:26:45.298350+0000 mon.a (mon.0) 3937 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:46.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:46 vm01 bash[28152]: cluster 2026-03-09T16:26:45.298350+0000 mon.a (mon.0) 3937 : 
cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:46.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:46 vm01 bash[28152]: cluster 2026-03-09T16:26:45.322335+0000 mon.a (mon.0) 3938 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T16:26:46.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:46 vm01 bash[28152]: cluster 2026-03-09T16:26:45.322335+0000 mon.a (mon.0) 3938 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T16:26:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:46 vm01 bash[20728]: cluster 2026-03-09T16:26:45.298350+0000 mon.a (mon.0) 3937 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:46 vm01 bash[20728]: cluster 2026-03-09T16:26:45.298350+0000 mon.a (mon.0) 3937 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:46 vm01 bash[20728]: cluster 2026-03-09T16:26:45.322335+0000 mon.a (mon.0) 3938 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T16:26:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:46 vm01 bash[20728]: cluster 2026-03-09T16:26:45.322335+0000 mon.a (mon.0) 3938 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-09T16:26:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:47 vm09 bash[22983]: cluster 2026-03-09T16:26:46.342789+0000 mon.a (mon.0) 3939 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T16:26:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:47 vm09 bash[22983]: cluster 2026-03-09T16:26:46.342789+0000 mon.a (mon.0) 3939 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T16:26:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:47 vm09 bash[22983]: cluster 2026-03-09T16:26:47.126375+0000 mgr.y (mgr.14520) 1223 : cluster [DBG] pgmap v1650: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:47.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:47 vm09 bash[22983]: cluster 2026-03-09T16:26:47.126375+0000 mgr.y (mgr.14520) 1223 : cluster [DBG] pgmap v1650: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:47.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:47 vm01 bash[28152]: cluster 2026-03-09T16:26:46.342789+0000 mon.a (mon.0) 3939 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T16:26:47.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:47 vm01 bash[28152]: cluster 2026-03-09T16:26:46.342789+0000 mon.a (mon.0) 3939 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T16:26:47.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:47 vm01 bash[28152]: cluster 2026-03-09T16:26:47.126375+0000 mgr.y (mgr.14520) 1223 : cluster [DBG] pgmap v1650: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:47.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:47 vm01 bash[28152]: cluster 2026-03-09T16:26:47.126375+0000 mgr.y (mgr.14520) 1223 : cluster [DBG] pgmap v1650: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:47.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:47 
vm01 bash[20728]: cluster 2026-03-09T16:26:46.342789+0000 mon.a (mon.0) 3939 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T16:26:47.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:47 vm01 bash[20728]: cluster 2026-03-09T16:26:46.342789+0000 mon.a (mon.0) 3939 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-09T16:26:47.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:47 vm01 bash[20728]: cluster 2026-03-09T16:26:47.126375+0000 mgr.y (mgr.14520) 1223 : cluster [DBG] pgmap v1650: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:47.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:47 vm01 bash[20728]: cluster 2026-03-09T16:26:47.126375+0000 mgr.y (mgr.14520) 1223 : cluster [DBG] pgmap v1650: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:48.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:26:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:26:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:48 vm09 bash[22983]: cluster 2026-03-09T16:26:47.347354+0000 mon.a (mon.0) 3940 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-09T16:26:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:48 vm09 bash[22983]: cluster 2026-03-09T16:26:47.347354+0000 mon.a (mon.0) 3940 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-09T16:26:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:48 vm09 bash[22983]: audit 2026-03-09T16:26:47.674355+0000 mgr.y (mgr.14520) 1224 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:48 vm09 bash[22983]: audit 2026-03-09T16:26:47.674355+0000 mgr.y (mgr.14520) 1224 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:48 vm01 bash[28152]: cluster 2026-03-09T16:26:47.347354+0000 mon.a (mon.0) 3940 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-09T16:26:48.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:48 vm01 bash[28152]: cluster 2026-03-09T16:26:47.347354+0000 mon.a (mon.0) 3940 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-09T16:26:48.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:48 vm01 bash[28152]: audit 2026-03-09T16:26:47.674355+0000 mgr.y (mgr.14520) 1224 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:48.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:48 vm01 bash[28152]: audit 2026-03-09T16:26:47.674355+0000 mgr.y (mgr.14520) 1224 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:48 vm01 bash[20728]: cluster 2026-03-09T16:26:47.347354+0000 mon.a (mon.0) 3940 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-09T16:26:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:48 vm01 bash[20728]: cluster 2026-03-09T16:26:47.347354+0000 mon.a (mon.0) 3940 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-09T16:26:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:48 vm01 
bash[20728]: audit 2026-03-09T16:26:47.674355+0000 mgr.y (mgr.14520) 1224 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:48 vm01 bash[20728]: audit 2026-03-09T16:26:47.674355+0000 mgr.y (mgr.14520) 1224 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:49 vm09 bash[22983]: cluster 2026-03-09T16:26:48.359283+0000 mon.a (mon.0) 3941 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T16:26:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:49 vm09 bash[22983]: cluster 2026-03-09T16:26:48.359283+0000 mon.a (mon.0) 3941 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T16:26:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:49 vm09 bash[22983]: cluster 2026-03-09T16:26:49.127080+0000 mgr.y (mgr.14520) 1225 : cluster [DBG] pgmap v1653: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:49 vm09 bash[22983]: cluster 2026-03-09T16:26:49.127080+0000 mgr.y (mgr.14520) 1225 : cluster [DBG] pgmap v1653: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:49 vm01 bash[28152]: cluster 2026-03-09T16:26:48.359283+0000 mon.a (mon.0) 3941 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T16:26:49.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:49 vm01 bash[28152]: cluster 2026-03-09T16:26:48.359283+0000 mon.a (mon.0) 3941 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T16:26:49.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:49 vm01 bash[28152]: cluster 2026-03-09T16:26:49.127080+0000 mgr.y (mgr.14520) 1225 : cluster [DBG] pgmap v1653: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:49.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:49 vm01 bash[28152]: cluster 2026-03-09T16:26:49.127080+0000 mgr.y (mgr.14520) 1225 : cluster [DBG] pgmap v1653: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:49 vm01 bash[20728]: cluster 2026-03-09T16:26:48.359283+0000 mon.a (mon.0) 3941 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T16:26:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:49 vm01 bash[20728]: cluster 2026-03-09T16:26:48.359283+0000 mon.a (mon.0) 3941 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-09T16:26:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:49 vm01 bash[20728]: cluster 2026-03-09T16:26:49.127080+0000 mgr.y (mgr.14520) 1225 : cluster [DBG] pgmap v1653: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:49 vm01 bash[20728]: cluster 2026-03-09T16:26:49.127080+0000 mgr.y (mgr.14520) 1225 : cluster [DBG] pgmap v1653: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:50 vm01 bash[28152]: cluster 2026-03-09T16:26:49.404304+0000 mon.a (mon.0) 3942 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T16:26:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:50 vm01 bash[28152]: cluster 2026-03-09T16:26:49.404304+0000 mon.a (mon.0) 3942 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T16:26:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:50 vm01 bash[20728]: cluster 2026-03-09T16:26:49.404304+0000 mon.a (mon.0) 3942 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T16:26:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:50 vm01 bash[20728]: cluster 2026-03-09T16:26:49.404304+0000 mon.a (mon.0) 3942 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T16:26:50.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:50 vm09 bash[22983]: cluster 2026-03-09T16:26:49.404304+0000 mon.a (mon.0) 3942 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T16:26:50.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:50 vm09 bash[22983]: cluster 2026-03-09T16:26:49.404304+0000 mon.a (mon.0) 3942 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-09T16:26:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:51 vm01 bash[28152]: cluster 2026-03-09T16:26:50.389463+0000 mon.a (mon.0) 3943 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T16:26:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:51 vm01 bash[28152]: cluster 2026-03-09T16:26:50.389463+0000 mon.a (mon.0) 3943 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T16:26:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:51 vm01 bash[28152]: cluster 2026-03-09T16:26:51.127481+0000 mgr.y (mgr.14520) 1226 : cluster [DBG] pgmap v1656: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:51 vm01 bash[28152]: cluster 2026-03-09T16:26:51.127481+0000 mgr.y (mgr.14520) 1226 : cluster [DBG] pgmap v1656: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:51.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:51 vm01 bash[20728]: cluster 2026-03-09T16:26:50.389463+0000 mon.a (mon.0) 3943 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T16:26:51.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:51 vm01 bash[20728]: cluster 2026-03-09T16:26:50.389463+0000 mon.a (mon.0) 3943 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T16:26:51.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:51 vm01 bash[20728]: cluster 2026-03-09T16:26:51.127481+0000 mgr.y (mgr.14520) 1226 : cluster [DBG] pgmap v1656: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:51.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:51 vm01 bash[20728]: cluster 2026-03-09T16:26:51.127481+0000 mgr.y (mgr.14520) 1226 : cluster [DBG] pgmap v1656: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:51 vm09 bash[22983]: cluster 2026-03-09T16:26:50.389463+0000 mon.a (mon.0) 3943 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T16:26:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:51 vm09 bash[22983]: cluster 
2026-03-09T16:26:50.389463+0000 mon.a (mon.0) 3943 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-09T16:26:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:51 vm09 bash[22983]: cluster 2026-03-09T16:26:51.127481+0000 mgr.y (mgr.14520) 1226 : cluster [DBG] pgmap v1656: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:51.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:51 vm09 bash[22983]: cluster 2026-03-09T16:26:51.127481+0000 mgr.y (mgr.14520) 1226 : cluster [DBG] pgmap v1656: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:52 vm01 bash[28152]: cluster 2026-03-09T16:26:51.396564+0000 mon.a (mon.0) 3944 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:52 vm01 bash[28152]: cluster 2026-03-09T16:26:51.396564+0000 mon.a (mon.0) 3944 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:52 vm01 bash[28152]: cluster 2026-03-09T16:26:51.408649+0000 mon.a (mon.0) 3945 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T16:26:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:52 vm01 bash[28152]: cluster 2026-03-09T16:26:51.408649+0000 mon.a (mon.0) 3945 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T16:26:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:52 vm01 bash[20728]: cluster 2026-03-09T16:26:51.396564+0000 mon.a (mon.0) 3944 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:52 vm01 bash[20728]: cluster 2026-03-09T16:26:51.396564+0000 mon.a (mon.0) 3944 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:52 vm01 bash[20728]: cluster 2026-03-09T16:26:51.408649+0000 mon.a (mon.0) 3945 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T16:26:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:52 vm01 bash[20728]: cluster 2026-03-09T16:26:51.408649+0000 mon.a (mon.0) 3945 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T16:26:52.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:52 vm09 bash[22983]: cluster 2026-03-09T16:26:51.396564+0000 mon.a (mon.0) 3944 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:52.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:52 vm09 bash[22983]: cluster 2026-03-09T16:26:51.396564+0000 mon.a (mon.0) 3944 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:52.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:52 vm09 bash[22983]: cluster 2026-03-09T16:26:51.408649+0000 mon.a (mon.0) 3945 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T16:26:52.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:52 vm09 bash[22983]: cluster 2026-03-09T16:26:51.408649+0000 mon.a (mon.0) 3945 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-09T16:26:53.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:26:52 
vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:26:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:26:53.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:53 vm01 bash[28152]: cluster 2026-03-09T16:26:52.423965+0000 mon.a (mon.0) 3946 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T16:26:53.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:53 vm01 bash[28152]: cluster 2026-03-09T16:26:52.423965+0000 mon.a (mon.0) 3946 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T16:26:53.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:53 vm01 bash[28152]: cluster 2026-03-09T16:26:53.127822+0000 mgr.y (mgr.14520) 1227 : cluster [DBG] pgmap v1659: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:53.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:53 vm01 bash[28152]: cluster 2026-03-09T16:26:53.127822+0000 mgr.y (mgr.14520) 1227 : cluster [DBG] pgmap v1659: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:53.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:53 vm01 bash[20728]: cluster 2026-03-09T16:26:52.423965+0000 mon.a (mon.0) 3946 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T16:26:53.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:53 vm01 bash[20728]: cluster 2026-03-09T16:26:52.423965+0000 mon.a (mon.0) 3946 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T16:26:53.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:53 vm01 bash[20728]: cluster 2026-03-09T16:26:53.127822+0000 mgr.y (mgr.14520) 1227 : cluster [DBG] pgmap v1659: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:53.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:53 vm01 bash[20728]: cluster 2026-03-09T16:26:53.127822+0000 mgr.y (mgr.14520) 1227 : cluster [DBG] pgmap v1659: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:53 vm09 bash[22983]: cluster 2026-03-09T16:26:52.423965+0000 mon.a (mon.0) 3946 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T16:26:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:53 vm09 bash[22983]: cluster 2026-03-09T16:26:52.423965+0000 mon.a (mon.0) 3946 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-09T16:26:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:53 vm09 bash[22983]: cluster 2026-03-09T16:26:53.127822+0000 mgr.y (mgr.14520) 1227 : cluster [DBG] pgmap v1659: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:53.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:53 vm09 bash[22983]: cluster 2026-03-09T16:26:53.127822+0000 mgr.y (mgr.14520) 1227 : cluster [DBG] pgmap v1659: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: Running main() from gmock_main.cc 2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [==========] Running 7 tests from 1 test suite. 2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [----------] Global test environment set-up. 
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertExists
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertExists (1800411 ms)
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertVersion
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertVersion (3023 ms)
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Xattrs
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ OK ] NeoRadosWriteOps.Xattrs (3061 ms)
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Write
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ OK ] NeoRadosWriteOps.Write (3032 ms)
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Exec
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ OK ] NeoRadosWriteOps.Exec (3043 ms)
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ RUN ] NeoRadosWriteOps.WriteSame
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ OK ] NeoRadosWriteOps.WriteSame (3037 ms)
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ RUN ] NeoRadosWriteOps.CmpExt
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ OK ] NeoRadosWriteOps.CmpExt (4092 ms)
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps (1819700 ms total)
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations:
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [----------] Global test environment tear-down
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [==========] 7 tests from 1 test suite ran. (1819700 ms total)
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stdout: write_operations: [ PASSED ] 7 tests.
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stderr:+ exit 0
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stderr:+ cleanup
2026-03-09T16:26:54.479 INFO:tasks.workunit.client.0.vm01.stderr:+ pkill -P 59591
2026-03-09T16:26:54.484 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-09T16:26:54.484 INFO:teuthology.orchestra.run:Running command with timeout 3600
2026-03-09T16:26:54.484 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-09T16:26:54.494 INFO:tasks.workunit:Running workunits matching rados/test_pool_quota.sh on client.0...
2026-03-09T16:26:54.494 INFO:tasks.workunit:Running workunit rados/test_pool_quota.sh...
2026-03-09T16:26:54.495 DEBUG:teuthology.orchestra.run.vm01:workunit test rados/test_pool_quota.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_pool_quota.sh
2026-03-09T16:26:54.543 INFO:tasks.workunit.client.0.vm01.stderr:+ uuidgen
2026-03-09T16:26:54.545 INFO:tasks.workunit.client.0.vm01.stderr:+ p=596e5e1f-ecde-406d-b4d0-afd8854e4a60
2026-03-09T16:26:54.545 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool create 596e5e1f-ecde-406d-b4d0-afd8854e4a60 12
2026-03-09T16:26:54.687 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 -- 192.168.123.101:0/4282928710 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8514106850 msgr2=0x7f8514075420 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T16:26:54.687 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 --2- 192.168.123.101:0/4282928710 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8514106850 0x7f8514075420 secure :-1 s=READY pgs=3067 cs=0 l=1 rev1=1 crypto rx=0x7f8504009a30 tx=0x7f850401c990 comp rx=0 tx=0).stop
2026-03-09T16:26:54.687 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 -- 192.168.123.101:0/4282928710 shutdown_connections
2026-03-09T16:26:54.687 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 --2- 192.168.123.101:0/4282928710 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8514113750 0x7f8514115b80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T16:26:54.687 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 --2- 192.168.123.101:0/4282928710 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8514075960 0x7f8514075da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T16:26:54.687 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 --2- 192.168.123.101:0/4282928710 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8514106850 0x7f8514075420 unknown :-1 s=CLOSED pgs=3067 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 -- 192.168.123.101:0/4282928710 >> 192.168.123.101:0/4282928710 conn(0x7f85140fe6a0 msgr2=0x7f8514100ac0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 -- 192.168.123.101:0/4282928710 shutdown_connections
2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 -- 192.168.123.101:0/4282928710 wait complete.
2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 Processor -- start 2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 -- start start 2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8514075960 0x7f85141a41b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8514106850 0x7f85141a46f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8514113750 0x7f85141a8a80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f851411bba0 con 0x7f8514106850 2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f851411ba20 con 0x7f8514113750 2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851c5db640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f851411bd20 con 0x7f8514075960 2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f8519b4f640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8514106850 0x7f85141a46f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f851a350640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8514075960 0x7f85141a41b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:26:54.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f8519b4f640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8514106850 0x7f85141a46f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:37114/0 (socket says 192.168.123.101:37114) 2026-03-09T16:26:54.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.691+0000 7f8519b4f640 1 -- 192.168.123.101:0/3272396138 learned_addr learned my addr 192.168.123.101:0/3272396138 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:26:54.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f851ab51640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8514113750 0x7f85141a8a80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:26:54.689 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f8519b4f640 1 -- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8514075960 msgr2=0x7f85141a41b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:54.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f8519b4f640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8514075960 0x7f85141a41b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:54.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f8519b4f640 1 -- 192.168.123.101:0/3272396138 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8514113750 msgr2=0x7f85141a8a80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:54.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f8519b4f640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8514113750 0x7f85141a8a80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:54.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f8519b4f640 1 -- 192.168.123.101:0/3272396138 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f85141a9160 con 0x7f8514106850 2026-03-09T16:26:54.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f8519b4f640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8514106850 0x7f85141a46f0 secure :-1 s=READY pgs=3068 cs=0 l=1 rev1=1 crypto rx=0x7f850800cce0 tx=0x7f8508007590 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:26:54.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f851ab51640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8514113750 0x7f85141a8a80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T16:26:54.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f851a350640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8514075960 0x7f85141a41b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T16:26:54.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f85037fe640 1 -- 192.168.123.101:0/3272396138 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8508013070 con 0x7f8514106850 2026-03-09T16:26:54.690 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f85037fe640 1 -- 192.168.123.101:0/3272396138 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f8508004510 con 0x7f8514106850 2026-03-09T16:26:54.691 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f85037fe640 1 -- 192.168.123.101:0/3272396138 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f850800f450 con 0x7f8514106850 2026-03-09T16:26:54.691 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f851c5db640 1 -- 192.168.123.101:0/3272396138 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f85141a9450 con 0x7f8514106850 2026-03-09T16:26:54.691 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f851c5db640 1 -- 192.168.123.101:0/3272396138 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f85141b0d30 con 0x7f8514106850 2026-03-09T16:26:54.695 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f851c5db640 1 -- 192.168.123.101:0/3272396138 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f84e4005190 con 0x7f8514106850 2026-03-09T16:26:54.695 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.695+0000 7f85037fe640 1 -- 192.168.123.101:0/3272396138 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f8508020050 con 0x7f8514106850 2026-03-09T16:26:54.695 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.699+0000 7f85037fe640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f84e80776d0 0x7f84e8079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:54.695 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.699+0000 7f85037fe640 1 -- 192.168.123.101:0/3272396138 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(757..757 src has 257..757) ==== 8597+0+0 (secure 0 0 0) 0x7f850809a120 con 0x7f8514106850 2026-03-09T16:26:54.696 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.699+0000 7f85037fe640 1 -- 192.168.123.101:0/3272396138 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f8508066600 con 0x7f8514106850 2026-03-09T16:26:54.696 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.699+0000 7f851a350640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f84e80776d0 0x7f84e8079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:26:54.700 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.703+0000 7f851a350640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f84e80776d0 0x7f84e8079b90 secure :-1 s=READY pgs=4261 cs=0 l=1 rev1=1 crypto rx=0x7f850401ce70 tx=0x7f85040a7040 comp rx=0 tx=0).ready 
entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:26:54.794 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:54.799+0000 7f851c5db640 1 -- 192.168.123.101:0/3272396138 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12} v 0) -- 0x7f84e4005480 con 0x7f8514106850 2026-03-09T16:26:54.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:54 vm09 bash[22983]: cluster 2026-03-09T16:26:53.432664+0000 mon.a (mon.0) 3947 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T16:26:54.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:54 vm09 bash[22983]: cluster 2026-03-09T16:26:53.432664+0000 mon.a (mon.0) 3947 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T16:26:54.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:54 vm01 bash[28152]: cluster 2026-03-09T16:26:53.432664+0000 mon.a (mon.0) 3947 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T16:26:54.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:54 vm01 bash[28152]: cluster 2026-03-09T16:26:53.432664+0000 mon.a (mon.0) 3947 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T16:26:54.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:54 vm01 bash[20728]: cluster 2026-03-09T16:26:53.432664+0000 mon.a (mon.0) 3947 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T16:26:54.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:54 vm01 bash[20728]: cluster 2026-03-09T16:26:53.432664+0000 mon.a (mon.0) 3947 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-09T16:26:55.592 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.595+0000 7f85037fe640 1 -- 192.168.123.101:0/3272396138 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]=0 pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' created v758) ==== 176+0+0 (secure 0 0 0) 0x7f850806b4b0 con 0x7f8514106850 2026-03-09T16:26:55.662 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.667+0000 7f851c5db640 1 -- 192.168.123.101:0/3272396138 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12} v 0) -- 0x7f84e40049a0 con 0x7f8514106850 2026-03-09T16:26:55.662 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.667+0000 7f85037fe640 1 -- 192.168.123.101:0/3272396138 <== mon.0 v2:192.168.123.101:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]=0 pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' already exists v758) ==== 183+0+0 (secure 0 0 0) 0x7f850805e5f0 con 0x7f8514106850 2026-03-09T16:26:55.663 INFO:tasks.workunit.client.0.vm01.stderr:pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' already exists 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 -- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f84e80776d0 msgr2=0x7f84e8079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f84e80776d0 0x7f84e8079b90 secure :-1 s=READY pgs=4261 cs=0 
l=1 rev1=1 crypto rx=0x7f850401ce70 tx=0x7f85040a7040 comp rx=0 tx=0).stop 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 -- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8514106850 msgr2=0x7f85141a46f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8514106850 0x7f85141a46f0 secure :-1 s=READY pgs=3068 cs=0 l=1 rev1=1 crypto rx=0x7f850800cce0 tx=0x7f8508007590 comp rx=0 tx=0).stop 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 -- 192.168.123.101:0/3272396138 shutdown_connections 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f84e80776d0 0x7f84e8079b90 unknown :-1 s=CLOSED pgs=4261 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8514113750 0x7f85141a8a80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f8514106850 0x7f85141a46f0 unknown :-1 s=CLOSED pgs=3068 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 --2- 192.168.123.101:0/3272396138 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f8514075960 0x7f85141a41b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 -- 192.168.123.101:0/3272396138 >> 192.168.123.101:0/3272396138 conn(0x7f85140fe6a0 msgr2=0x7f85140ff010 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 -- 192.168.123.101:0/3272396138 shutdown_connections 2026-03-09T16:26:55.665 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.671+0000 7f851c5db640 1 -- 192.168.123.101:0/3272396138 wait complete. 
2026-03-09T16:26:55.677 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool set-quota 596e5e1f-ecde-406d-b4d0-afd8854e4a60 max_objects 10 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 -- 192.168.123.101:0/1382745456 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3d60101f80 msgr2=0x7f3d6010ee50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 --2- 192.168.123.101:0/1382745456 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3d60101f80 0x7f3d6010ee50 secure :-1 s=READY pgs=2997 cs=0 l=1 rev1=1 crypto rx=0x7f3d50009a30 tx=0x7f3d5001c900 comp rx=0 tx=0).stop 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 -- 192.168.123.101:0/1382745456 shutdown_connections 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 --2- 192.168.123.101:0/1382745456 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3d6010f390 0x7f3d60111780 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 --2- 192.168.123.101:0/1382745456 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3d60101f80 0x7f3d6010ee50 unknown :-1 s=CLOSED pgs=2997 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 --2- 192.168.123.101:0/1382745456 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3d60101660 0x7f3d60101a40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 -- 192.168.123.101:0/1382745456 >> 192.168.123.101:0/1382745456 conn(0x7f3d600fd530 msgr2=0x7f3d600ff950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 -- 192.168.123.101:0/1382745456 shutdown_connections 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 -- 192.168.123.101:0/1382745456 wait complete. 
2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 Processor -- start 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 -- start start 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3d60101660 0x7f3d6019eff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3d60101f80 0x7f3d6019f530 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3d6010f390 0x7f3d601a38c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f3d601169c0 con 0x7f3d60101660 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f3d60116840 con 0x7f3d60101f80 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f3d60116b40 con 0x7f3d6010f390 2026-03-09T16:26:55.742 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d66462640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3d6010f390 0x7f3d601a38c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d65460640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3d60101f80 0x7f3d6019f530 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d65460640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3d60101f80 0x7f3d6019f530 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:49720/0 (socket says 192.168.123.101:49720) 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d65460640 1 -- 192.168.123.101:0/3534417991 learned_addr learned my addr 192.168.123.101:0/3534417991 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d66462640 1 -- 192.168.123.101:0/3534417991 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3d60101f80 msgr2=0x7f3d6019f530 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d66462640 1 --2- 
192.168.123.101:0/3534417991 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3d60101f80 0x7f3d6019f530 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d66462640 1 -- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3d60101660 msgr2=0x7f3d6019eff0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d65c61640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3d60101660 0x7f3d6019eff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d66462640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3d60101660 0x7f3d6019eff0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d66462640 1 -- 192.168.123.101:0/3534417991 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3d601a3fa0 con 0x7f3d6010f390 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d65c61640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3d60101660 0x7f3d6019eff0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d66462640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3d6010f390 0x7f3d601a38c0 secure :-1 s=READY pgs=2998 cs=0 l=1 rev1=1 crypto rx=0x7f3d5c0047b0 tx=0x7f3d5c00d4a0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d4effd640 1 -- 192.168.123.101:0/3534417991 <== mon.2 v2:192.168.123.101:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3d5c004960 con 0x7f3d6010f390 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d4effd640 1 -- 192.168.123.101:0/3534417991 <== mon.2 v2:192.168.123.101:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3d5c007500 con 0x7f3d6010f390 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d4effd640 1 -- 192.168.123.101:0/3534417991 <== mon.2 v2:192.168.123.101:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3d5c013630 con 0x7f3d6010f390 2026-03-09T16:26:55.743 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 -- 192.168.123.101:0/3534417991 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3d601a4230 con 0x7f3d6010f390 2026-03-09T16:26:55.745 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.747+0000 7f3d67eec640 1 -- 192.168.123.101:0/3534417991 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f3d601a46f0 con 0x7f3d6010f390 2026-03-09T16:26:55.745 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.751+0000 7f3d4effd640 1 -- 192.168.123.101:0/3534417991 <== mon.2 v2:192.168.123.101:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f3d5c012070 con 0x7f3d6010f390 2026-03-09T16:26:55.745 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.751+0000 7f3d67eec640 1 -- 192.168.123.101:0/3534417991 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3d28005190 con 0x7f3d6010f390 2026-03-09T16:26:55.748 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.751+0000 7f3d4effd640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3d3c077820 0x7f3d3c079ce0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:55.748 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.751+0000 7f3d4effd640 1 -- 192.168.123.101:0/3534417991 <== mon.2 v2:192.168.123.101:3301/0 5 ==== osd_map(758..758 src has 257..758) ==== 8972+0+0 (secure 0 0 0) 0x7f3d5c09d170 con 0x7f3d6010f390 2026-03-09T16:26:55.748 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.751+0000 7f3d65c61640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3d3c077820 0x7f3d3c079ce0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:26:55.748 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.751+0000 7f3d4effd640 1 -- 192.168.123.101:0/3534417991 <== mon.2 v2:192.168.123.101:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 
(secure 0 0 0) 0x7f3d601a46f0 con 0x7f3d6010f390 2026-03-09T16:26:55.749 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.755+0000 7f3d65c61640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3d3c077820 0x7f3d3c079ce0 secure :-1 s=READY pgs=4262 cs=0 l=1 rev1=1 crypto rx=0x7f3d54005fd0 tx=0x7f3d54005ea0 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:26:55.839 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:55.843+0000 7f3d67eec640 1 -- 192.168.123.101:0/3534417991 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"} v 0) -- 0x7f3d28005480 con 0x7f3d6010f390 2026-03-09T16:26:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:55 vm09 bash[22983]: cluster 2026-03-09T16:26:54.478934+0000 mon.a (mon.0) 3948 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T16:26:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:55 vm09 bash[22983]: cluster 2026-03-09T16:26:54.478934+0000 mon.a (mon.0) 3948 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T16:26:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:55 vm09 bash[22983]: audit 2026-03-09T16:26:54.803148+0000 mon.a (mon.0) 3949 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:55 vm09 bash[22983]: audit 2026-03-09T16:26:54.803148+0000 mon.a (mon.0) 3949 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:55 vm09 bash[22983]: cluster 2026-03-09T16:26:55.128189+0000 mgr.y (mgr.14520) 1228 : cluster [DBG] pgmap v1662: 164 pgs: 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:55.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:55 vm09 bash[22983]: cluster 2026-03-09T16:26:55.128189+0000 mgr.y (mgr.14520) 1228 : cluster [DBG] pgmap v1662: 164 pgs: 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:55.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:55 vm01 bash[28152]: cluster 2026-03-09T16:26:54.478934+0000 mon.a (mon.0) 3948 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T16:26:55.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:55 vm01 bash[28152]: cluster 2026-03-09T16:26:54.478934+0000 mon.a (mon.0) 3948 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T16:26:55.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:55 vm01 bash[28152]: audit 2026-03-09T16:26:54.803148+0000 mon.a (mon.0) 3949 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:55.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:55 vm01 bash[28152]: audit 2026-03-09T16:26:54.803148+0000 mon.a (mon.0) 3949 : audit [INF] from='client.? 
192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:55.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:55 vm01 bash[28152]: cluster 2026-03-09T16:26:55.128189+0000 mgr.y (mgr.14520) 1228 : cluster [DBG] pgmap v1662: 164 pgs: 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:55.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:55 vm01 bash[28152]: cluster 2026-03-09T16:26:55.128189+0000 mgr.y (mgr.14520) 1228 : cluster [DBG] pgmap v1662: 164 pgs: 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:55.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:55 vm01 bash[20728]: cluster 2026-03-09T16:26:54.478934+0000 mon.a (mon.0) 3948 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T16:26:55.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:55 vm01 bash[20728]: cluster 2026-03-09T16:26:54.478934+0000 mon.a (mon.0) 3948 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-09T16:26:55.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:55 vm01 bash[20728]: audit 2026-03-09T16:26:54.803148+0000 mon.a (mon.0) 3949 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:55.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:55 vm01 bash[20728]: audit 2026-03-09T16:26:54.803148+0000 mon.a (mon.0) 3949 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:55.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:55 vm01 bash[20728]: cluster 2026-03-09T16:26:55.128189+0000 mgr.y (mgr.14520) 1228 : cluster [DBG] pgmap v1662: 164 pgs: 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:55.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:55 vm01 bash[20728]: cluster 2026-03-09T16:26:55.128189+0000 mgr.y (mgr.14520) 1228 : cluster [DBG] pgmap v1662: 164 pgs: 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:56.692 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:56.695+0000 7f3d4effd640 1 -- 192.168.123.101:0/3534417991 <== mon.2 v2:192.168.123.101:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v759) ==== 223+0+0 (secure 0 0 0) 0x7f3d5c010070 con 0x7f3d6010f390 2026-03-09T16:26:56.759 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:56.763+0000 7f3d67eec640 1 -- 192.168.123.101:0/3534417991 --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"} v 0) -- 0x7f3d28004910 con 0x7f3d6010f390 2026-03-09T16:26:56.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:56 vm01 bash[28152]: audit 2026-03-09T16:26:55.600240+0000 mon.a (mon.0) 3950 : audit [INF] from='client.? 
192.168.123.101:0/3272396138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]': finished 2026-03-09T16:26:56.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:56 vm01 bash[28152]: audit 2026-03-09T16:26:55.600240+0000 mon.a (mon.0) 3950 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]': finished 2026-03-09T16:26:56.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:56 vm01 bash[28152]: cluster 2026-03-09T16:26:55.608629+0000 mon.a (mon.0) 3951 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T16:26:56.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:56 vm01 bash[28152]: cluster 2026-03-09T16:26:55.608629+0000 mon.a (mon.0) 3951 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T16:26:56.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:56 vm01 bash[28152]: audit 2026-03-09T16:26:55.670798+0000 mon.a (mon.0) 3952 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:56.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:56 vm01 bash[28152]: audit 2026-03-09T16:26:55.670798+0000 mon.a (mon.0) 3952 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:56 vm01 bash[28152]: audit 2026-03-09T16:26:55.848314+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:56 vm01 bash[28152]: audit 2026-03-09T16:26:55.848314+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:56 vm01 bash[28152]: audit 2026-03-09T16:26:55.848684+0000 mon.a (mon.0) 3953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:56 vm01 bash[28152]: audit 2026-03-09T16:26:55.848684+0000 mon.a (mon.0) 3953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:56 vm01 bash[20728]: audit 2026-03-09T16:26:55.600240+0000 mon.a (mon.0) 3950 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]': finished 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:56 vm01 bash[20728]: audit 2026-03-09T16:26:55.600240+0000 mon.a (mon.0) 3950 : audit [INF] from='client.? 
192.168.123.101:0/3272396138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]': finished 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:56 vm01 bash[20728]: cluster 2026-03-09T16:26:55.608629+0000 mon.a (mon.0) 3951 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:56 vm01 bash[20728]: cluster 2026-03-09T16:26:55.608629+0000 mon.a (mon.0) 3951 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:56 vm01 bash[20728]: audit 2026-03-09T16:26:55.670798+0000 mon.a (mon.0) 3952 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:56 vm01 bash[20728]: audit 2026-03-09T16:26:55.670798+0000 mon.a (mon.0) 3952 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:56 vm01 bash[20728]: audit 2026-03-09T16:26:55.848314+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:56 vm01 bash[20728]: audit 2026-03-09T16:26:55.848314+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:56 vm01 bash[20728]: audit 2026-03-09T16:26:55.848684+0000 mon.a (mon.0) 3953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:56.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:56 vm01 bash[20728]: audit 2026-03-09T16:26:55.848684+0000 mon.a (mon.0) 3953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:56 vm09 bash[22983]: audit 2026-03-09T16:26:55.600240+0000 mon.a (mon.0) 3950 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]': finished 2026-03-09T16:26:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:56 vm09 bash[22983]: audit 2026-03-09T16:26:55.600240+0000 mon.a (mon.0) 3950 : audit [INF] from='client.? 
192.168.123.101:0/3272396138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]': finished 2026-03-09T16:26:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:56 vm09 bash[22983]: cluster 2026-03-09T16:26:55.608629+0000 mon.a (mon.0) 3951 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T16:26:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:56 vm09 bash[22983]: cluster 2026-03-09T16:26:55.608629+0000 mon.a (mon.0) 3951 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in 2026-03-09T16:26:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:56 vm09 bash[22983]: audit 2026-03-09T16:26:55.670798+0000 mon.a (mon.0) 3952 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:56 vm09 bash[22983]: audit 2026-03-09T16:26:55.670798+0000 mon.a (mon.0) 3952 : audit [INF] from='client.? 192.168.123.101:0/3272396138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pg_num": 12}]: dispatch 2026-03-09T16:26:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:56 vm09 bash[22983]: audit 2026-03-09T16:26:55.848314+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:56 vm09 bash[22983]: audit 2026-03-09T16:26:55.848314+0000 mon.c (mon.2) 679 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:56 vm09 bash[22983]: audit 2026-03-09T16:26:55.848684+0000 mon.a (mon.0) 3953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:57.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:56 vm09 bash[22983]: audit 2026-03-09T16:26:55.848684+0000 mon.a (mon.0) 3953 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:57.725 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.727+0000 7f3d4effd640 1 -- 192.168.123.101:0/3534417991 <== mon.2 v2:192.168.123.101:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v760) ==== 223+0+0 (secure 0 0 0) 0x7f3d5c065de0 con 0x7f3d6010f390 2026-03-09T16:26:57.725 INFO:tasks.workunit.client.0.vm01.stderr:set-quota max_objects = 10 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 2026-03-09T16:26:57.727 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 -- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3d3c077820 msgr2=0x7f3d3c079ce0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:57.728 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3d3c077820 0x7f3d3c079ce0 secure :-1 s=READY pgs=4262 cs=0 l=1 rev1=1 crypto rx=0x7f3d54005fd0 tx=0x7f3d54005ea0 comp rx=0 tx=0).stop 2026-03-09T16:26:57.728 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 -- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3d6010f390 msgr2=0x7f3d601a38c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:57.728 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3d6010f390 0x7f3d601a38c0 secure :-1 s=READY pgs=2998 cs=0 l=1 rev1=1 crypto rx=0x7f3d5c0047b0 tx=0x7f3d5c00d4a0 comp rx=0 tx=0).stop 2026-03-09T16:26:57.728 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 -- 192.168.123.101:0/3534417991 shutdown_connections 2026-03-09T16:26:57.728 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3d3c077820 0x7f3d3c079ce0 unknown :-1 s=CLOSED pgs=4262 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:57.728 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3d6010f390 0x7f3d601a38c0 unknown :-1 s=CLOSED pgs=2998 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:57.728 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3d60101f80 0x7f3d6019f530 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:57.728 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 --2- 192.168.123.101:0/3534417991 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3d60101660 0x7f3d6019eff0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:57.728 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 -- 192.168.123.101:0/3534417991 >> 192.168.123.101:0/3534417991 conn(0x7f3d600fd530 msgr2=0x7f3d6010fb70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:26:57.728 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 -- 192.168.123.101:0/3534417991 shutdown_connections 2026-03-09T16:26:57.728 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.731+0000 7f3d67eec640 1 -- 192.168.123.101:0/3534417991 wait complete. 2026-03-09T16:26:57.744 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool application enable 596e5e1f-ecde-406d-b4d0-afd8854e4a60 rados 2026-03-09T16:26:57.809 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- 192.168.123.101:0/41476997 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6670101510 msgr2=0x7f667010eb70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:57.809 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 --2- 192.168.123.101:0/41476997 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6670101510 0x7f667010eb70 secure :-1 s=READY pgs=2999 cs=0 l=1 rev1=1 crypto rx=0x7f6660009a30 tx=0x7f666001c990 comp rx=0 tx=0).stop 2026-03-09T16:26:57.809 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- 192.168.123.101:0/41476997 shutdown_connections 2026-03-09T16:26:57.809 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 --2- 192.168.123.101:0/41476997 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f667010f1e0 0x7f6670111660 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:57.809 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 --2- 192.168.123.101:0/41476997 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6670101510 0x7f667010eb70 unknown :-1 s=CLOSED pgs=2999 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:57.809 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 --2- 192.168.123.101:0/41476997 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6670100bf0 0x7f6670100fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:57.809 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- 192.168.123.101:0/41476997 >> 192.168.123.101:0/41476997 conn(0x7f66700fc820 msgr2=0x7f66700fec40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:26:57.809 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- 192.168.123.101:0/41476997 shutdown_connections 2026-03-09T16:26:57.809 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- 192.168.123.101:0/41476997 wait complete. 
2026-03-09T16:26:57.809 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 Processor -- start 2026-03-09T16:26:57.809 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- start start 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6670100bf0 0x7f667019f010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6670101510 0x7f667019f550 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f667010f1e0 0x7f66701a38e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f6670116a20 con 0x7f667010f1e0 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f66701168a0 con 0x7f6670100bf0 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f6670116ba0 con 0x7f6670101510 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6675cab640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f667010f1e0 0x7f66701a38e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6675cab640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f667010f1e0 0x7f66701a38e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:56750/0 (socket says 192.168.123.101:56750) 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6675cab640 1 -- 192.168.123.101:0/3100980668 learned_addr learned my addr 192.168.123.101:0/3100980668 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6675cab640 1 -- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6670101510 msgr2=0x7f667019f550 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6675cab640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6670101510 0x7f667019f550 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6675cab640 1 -- 192.168.123.101:0/3100980668 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6670100bf0 msgr2=0x7f667019f010 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f66754aa640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6670100bf0 0x7f667019f010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6675cab640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6670100bf0 0x7f667019f010 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6675cab640 1 -- 192.168.123.101:0/3100980668 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f66701a3fc0 con 0x7f667010f1e0 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f66754aa640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6670100bf0 0x7f667019f010 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6675cab640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f667010f1e0 0x7f66701a38e0 secure :-1 s=READY pgs=3069 cs=0 l=1 rev1=1 crypto rx=0x7f666c007d70 tx=0x7f666c00a430 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f665e7fc640 1 -- 192.168.123.101:0/3100980668 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f666c017070 con 0x7f667010f1e0 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f665e7fc640 1 -- 192.168.123.101:0/3100980668 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f666c004070 con 0x7f667010f1e0 2026-03-09T16:26:57.810 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f665e7fc640 1 -- 192.168.123.101:0/3100980668 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f666c012660 con 0x7f667010f1e0 2026-03-09T16:26:57.811 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- 192.168.123.101:0/3100980668 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f66701a42b0 con 0x7f667010f1e0 2026-03-09T16:26:57.811 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- 192.168.123.101:0/3100980668 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f66701abb90 con 0x7f667010f1e0 2026-03-09T16:26:57.815 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f6677735640 1 -- 192.168.123.101:0/3100980668 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6638005190 con 0x7f667010f1e0 2026-03-09T16:26:57.815 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f665e7fc640 1 -- 192.168.123.101:0/3100980668 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f666c005ce0 con 0x7f667010f1e0 2026-03-09T16:26:57.815 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.815+0000 7f665e7fc640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f664c0777e0 0x7f664c079ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:26:57.816 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.819+0000 7f665e7fc640 1 -- 192.168.123.101:0/3100980668 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(760..760 src has 257..760) ==== 8972+0+0 (secure 0 0 0) 0x7f666c099b70 con 0x7f667010f1e0 2026-03-09T16:26:57.816 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.819+0000 7f665e7fc640 1 -- 192.168.123.101:0/3100980668 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f666c065ee0 con 0x7f667010f1e0 2026-03-09T16:26:57.816 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.819+0000 7f66754aa640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f664c0777e0 0x7f664c079ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:26:57.816 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.819+0000 7f66754aa640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f664c0777e0 0x7f664c079ca0 secure :-1 s=READY pgs=4263 cs=0 l=1 rev1=1 crypto rx=0x7f6664005fd0 tx=0x7f6664005e40 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:26:57.914 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:57.919+0000 7f6677735640 1 -- 192.168.123.101:0/3100980668 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"} v 0) -- 0x7f6638005480 con 0x7f667010f1e0 2026-03-09T16:26:58.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:26:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:26:58.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:57 vm09 bash[22983]: audit 2026-03-09T16:26:56.683531+0000 mon.a (mon.0) 3954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:58.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:57 vm09 bash[22983]: audit 2026-03-09T16:26:56.683531+0000 mon.a (mon.0) 3954 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:58.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:57 vm09 bash[22983]: cluster 2026-03-09T16:26:56.694047+0000 mon.a (mon.0) 3955 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T16:26:58.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:57 vm09 bash[22983]: cluster 2026-03-09T16:26:56.694047+0000 mon.a (mon.0) 3955 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T16:26:58.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:57 vm09 bash[22983]: audit 2026-03-09T16:26:56.767851+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:57 vm09 bash[22983]: audit 2026-03-09T16:26:56.767851+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:57 vm09 bash[22983]: audit 2026-03-09T16:26:56.768240+0000 mon.a (mon.0) 3956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:57 vm09 bash[22983]: audit 2026-03-09T16:26:56.768240+0000 mon.a (mon.0) 3956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:57 vm09 bash[22983]: cluster 2026-03-09T16:26:57.128550+0000 mgr.y (mgr.14520) 1229 : cluster [DBG] pgmap v1665: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:58.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:57 vm09 bash[22983]: cluster 2026-03-09T16:26:57.128550+0000 mgr.y (mgr.14520) 1229 : cluster [DBG] pgmap v1665: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:58.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:57 vm01 bash[28152]: audit 2026-03-09T16:26:56.683531+0000 mon.a (mon.0) 3954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:57 vm01 bash[28152]: audit 2026-03-09T16:26:56.683531+0000 mon.a (mon.0) 3954 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:57 vm01 bash[28152]: cluster 2026-03-09T16:26:56.694047+0000 mon.a (mon.0) 3955 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:57 vm01 bash[28152]: cluster 2026-03-09T16:26:56.694047+0000 mon.a (mon.0) 3955 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:57 vm01 bash[28152]: audit 2026-03-09T16:26:56.767851+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:57 vm01 bash[28152]: audit 2026-03-09T16:26:56.767851+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:57 vm01 bash[28152]: audit 2026-03-09T16:26:56.768240+0000 mon.a (mon.0) 3956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:57 vm01 bash[28152]: audit 2026-03-09T16:26:56.768240+0000 mon.a (mon.0) 3956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:57 vm01 bash[28152]: cluster 2026-03-09T16:26:57.128550+0000 mgr.y (mgr.14520) 1229 : cluster [DBG] pgmap v1665: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:57 vm01 bash[28152]: cluster 2026-03-09T16:26:57.128550+0000 mgr.y (mgr.14520) 1229 : cluster [DBG] pgmap v1665: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:57 vm01 bash[20728]: audit 2026-03-09T16:26:56.683531+0000 mon.a (mon.0) 3954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:57 vm01 bash[20728]: audit 2026-03-09T16:26:56.683531+0000 mon.a (mon.0) 3954 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:57 vm01 bash[20728]: cluster 2026-03-09T16:26:56.694047+0000 mon.a (mon.0) 3955 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:57 vm01 bash[20728]: cluster 2026-03-09T16:26:56.694047+0000 mon.a (mon.0) 3955 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:57 vm01 bash[20728]: audit 2026-03-09T16:26:56.767851+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:57 vm01 bash[20728]: audit 2026-03-09T16:26:56.767851+0000 mon.c (mon.2) 680 : audit [INF] from='client.? 192.168.123.101:0/3534417991' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:57 vm01 bash[20728]: audit 2026-03-09T16:26:56.768240+0000 mon.a (mon.0) 3956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:57 vm01 bash[20728]: audit 2026-03-09T16:26:56.768240+0000 mon.a (mon.0) 3956 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:57 vm01 bash[20728]: cluster 2026-03-09T16:26:57.128550+0000 mgr.y (mgr.14520) 1229 : cluster [DBG] pgmap v1665: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:58.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:57 vm01 bash[20728]: cluster 2026-03-09T16:26:57.128550+0000 mgr.y (mgr.14520) 1229 : cluster [DBG] pgmap v1665: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:26:58.744 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:58.747+0000 7f665e7fc640 1 -- 192.168.123.101:0/3100980668 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]=0 enabled application 'rados' on pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' v761) ==== 213+0+0 (secure 0 0 0) 0x7f666c06ad90 con 0x7f667010f1e0 2026-03-09T16:26:58.800 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:58.803+0000 7f6677735640 1 -- 192.168.123.101:0/3100980668 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"} v 0) -- 0x7f6638004900 con 0x7f667010f1e0 2026-03-09T16:26:59.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:58 vm09 bash[22983]: audit 2026-03-09T16:26:57.682465+0000 mgr.y (mgr.14520) 1230 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:59.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:58 vm09 bash[22983]: audit 2026-03-09T16:26:57.682465+0000 mgr.y (mgr.14520) 1230 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:59.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:58 vm09 bash[22983]: cluster 2026-03-09T16:26:57.682948+0000 mon.a (mon.0) 3957 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:59.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:58 vm09 bash[22983]: cluster 2026-03-09T16:26:57.682948+0000 mon.a (mon.0) 3957 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:59.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:58 vm09 bash[22983]: audit 2026-03-09T16:26:57.725006+0000 mon.a (mon.0) 3958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:59.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:58 vm09 bash[22983]: audit 2026-03-09T16:26:57.725006+0000 mon.a (mon.0) 3958 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:59.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:58 vm09 bash[22983]: cluster 2026-03-09T16:26:57.743569+0000 mon.a (mon.0) 3959 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T16:26:59.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:58 vm09 bash[22983]: cluster 2026-03-09T16:26:57.743569+0000 mon.a (mon.0) 3959 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T16:26:59.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:58 vm09 bash[22983]: audit 2026-03-09T16:26:57.922757+0000 mon.a (mon.0) 3960 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:26:59.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:58 vm09 bash[22983]: audit 2026-03-09T16:26:57.922757+0000 mon.a (mon.0) 3960 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:26:59.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:58 vm01 bash[28152]: audit 2026-03-09T16:26:57.682465+0000 mgr.y (mgr.14520) 1230 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:59.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:58 vm01 bash[28152]: audit 2026-03-09T16:26:57.682465+0000 mgr.y (mgr.14520) 1230 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:59.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:58 vm01 bash[28152]: cluster 2026-03-09T16:26:57.682948+0000 mon.a (mon.0) 3957 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:59.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:58 vm01 bash[28152]: cluster 2026-03-09T16:26:57.682948+0000 mon.a (mon.0) 3957 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:59.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:58 vm01 bash[28152]: audit 2026-03-09T16:26:57.725006+0000 mon.a (mon.0) 3958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:59.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:58 vm01 bash[28152]: audit 2026-03-09T16:26:57.725006+0000 mon.a (mon.0) 3958 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:59.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:58 vm01 bash[28152]: cluster 2026-03-09T16:26:57.743569+0000 mon.a (mon.0) 3959 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T16:26:59.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:58 vm01 bash[28152]: cluster 2026-03-09T16:26:57.743569+0000 mon.a (mon.0) 3959 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T16:26:59.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:58 vm01 bash[28152]: audit 2026-03-09T16:26:57.922757+0000 mon.a (mon.0) 3960 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:26:59.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:58 vm01 bash[28152]: audit 2026-03-09T16:26:57.922757+0000 mon.a (mon.0) 3960 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:26:59.172 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:58 vm01 bash[20728]: audit 2026-03-09T16:26:57.682465+0000 mgr.y (mgr.14520) 1230 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:59.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:58 vm01 bash[20728]: audit 2026-03-09T16:26:57.682465+0000 mgr.y (mgr.14520) 1230 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:26:59.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:58 vm01 bash[20728]: cluster 2026-03-09T16:26:57.682948+0000 mon.a (mon.0) 3957 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:59.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:58 vm01 bash[20728]: cluster 2026-03-09T16:26:57.682948+0000 mon.a (mon.0) 3957 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:26:59.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:58 vm01 bash[20728]: audit 2026-03-09T16:26:57.725006+0000 mon.a (mon.0) 3958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:59.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:58 vm01 bash[20728]: audit 2026-03-09T16:26:57.725006+0000 mon.a (mon.0) 3958 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:26:59.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:58 vm01 bash[20728]: cluster 2026-03-09T16:26:57.743569+0000 mon.a (mon.0) 3959 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T16:26:59.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:58 vm01 bash[20728]: cluster 2026-03-09T16:26:57.743569+0000 mon.a (mon.0) 3959 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in 2026-03-09T16:26:59.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:58 vm01 bash[20728]: audit 2026-03-09T16:26:57.922757+0000 mon.a (mon.0) 3960 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:26:59.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:58 vm01 bash[20728]: audit 2026-03-09T16:26:57.922757+0000 mon.a (mon.0) 3960 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:26:59.758 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.763+0000 7f665e7fc640 1 -- 192.168.123.101:0/3100980668 <== mon.0 v2:192.168.123.101:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]=0 enabled application 'rados' on pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' v762) ==== 213+0+0 (secure 0 0 0) 0x7f666c05ded0 con 0x7f667010f1e0 2026-03-09T16:26:59.758 INFO:tasks.workunit.client.0.vm01.stderr:enabled application 'rados' on pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 -- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f664c0777e0 msgr2=0x7f664c079ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f664c0777e0 0x7f664c079ca0 secure :-1 s=READY pgs=4263 cs=0 l=1 rev1=1 crypto rx=0x7f6664005fd0 tx=0x7f6664005e40 comp rx=0 tx=0).stop 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 -- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f667010f1e0 msgr2=0x7f66701a38e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f667010f1e0 0x7f66701a38e0 secure :-1 s=READY pgs=3069 cs=0 l=1 rev1=1 crypto rx=0x7f666c007d70 tx=0x7f666c00a430 comp rx=0 tx=0).stop 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 -- 192.168.123.101:0/3100980668 shutdown_connections 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 --2- 192.168.123.101:0/3100980668 >> 
[v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f664c0777e0 0x7f664c079ca0 unknown :-1 s=CLOSED pgs=4263 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f667010f1e0 0x7f66701a38e0 unknown :-1 s=CLOSED pgs=3069 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f6670101510 0x7f667019f550 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 --2- 192.168.123.101:0/3100980668 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6670100bf0 0x7f667019f010 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 -- 192.168.123.101:0/3100980668 >> 192.168.123.101:0/3100980668 conn(0x7f66700fc820 msgr2=0x7f66700fec10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 -- 192.168.123.101:0/3100980668 shutdown_connections 2026-03-09T16:26:59.761 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:26:59.767+0000 7f6677735640 1 -- 192.168.123.101:0/3100980668 wait complete. 2026-03-09T16:26:59.778 INFO:tasks.workunit.client.0.vm01.stderr:+ seq 1 10 2026-03-09T16:26:59.780 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put obj1 /etc/passwd 2026-03-09T16:26:59.814 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put obj2 /etc/passwd 2026-03-09T16:26:59.845 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put obj3 /etc/passwd 2026-03-09T16:26:59.876 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put obj4 /etc/passwd 2026-03-09T16:26:59.906 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put obj5 /etc/passwd 2026-03-09T16:26:59.935 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put obj6 /etc/passwd 2026-03-09T16:26:59.960 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put obj7 /etc/passwd 2026-03-09T16:26:59.990 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put obj8 /etc/passwd 2026-03-09T16:27:00.019 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put obj9 /etc/passwd 2026-03-09T16:27:00.058 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put obj10 /etc/passwd 2026-03-09T16:27:00.087 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 30 2026-03-09T16:27:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:59 vm09 bash[22983]: audit 2026-03-09T16:26:58.751748+0000 mon.a (mon.0) 3961 : audit [INF] from='client.? 
192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:59 vm09 bash[22983]: audit 2026-03-09T16:26:58.751748+0000 mon.a (mon.0) 3961 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:59 vm09 bash[22983]: cluster 2026-03-09T16:26:58.758883+0000 mon.a (mon.0) 3962 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T16:27:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:59 vm09 bash[22983]: cluster 2026-03-09T16:26:58.758883+0000 mon.a (mon.0) 3962 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T16:27:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:59 vm09 bash[22983]: audit 2026-03-09T16:26:58.808593+0000 mon.a (mon.0) 3963 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:27:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:59 vm09 bash[22983]: audit 2026-03-09T16:26:58.808593+0000 mon.a (mon.0) 3963 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:27:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:59 vm09 bash[22983]: cluster 2026-03-09T16:26:59.128941+0000 mgr.y (mgr.14520) 1231 : cluster [DBG] pgmap v1668: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:00.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:26:59 vm09 bash[22983]: cluster 2026-03-09T16:26:59.128941+0000 mgr.y (mgr.14520) 1231 : cluster [DBG] pgmap v1668: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:00.172 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:59 vm01 bash[20728]: audit 2026-03-09T16:26:58.751748+0000 mon.a (mon.0) 3961 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:00.172 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:59 vm01 bash[20728]: audit 2026-03-09T16:26:58.751748+0000 mon.a (mon.0) 3961 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:00.172 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:59 vm01 bash[20728]: cluster 2026-03-09T16:26:58.758883+0000 mon.a (mon.0) 3962 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T16:27:00.172 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:59 vm01 bash[20728]: cluster 2026-03-09T16:26:58.758883+0000 mon.a (mon.0) 3962 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:59 vm01 bash[20728]: audit 2026-03-09T16:26:58.808593+0000 mon.a (mon.0) 3963 : audit [INF] from='client.? 
192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:59 vm01 bash[20728]: audit 2026-03-09T16:26:58.808593+0000 mon.a (mon.0) 3963 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:59 vm01 bash[20728]: cluster 2026-03-09T16:26:59.128941+0000 mgr.y (mgr.14520) 1231 : cluster [DBG] pgmap v1668: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:26:59 vm01 bash[20728]: cluster 2026-03-09T16:26:59.128941+0000 mgr.y (mgr.14520) 1231 : cluster [DBG] pgmap v1668: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:59 vm01 bash[28152]: audit 2026-03-09T16:26:58.751748+0000 mon.a (mon.0) 3961 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:59 vm01 bash[28152]: audit 2026-03-09T16:26:58.751748+0000 mon.a (mon.0) 3961 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:59 vm01 bash[28152]: cluster 2026-03-09T16:26:58.758883+0000 mon.a (mon.0) 3962 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:59 vm01 bash[28152]: cluster 2026-03-09T16:26:58.758883+0000 mon.a (mon.0) 3962 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:59 vm01 bash[28152]: audit 2026-03-09T16:26:58.808593+0000 mon.a (mon.0) 3963 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:59 vm01 bash[28152]: audit 2026-03-09T16:26:58.808593+0000 mon.a (mon.0) 3963 : audit [INF] from='client.? 
192.168.123.101:0/3100980668' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]: dispatch 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:59 vm01 bash[28152]: cluster 2026-03-09T16:26:59.128941+0000 mgr.y (mgr.14520) 1231 : cluster [DBG] pgmap v1668: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:00.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:26:59 vm01 bash[28152]: cluster 2026-03-09T16:26:59.128941+0000 mgr.y (mgr.14520) 1231 : cluster [DBG] pgmap v1668: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:01.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:00 vm09 bash[22983]: audit 2026-03-09T16:26:59.766196+0000 mon.a (mon.0) 3964 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:01.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:00 vm09 bash[22983]: audit 2026-03-09T16:26:59.766196+0000 mon.a (mon.0) 3964 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:01.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:00 vm09 bash[22983]: cluster 2026-03-09T16:26:59.789808+0000 mon.a (mon.0) 3965 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T16:27:01.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:00 vm09 bash[22983]: cluster 2026-03-09T16:26:59.789808+0000 mon.a (mon.0) 3965 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T16:27:01.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:00 vm09 bash[22983]: audit 2026-03-09T16:26:59.986836+0000 mon.a (mon.0) 3966 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:01.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:00 vm09 bash[22983]: audit 2026-03-09T16:26:59.986836+0000 mon.a (mon.0) 3966 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:01.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:00 vm01 bash[28152]: audit 2026-03-09T16:26:59.766196+0000 mon.a (mon.0) 3964 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:01.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:00 vm01 bash[28152]: audit 2026-03-09T16:26:59.766196+0000 mon.a (mon.0) 3964 : audit [INF] from='client.? 
192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:01.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:00 vm01 bash[28152]: cluster 2026-03-09T16:26:59.789808+0000 mon.a (mon.0) 3965 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T16:27:01.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:00 vm01 bash[28152]: cluster 2026-03-09T16:26:59.789808+0000 mon.a (mon.0) 3965 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T16:27:01.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:00 vm01 bash[28152]: audit 2026-03-09T16:26:59.986836+0000 mon.a (mon.0) 3966 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:01.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:00 vm01 bash[28152]: audit 2026-03-09T16:26:59.986836+0000 mon.a (mon.0) 3966 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:01.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:00 vm01 bash[20728]: audit 2026-03-09T16:26:59.766196+0000 mon.a (mon.0) 3964 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:01.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:00 vm01 bash[20728]: audit 2026-03-09T16:26:59.766196+0000 mon.a (mon.0) 3964 : audit [INF] from='client.? 192.168.123.101:0/3100980668' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "app": "rados"}]': finished 2026-03-09T16:27:01.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:00 vm01 bash[20728]: cluster 2026-03-09T16:26:59.789808+0000 mon.a (mon.0) 3965 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T16:27:01.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:00 vm01 bash[20728]: cluster 2026-03-09T16:26:59.789808+0000 mon.a (mon.0) 3965 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-09T16:27:01.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:00 vm01 bash[20728]: audit 2026-03-09T16:26:59.986836+0000 mon.a (mon.0) 3966 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:01.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:00 vm01 bash[20728]: audit 2026-03-09T16:26:59.986836+0000 mon.a (mon.0) 3966 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:02.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:01 vm09 bash[22983]: cluster 2026-03-09T16:27:01.129278+0000 mgr.y (mgr.14520) 1232 : cluster [DBG] pgmap v1670: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:27:02.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:01 vm09 bash[22983]: cluster 2026-03-09T16:27:01.129278+0000 mgr.y (mgr.14520) 1232 : cluster [DBG] pgmap v1670: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:27:02.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:01 vm01 
bash[28152]: cluster 2026-03-09T16:27:01.129278+0000 mgr.y (mgr.14520) 1232 : cluster [DBG] pgmap v1670: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:27:02.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:01 vm01 bash[28152]: cluster 2026-03-09T16:27:01.129278+0000 mgr.y (mgr.14520) 1232 : cluster [DBG] pgmap v1670: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:27:02.172 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:01 vm01 bash[20728]: cluster 2026-03-09T16:27:01.129278+0000 mgr.y (mgr.14520) 1232 : cluster [DBG] pgmap v1670: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:27:02.172 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:01 vm01 bash[20728]: cluster 2026-03-09T16:27:01.129278+0000 mgr.y (mgr.14520) 1232 : cluster [DBG] pgmap v1670: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T16:27:03.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:27:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:27:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:27:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:04 vm09 bash[22983]: cluster 2026-03-09T16:27:03.129586+0000 mgr.y (mgr.14520) 1233 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:04 vm09 bash[22983]: cluster 2026-03-09T16:27:03.129586+0000 mgr.y (mgr.14520) 1233 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:04.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:04 vm01 bash[28152]: cluster 2026-03-09T16:27:03.129586+0000 mgr.y (mgr.14520) 1233 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:04.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:04 vm01 bash[28152]: cluster 2026-03-09T16:27:03.129586+0000 mgr.y (mgr.14520) 1233 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:04.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:04 vm01 bash[20728]: cluster 2026-03-09T16:27:03.129586+0000 mgr.y (mgr.14520) 1233 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:04.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:04 vm01 bash[20728]: cluster 2026-03-09T16:27:03.129586+0000 mgr.y (mgr.14520) 1233 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:05 vm09 bash[22983]: cluster 2026-03-09T16:27:04.588196+0000 mon.a (mon.0) 3967 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:27:05.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:05 vm09 bash[22983]: cluster 2026-03-09T16:27:04.588196+0000 mon.a (mon.0) 3967 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:27:05.672 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:05 vm01 bash[28152]: cluster 2026-03-09T16:27:04.588196+0000 mon.a (mon.0) 3967 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:27:05.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:05 vm01 bash[28152]: cluster 2026-03-09T16:27:04.588196+0000 mon.a (mon.0) 3967 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:27:05.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:05 vm01 bash[20728]: cluster 2026-03-09T16:27:04.588196+0000 mon.a (mon.0) 3967 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:27:05.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:05 vm01 bash[20728]: cluster 2026-03-09T16:27:04.588196+0000 mon.a (mon.0) 3967 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:27:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:06 vm09 bash[22983]: cluster 2026-03-09T16:27:05.130291+0000 mgr.y (mgr.14520) 1234 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.1 KiB/s wr, 2 op/s 2026-03-09T16:27:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:06 vm09 bash[22983]: cluster 2026-03-09T16:27:05.130291+0000 mgr.y (mgr.14520) 1234 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.1 KiB/s wr, 2 op/s 2026-03-09T16:27:06.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:06 vm01 bash[28152]: cluster 2026-03-09T16:27:05.130291+0000 mgr.y (mgr.14520) 1234 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.1 KiB/s wr, 2 op/s 2026-03-09T16:27:06.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:06 vm01 bash[28152]: cluster 2026-03-09T16:27:05.130291+0000 mgr.y (mgr.14520) 1234 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.1 KiB/s wr, 2 op/s 2026-03-09T16:27:06.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:06 vm01 bash[20728]: cluster 2026-03-09T16:27:05.130291+0000 mgr.y (mgr.14520) 1234 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.1 KiB/s wr, 2 op/s 2026-03-09T16:27:06.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:06 vm01 bash[20728]: cluster 2026-03-09T16:27:05.130291+0000 mgr.y (mgr.14520) 1234 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.1 KiB/s wr, 2 op/s 2026-03-09T16:27:08.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:27:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:27:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:08 vm09 bash[22983]: cluster 2026-03-09T16:27:07.130644+0000 mgr.y (mgr.14520) 1235 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 611 B/s rd, 3.6 KiB/s wr, 1 op/s 2026-03-09T16:27:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:08 vm09 bash[22983]: cluster 2026-03-09T16:27:07.130644+0000 mgr.y (mgr.14520) 1235 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 
GiB avail; 611 B/s rd, 3.6 KiB/s wr, 1 op/s 2026-03-09T16:27:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:08 vm09 bash[22983]: audit 2026-03-09T16:27:07.536518+0000 mon.a (mon.0) 3968 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:27:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:08 vm09 bash[22983]: audit 2026-03-09T16:27:07.536518+0000 mon.a (mon.0) 3968 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:27:08.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:08 vm01 bash[28152]: cluster 2026-03-09T16:27:07.130644+0000 mgr.y (mgr.14520) 1235 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 611 B/s rd, 3.6 KiB/s wr, 1 op/s 2026-03-09T16:27:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:08 vm01 bash[28152]: cluster 2026-03-09T16:27:07.130644+0000 mgr.y (mgr.14520) 1235 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 611 B/s rd, 3.6 KiB/s wr, 1 op/s 2026-03-09T16:27:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:08 vm01 bash[28152]: audit 2026-03-09T16:27:07.536518+0000 mon.a (mon.0) 3968 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:27:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:08 vm01 bash[28152]: audit 2026-03-09T16:27:07.536518+0000 mon.a (mon.0) 3968 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:27:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:08 vm01 bash[20728]: cluster 2026-03-09T16:27:07.130644+0000 mgr.y (mgr.14520) 1235 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 611 B/s rd, 3.6 KiB/s wr, 1 op/s 2026-03-09T16:27:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:08 vm01 bash[20728]: cluster 2026-03-09T16:27:07.130644+0000 mgr.y (mgr.14520) 1235 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 611 B/s rd, 3.6 KiB/s wr, 1 op/s 2026-03-09T16:27:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:08 vm01 bash[20728]: audit 2026-03-09T16:27:07.536518+0000 mon.a (mon.0) 3968 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:27:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:08 vm01 bash[20728]: audit 2026-03-09T16:27:07.536518+0000 mon.a (mon.0) 3968 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:27:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:09 vm09 bash[22983]: audit 2026-03-09T16:27:07.693296+0000 mgr.y (mgr.14520) 1236 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:09 vm09 bash[22983]: audit 2026-03-09T16:27:07.693296+0000 mgr.y (mgr.14520) 1236 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: 
dispatch 2026-03-09T16:27:09.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:09 vm01 bash[28152]: audit 2026-03-09T16:27:07.693296+0000 mgr.y (mgr.14520) 1236 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:09.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:09 vm01 bash[28152]: audit 2026-03-09T16:27:07.693296+0000 mgr.y (mgr.14520) 1236 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:09.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:09 vm01 bash[20728]: audit 2026-03-09T16:27:07.693296+0000 mgr.y (mgr.14520) 1236 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:09.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:09 vm01 bash[20728]: audit 2026-03-09T16:27:07.693296+0000 mgr.y (mgr.14520) 1236 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:10 vm09 bash[22983]: cluster 2026-03-09T16:27:09.131287+0000 mgr.y (mgr.14520) 1237 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:10 vm09 bash[22983]: cluster 2026-03-09T16:27:09.131287+0000 mgr.y (mgr.14520) 1237 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:10 vm09 bash[22983]: cluster 2026-03-09T16:27:09.588991+0000 mon.a (mon.0) 3969 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_objects: 10) 2026-03-09T16:27:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:10 vm09 bash[22983]: cluster 2026-03-09T16:27:09.588991+0000 mon.a (mon.0) 3969 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_objects: 10) 2026-03-09T16:27:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:10 vm09 bash[22983]: cluster 2026-03-09T16:27:09.589262+0000 mon.a (mon.0) 3970 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:10 vm09 bash[22983]: cluster 2026-03-09T16:27:09.589262+0000 mon.a (mon.0) 3970 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:10 vm09 bash[22983]: cluster 2026-03-09T16:27:09.606440+0000 mon.a (mon.0) 3971 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-09T16:27:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:10 vm09 bash[22983]: cluster 2026-03-09T16:27:09.606440+0000 mon.a (mon.0) 3971 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:10 vm01 bash[28152]: cluster 2026-03-09T16:27:09.131287+0000 mgr.y (mgr.14520) 1237 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:10 vm01 bash[28152]: cluster 
2026-03-09T16:27:09.131287+0000 mgr.y (mgr.14520) 1237 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:10 vm01 bash[28152]: cluster 2026-03-09T16:27:09.588991+0000 mon.a (mon.0) 3969 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_objects: 10) 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:10 vm01 bash[28152]: cluster 2026-03-09T16:27:09.588991+0000 mon.a (mon.0) 3969 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_objects: 10) 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:10 vm01 bash[28152]: cluster 2026-03-09T16:27:09.589262+0000 mon.a (mon.0) 3970 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:10 vm01 bash[28152]: cluster 2026-03-09T16:27:09.589262+0000 mon.a (mon.0) 3970 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:10 vm01 bash[28152]: cluster 2026-03-09T16:27:09.606440+0000 mon.a (mon.0) 3971 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:10 vm01 bash[28152]: cluster 2026-03-09T16:27:09.606440+0000 mon.a (mon.0) 3971 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:10 vm01 bash[20728]: cluster 2026-03-09T16:27:09.131287+0000 mgr.y (mgr.14520) 1237 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:10 vm01 bash[20728]: cluster 2026-03-09T16:27:09.131287+0000 mgr.y (mgr.14520) 1237 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:10 vm01 bash[20728]: cluster 2026-03-09T16:27:09.588991+0000 mon.a (mon.0) 3969 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_objects: 10) 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:10 vm01 bash[20728]: cluster 2026-03-09T16:27:09.588991+0000 mon.a (mon.0) 3969 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_objects: 10) 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:10 vm01 bash[20728]: cluster 2026-03-09T16:27:09.589262+0000 mon.a (mon.0) 3970 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:10.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:10 vm01 bash[20728]: cluster 2026-03-09T16:27:09.589262+0000 mon.a (mon.0) 3970 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:10.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:10 vm01 bash[20728]: cluster 2026-03-09T16:27:09.606440+0000 mon.a (mon.0) 3971 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-09T16:27:10.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:10 vm01 bash[20728]: cluster 2026-03-09T16:27:09.606440+0000 mon.a (mon.0) 3971 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 
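Note on the entries above: the workunit on client.0 is exercising pool quotas. The UUID-named pool had max_objects set to 10, the write that exceeded it made mon.a log "reached quota's max_objects: 10", and the monitor then raised the POOL_FULL health check. A minimal manual reproduction of that sequence, using an illustrative pool name "quota-test" instead of the UUID-named pool the test creates (writes past the limit are only blocked once the monitor has flagged the pool full, so the last put may stall rather than fail immediately):
    ceph osd pool create quota-test 8
    ceph osd pool application enable quota-test rados
    ceph osd pool set-quota quota-test max_objects 10
    for i in $(seq 1 11); do rados -p quota-test put obj-$i /etc/hosts; done
    ceph health detail    # expect a POOL_FULL warning for quota-test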
2026-03-09T16:27:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:12 vm09 bash[22983]: cluster 2026-03-09T16:27:11.131719+0000 mgr.y (mgr.14520) 1238 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:12 vm09 bash[22983]: cluster 2026-03-09T16:27:11.131719+0000 mgr.y (mgr.14520) 1238 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:12.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:12 vm01 bash[28152]: cluster 2026-03-09T16:27:11.131719+0000 mgr.y (mgr.14520) 1238 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:12.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:12 vm01 bash[28152]: cluster 2026-03-09T16:27:11.131719+0000 mgr.y (mgr.14520) 1238 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:12.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:12 vm01 bash[20728]: cluster 2026-03-09T16:27:11.131719+0000 mgr.y (mgr.14520) 1238 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:12.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:12 vm01 bash[20728]: cluster 2026-03-09T16:27:11.131719+0000 mgr.y (mgr.14520) 1238 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:13.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:27:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:27:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:12.676879+0000 mon.a (mon.0) 3972 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:12.676879+0000 mon.a (mon.0) 3972 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:12.682312+0000 mon.a (mon.0) 3973 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:12.682312+0000 mon.a (mon.0) 3973 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:12.892643+0000 mon.a (mon.0) 3974 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:12.892643+0000 mon.a (mon.0) 3974 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:12.898461+0000 mon.a (mon.0) 3975 : 
audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:12.898461+0000 mon.a (mon.0) 3975 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: cluster 2026-03-09T16:27:13.132131+0000 mgr.y (mgr.14520) 1239 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: cluster 2026-03-09T16:27:13.132131+0000 mgr.y (mgr.14520) 1239 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:13.222966+0000 mon.a (mon.0) 3976 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:13.222966+0000 mon.a (mon.0) 3976 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:13.223801+0000 mon.a (mon.0) 3977 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:13.223801+0000 mon.a (mon.0) 3977 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:13.229502+0000 mon.a (mon.0) 3978 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:13 vm09 bash[22983]: audit 2026-03-09T16:27:13.229502+0000 mon.a (mon.0) 3978 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:12.676879+0000 mon.a (mon.0) 3972 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:12.676879+0000 mon.a (mon.0) 3972 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:12.682312+0000 mon.a (mon.0) 3973 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:12.682312+0000 mon.a (mon.0) 3973 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 
2026-03-09T16:27:12.892643+0000 mon.a (mon.0) 3974 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:12.892643+0000 mon.a (mon.0) 3974 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:12.898461+0000 mon.a (mon.0) 3975 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:12.898461+0000 mon.a (mon.0) 3975 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: cluster 2026-03-09T16:27:13.132131+0000 mgr.y (mgr.14520) 1239 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: cluster 2026-03-09T16:27:13.132131+0000 mgr.y (mgr.14520) 1239 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:13.222966+0000 mon.a (mon.0) 3976 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:13.222966+0000 mon.a (mon.0) 3976 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:13.223801+0000 mon.a (mon.0) 3977 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:13.223801+0000 mon.a (mon.0) 3977 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:13.229502+0000 mon.a (mon.0) 3978 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:13 vm01 bash[28152]: audit 2026-03-09T16:27:13.229502+0000 mon.a (mon.0) 3978 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:12.676879+0000 mon.a (mon.0) 3972 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:12.676879+0000 mon.a (mon.0) 3972 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:12.682312+0000 mon.a (mon.0) 3973 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:12.682312+0000 mon.a (mon.0) 3973 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:12.892643+0000 mon.a (mon.0) 3974 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:12.892643+0000 mon.a (mon.0) 3974 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:12.898461+0000 mon.a (mon.0) 3975 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:12.898461+0000 mon.a (mon.0) 3975 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: cluster 2026-03-09T16:27:13.132131+0000 mgr.y (mgr.14520) 1239 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: cluster 2026-03-09T16:27:13.132131+0000 mgr.y (mgr.14520) 1239 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:13.222966+0000 mon.a (mon.0) 3976 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:13.222966+0000 mon.a (mon.0) 3976 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:13.223801+0000 mon.a (mon.0) 3977 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:13.223801+0000 mon.a (mon.0) 3977 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:13.229502+0000 mon.a (mon.0) 3978 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:14.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:13 vm01 bash[20728]: audit 2026-03-09T16:27:13.229502+0000 mon.a (mon.0) 3978 : audit [INF] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:15 vm09 bash[22983]: audit 2026-03-09T16:27:14.997147+0000 mon.a (mon.0) 3979 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:15 vm09 bash[22983]: audit 2026-03-09T16:27:14.997147+0000 mon.a (mon.0) 3979 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:15 vm09 bash[22983]: audit 2026-03-09T16:27:15.000091+0000 mon.a (mon.0) 3980 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:15 vm09 bash[22983]: audit 2026-03-09T16:27:15.000091+0000 mon.a (mon.0) 3980 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:15 vm09 bash[22983]: cluster 2026-03-09T16:27:15.132757+0000 mgr.y (mgr.14520) 1240 : cluster [DBG] pgmap v1678: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:16.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:15 vm09 bash[22983]: cluster 2026-03-09T16:27:15.132757+0000 mgr.y (mgr.14520) 1240 : cluster [DBG] pgmap v1678: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:16.422 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:16 vm01 bash[28152]: audit 2026-03-09T16:27:14.997147+0000 mon.a (mon.0) 3979 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:16 vm01 bash[28152]: audit 2026-03-09T16:27:14.997147+0000 mon.a (mon.0) 3979 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:16 vm01 bash[28152]: audit 2026-03-09T16:27:15.000091+0000 mon.a (mon.0) 3980 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:16 vm01 bash[28152]: audit 2026-03-09T16:27:15.000091+0000 mon.a (mon.0) 3980 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:16 vm01 bash[28152]: cluster 2026-03-09T16:27:15.132757+0000 mgr.y (mgr.14520) 1240 : cluster [DBG] pgmap v1678: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:16.423 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:16 vm01 bash[28152]: cluster 2026-03-09T16:27:15.132757+0000 mgr.y (mgr.14520) 1240 : cluster [DBG] pgmap v1678: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:15 vm01 bash[20728]: audit 2026-03-09T16:27:14.997147+0000 mon.a (mon.0) 3979 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:16.423 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:15 vm01 bash[20728]: audit 2026-03-09T16:27:14.997147+0000 mon.a (mon.0) 3979 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:27:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:15 vm01 bash[20728]: audit 2026-03-09T16:27:15.000091+0000 mon.a (mon.0) 3980 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:15 vm01 bash[20728]: audit 2026-03-09T16:27:15.000091+0000 mon.a (mon.0) 3980 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:15 vm01 bash[20728]: cluster 2026-03-09T16:27:15.132757+0000 mgr.y (mgr.14520) 1240 : cluster [DBG] pgmap v1678: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:16.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:15 vm01 bash[20728]: cluster 2026-03-09T16:27:15.132757+0000 mgr.y (mgr.14520) 1240 : cluster [DBG] pgmap v1678: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:18.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:27:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:27:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:18 vm09 bash[22983]: cluster 2026-03-09T16:27:17.133172+0000 mgr.y (mgr.14520) 1241 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:18 vm09 bash[22983]: cluster 2026-03-09T16:27:17.133172+0000 mgr.y (mgr.14520) 1241 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:18.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:18 vm01 bash[28152]: cluster 2026-03-09T16:27:17.133172+0000 mgr.y (mgr.14520) 1241 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:18.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:18 vm01 bash[28152]: cluster 2026-03-09T16:27:17.133172+0000 mgr.y (mgr.14520) 1241 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:18.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:18 vm01 bash[20728]: cluster 2026-03-09T16:27:17.133172+0000 mgr.y (mgr.14520) 1241 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:18 vm01 bash[20728]: cluster 2026-03-09T16:27:17.133172+0000 mgr.y (mgr.14520) 1241 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:19 vm09 bash[22983]: audit 2026-03-09T16:27:17.698601+0000 mgr.y (mgr.14520) 1242 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
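The recurring ~10 s lines in this stretch (Prometheus scrapes of the mgr /metrics endpoint answering 503, the iscsi gateway's "there is no tcmu-runner data available" debug message, and the mgr's periodic "osd blocklist ls" / "service status" dispatches) are background activity from the cephadm-deployed monitoring and iscsi daemons, not output of the workunit itself. A 503 from the mgr usually means the prometheus module is enabled but has no metrics cached yet; a hedged way to check it from the admin node (default module port 9283 assumed, host placeholder):
    ceph mgr module ls | grep -A2 prometheus
    ceph mgr services
    curl -s -o /dev/null -w '%{http_code}\n' http://<mgr-host>:9283/metrics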
2026-03-09T16:27:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:19 vm09 bash[22983]: audit 2026-03-09T16:27:17.698601+0000 mgr.y (mgr.14520) 1242 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:19.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:19 vm01 bash[28152]: audit 2026-03-09T16:27:17.698601+0000 mgr.y (mgr.14520) 1242 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:19.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:19 vm01 bash[28152]: audit 2026-03-09T16:27:17.698601+0000 mgr.y (mgr.14520) 1242 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:19.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:19 vm01 bash[20728]: audit 2026-03-09T16:27:17.698601+0000 mgr.y (mgr.14520) 1242 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:19.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:19 vm01 bash[20728]: audit 2026-03-09T16:27:17.698601+0000 mgr.y (mgr.14520) 1242 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:20 vm09 bash[22983]: cluster 2026-03-09T16:27:19.134039+0000 mgr.y (mgr.14520) 1243 : cluster [DBG] pgmap v1680: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:20 vm09 bash[22983]: cluster 2026-03-09T16:27:19.134039+0000 mgr.y (mgr.14520) 1243 : cluster [DBG] pgmap v1680: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:20.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:20 vm01 bash[28152]: cluster 2026-03-09T16:27:19.134039+0000 mgr.y (mgr.14520) 1243 : cluster [DBG] pgmap v1680: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:20.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:20 vm01 bash[28152]: cluster 2026-03-09T16:27:19.134039+0000 mgr.y (mgr.14520) 1243 : cluster [DBG] pgmap v1680: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:20.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:20 vm01 bash[20728]: cluster 2026-03-09T16:27:19.134039+0000 mgr.y (mgr.14520) 1243 : cluster [DBG] pgmap v1680: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:20.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:20 vm01 bash[20728]: cluster 2026-03-09T16:27:19.134039+0000 mgr.y (mgr.14520) 1243 : cluster [DBG] pgmap v1680: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:22 vm09 bash[22983]: cluster 2026-03-09T16:27:21.134406+0000 mgr.y (mgr.14520) 1244 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:22 vm09 bash[22983]: 
cluster 2026-03-09T16:27:21.134406+0000 mgr.y (mgr.14520) 1244 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:22.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:22 vm01 bash[28152]: cluster 2026-03-09T16:27:21.134406+0000 mgr.y (mgr.14520) 1244 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:22.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:22 vm01 bash[28152]: cluster 2026-03-09T16:27:21.134406+0000 mgr.y (mgr.14520) 1244 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:22 vm01 bash[20728]: cluster 2026-03-09T16:27:21.134406+0000 mgr.y (mgr.14520) 1244 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:22 vm01 bash[20728]: cluster 2026-03-09T16:27:21.134406+0000 mgr.y (mgr.14520) 1244 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:23.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:27:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:27:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:27:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:24 vm09 bash[22983]: cluster 2026-03-09T16:27:23.134790+0000 mgr.y (mgr.14520) 1245 : cluster [DBG] pgmap v1682: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:24 vm09 bash[22983]: cluster 2026-03-09T16:27:23.134790+0000 mgr.y (mgr.14520) 1245 : cluster [DBG] pgmap v1682: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:24.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:24 vm01 bash[28152]: cluster 2026-03-09T16:27:23.134790+0000 mgr.y (mgr.14520) 1245 : cluster [DBG] pgmap v1682: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:24.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:24 vm01 bash[28152]: cluster 2026-03-09T16:27:23.134790+0000 mgr.y (mgr.14520) 1245 : cluster [DBG] pgmap v1682: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:24.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:24 vm01 bash[20728]: cluster 2026-03-09T16:27:23.134790+0000 mgr.y (mgr.14520) 1245 : cluster [DBG] pgmap v1682: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:24.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:24 vm01 bash[20728]: cluster 2026-03-09T16:27:23.134790+0000 mgr.y (mgr.14520) 1245 : cluster [DBG] pgmap v1682: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:26 vm09 bash[22983]: cluster 2026-03-09T16:27:25.135406+0000 mgr.y (mgr.14520) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s 
rd, 1 op/s 2026-03-09T16:27:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:26 vm09 bash[22983]: cluster 2026-03-09T16:27:25.135406+0000 mgr.y (mgr.14520) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:26.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:26 vm01 bash[28152]: cluster 2026-03-09T16:27:25.135406+0000 mgr.y (mgr.14520) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:26.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:26 vm01 bash[28152]: cluster 2026-03-09T16:27:25.135406+0000 mgr.y (mgr.14520) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:26.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:26 vm01 bash[20728]: cluster 2026-03-09T16:27:25.135406+0000 mgr.y (mgr.14520) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:26.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:26 vm01 bash[20728]: cluster 2026-03-09T16:27:25.135406+0000 mgr.y (mgr.14520) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:28.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:27:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:27:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:28 vm09 bash[22983]: cluster 2026-03-09T16:27:27.135696+0000 mgr.y (mgr.14520) 1247 : cluster [DBG] pgmap v1684: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:28 vm09 bash[22983]: cluster 2026-03-09T16:27:27.135696+0000 mgr.y (mgr.14520) 1247 : cluster [DBG] pgmap v1684: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:28 vm01 bash[28152]: cluster 2026-03-09T16:27:27.135696+0000 mgr.y (mgr.14520) 1247 : cluster [DBG] pgmap v1684: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:28 vm01 bash[28152]: cluster 2026-03-09T16:27:27.135696+0000 mgr.y (mgr.14520) 1247 : cluster [DBG] pgmap v1684: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:28.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:28 vm01 bash[20728]: cluster 2026-03-09T16:27:27.135696+0000 mgr.y (mgr.14520) 1247 : cluster [DBG] pgmap v1684: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:28.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:28 vm01 bash[20728]: cluster 2026-03-09T16:27:27.135696+0000 mgr.y (mgr.14520) 1247 : cluster [DBG] pgmap v1684: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:29 vm09 bash[22983]: audit 2026-03-09T16:27:27.705549+0000 mgr.y (mgr.14520) 1248 : audit [DBG] from='client.14496 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:29 vm09 bash[22983]: audit 2026-03-09T16:27:27.705549+0000 mgr.y (mgr.14520) 1248 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:29 vm01 bash[28152]: audit 2026-03-09T16:27:27.705549+0000 mgr.y (mgr.14520) 1248 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:29 vm01 bash[28152]: audit 2026-03-09T16:27:27.705549+0000 mgr.y (mgr.14520) 1248 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:29.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:29 vm01 bash[20728]: audit 2026-03-09T16:27:27.705549+0000 mgr.y (mgr.14520) 1248 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:29.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:29 vm01 bash[20728]: audit 2026-03-09T16:27:27.705549+0000 mgr.y (mgr.14520) 1248 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:30.088 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=131224 2026-03-09T16:27:30.088 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool set-quota 596e5e1f-ecde-406d-b4d0-afd8854e4a60 max_objects 100 2026-03-09T16:27:30.088 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put onemore /etc/passwd 2026-03-09T16:27:30.151 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 -- 192.168.123.101:0/4097158816 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f48b4101960 msgr2=0x7f48b4106510 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:27:30.152 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 --2- 192.168.123.101:0/4097158816 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f48b4101960 0x7f48b4106510 secure :-1 s=READY pgs=3084 cs=0 l=1 rev1=1 crypto rx=0x7f48a8009980 tx=0x7f48a801c840 comp rx=0 tx=0).stop 2026-03-09T16:27:30.152 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 -- 192.168.123.101:0/4097158816 shutdown_connections 2026-03-09T16:27:30.152 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 --2- 192.168.123.101:0/4097158816 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f48b4106c40 0x7f48b41075b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:30.152 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 --2- 192.168.123.101:0/4097158816 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f48b4101960 0x7f48b4106510 unknown :-1 s=CLOSED pgs=3084 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:30.152 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 --2- 192.168.123.101:0/4097158816 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f48b4100fb0 
0x7f48b4101390 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:30.152 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 -- 192.168.123.101:0/4097158816 >> 192.168.123.101:0/4097158816 conn(0x7f48b4078f80 msgr2=0x7f48b40ff230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:27:30.152 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 -- 192.168.123.101:0/4097158816 shutdown_connections 2026-03-09T16:27:30.152 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 -- 192.168.123.101:0/4097158816 wait complete. 2026-03-09T16:27:30.152 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 Processor -- start 2026-03-09T16:27:30.152 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 -- start start 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f48b4100fb0 0x7f48b41a96b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f48b4101960 0x7f48b41a9bf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f48b4106c40 0x7f48b41a3830 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f48b411b420 con 0x7f48b4101960 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f48b411b2a0 con 0x7f48b4106c40 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.155+0000 7f48bb6f5640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f48b411b5a0 con 0x7f48b4100fb0 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48b8c69640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f48b4101960 0x7f48b41a9bf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48b8c69640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f48b4101960 0x7f48b41a9bf0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:44098/0 (socket says 192.168.123.101:44098) 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48b8c69640 1 -- 192.168.123.101:0/484213698 learned_addr learned my addr 192.168.123.101:0/484213698 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:27:30.153 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48b8c69640 1 -- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f48b4100fb0 msgr2=0x7f48b41a96b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48b8c69640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f48b4100fb0 0x7f48b41a96b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48b8c69640 1 -- 192.168.123.101:0/484213698 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f48b4106c40 msgr2=0x7f48b41a3830 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48b9c6b640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f48b4106c40 0x7f48b41a3830 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48b8c69640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f48b4106c40 0x7f48b41a3830 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48b8c69640 1 -- 192.168.123.101:0/484213698 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f48b41a4060 con 0x7f48b4101960 2026-03-09T16:27:30.153 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48b8c69640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f48b4101960 0x7f48b41a9bf0 secure :-1 s=READY pgs=3085 cs=0 l=1 rev1=1 crypto rx=0x7f48a8009950 tx=0x7f48a80a5e40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:27:30.154 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48a27fc640 1 -- 192.168.123.101:0/484213698 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f48a8004310 con 0x7f48b4101960 2026-03-09T16:27:30.154 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48a27fc640 1 -- 192.168.123.101:0/484213698 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f48a80044b0 con 0x7f48b4101960 2026-03-09T16:27:30.154 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48a27fc640 1 -- 192.168.123.101:0/484213698 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f48a80ae860 con 0x7f48b4101960 2026-03-09T16:27:30.154 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48bb6f5640 1 -- 192.168.123.101:0/484213698 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f48b41a42f0 con 0x7f48b4101960 2026-03-09T16:27:30.156 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48bb6f5640 1 -- 192.168.123.101:0/484213698 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 
0x7f48b41b0510 con 0x7f48b4101960 2026-03-09T16:27:30.156 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48a27fc640 1 -- 192.168.123.101:0/484213698 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f48a80bf070 con 0x7f48b4101960 2026-03-09T16:27:30.156 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48a27fc640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f488c0777e0 0x7f488c079ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:27:30.156 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48a27fc640 1 -- 192.168.123.101:0/484213698 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(763..763 src has 257..763) ==== 8985+0+0 (secure 0 0 0) 0x7f48a8134ea0 con 0x7f48b4101960 2026-03-09T16:27:30.156 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48b946a640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f488c0777e0 0x7f488c079ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:27:30.156 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.159+0000 7f48bb6f5640 1 -- 192.168.123.101:0/484213698 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f487c005190 con 0x7f48b4101960 2026-03-09T16:27:30.159 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.163+0000 7f48a27fc640 1 -- 192.168.123.101:0/484213698 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=764}) -- 0x7f488c083260 con 0x7f48b4101960 2026-03-09T16:27:30.159 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.163+0000 7f48a27fc640 1 -- 192.168.123.101:0/484213698 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f48a8016570 con 0x7f48b4101960 2026-03-09T16:27:30.160 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.163+0000 7f48b946a640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f488c0777e0 0x7f488c079ca0 secure :-1 s=READY pgs=4275 cs=0 l=1 rev1=1 crypto rx=0x7f48a4005ed0 tx=0x7f48a4005e40 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:27:30.255 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:30.259+0000 7f48bb6f5640 1 -- 192.168.123.101:0/484213698 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"} v 0) -- 0x7f487c005480 con 0x7f48b4101960 2026-03-09T16:27:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:30 vm09 bash[22983]: cluster 2026-03-09T16:27:29.136202+0000 mgr.y (mgr.14520) 1249 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:30 vm09 bash[22983]: cluster 2026-03-09T16:27:29.136202+0000 mgr.y (mgr.14520) 1249 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
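At this point the workunit raises the quota so the write held back by the full pool can complete: the client.0 stderr above shows "ceph osd pool set-quota ... max_objects 100" followed by "rados ... put onemore /etc/passwd", and the mon_command_ack further down confirms "set-quota max_objects = 100". The equivalent manual steps, with <pool> standing in for the test's UUID-named pool:
    ceph osd pool set-quota <pool> max_objects 100
    ceph osd pool get-quota <pool>            # should now report max objects: 100
    rados -p <pool> put onemore /etc/passwd   # the write blocked at quota 10
    ceph health detail                        # POOL_FULL should clear once the mon re-evaluates the pool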
2026-03-09T16:27:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:30 vm09 bash[22983]: audit 2026-03-09T16:27:30.005959+0000 mon.a (mon.0) 3981 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:30 vm09 bash[22983]: audit 2026-03-09T16:27:30.005959+0000 mon.a (mon.0) 3981 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:30 vm01 bash[28152]: cluster 2026-03-09T16:27:29.136202+0000 mgr.y (mgr.14520) 1249 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:30 vm01 bash[28152]: cluster 2026-03-09T16:27:29.136202+0000 mgr.y (mgr.14520) 1249 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:30 vm01 bash[28152]: audit 2026-03-09T16:27:30.005959+0000 mon.a (mon.0) 3981 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:30 vm01 bash[28152]: audit 2026-03-09T16:27:30.005959+0000 mon.a (mon.0) 3981 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:30.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:30 vm01 bash[20728]: cluster 2026-03-09T16:27:29.136202+0000 mgr.y (mgr.14520) 1249 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:30.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:30 vm01 bash[20728]: cluster 2026-03-09T16:27:29.136202+0000 mgr.y (mgr.14520) 1249 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:30.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:30 vm01 bash[20728]: audit 2026-03-09T16:27:30.005959+0000 mon.a (mon.0) 3981 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:30.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:30 vm01 bash[20728]: audit 2026-03-09T16:27:30.005959+0000 mon.a (mon.0) 3981 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:31.256 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:31.259+0000 7f48a27fc640 1 -- 192.168.123.101:0/484213698 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v764) ==== 225+0+0 (secure 0 0 0) 0x7f48a80ab2d0 con 0x7f48b4101960 2026-03-09T16:27:31.268 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:31.271+0000 7f48a27fc640 1 -- 192.168.123.101:0/484213698 <== mon.0 
v2:192.168.123.101:3300/0 8 ==== osd_map(764..764 src has 257..764) ==== 628+0+0 (secure 0 0 0) 0x7f48a80f91f0 con 0x7f48b4101960 2026-03-09T16:27:31.268 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:31.271+0000 7f48a27fc640 1 -- 192.168.123.101:0/484213698 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=765}) -- 0x7f488c084240 con 0x7f48b4101960 2026-03-09T16:27:31.313 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:31.319+0000 7f48bb6f5640 1 -- 192.168.123.101:0/484213698 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"} v 0) -- 0x7f487c004910 con 0x7f48b4101960 2026-03-09T16:27:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:31 vm09 bash[22983]: audit 2026-03-09T16:27:30.264146+0000 mon.a (mon.0) 3982 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:31.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:31 vm09 bash[22983]: audit 2026-03-09T16:27:30.264146+0000 mon.a (mon.0) 3982 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:31.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:31 vm01 bash[28152]: audit 2026-03-09T16:27:30.264146+0000 mon.a (mon.0) 3982 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:31.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:31 vm01 bash[28152]: audit 2026-03-09T16:27:30.264146+0000 mon.a (mon.0) 3982 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:31.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:31 vm01 bash[20728]: audit 2026-03-09T16:27:30.264146+0000 mon.a (mon.0) 3982 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:31.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:31 vm01 bash[20728]: audit 2026-03-09T16:27:30.264146+0000 mon.a (mon.0) 3982 : audit [INF] from='client.? 
192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:32.273 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48a27fc640 1 -- 192.168.123.101:0/484213698 <== mon.0 v2:192.168.123.101:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v765) ==== 225+0+0 (secure 0 0 0) 0x7f48a8101200 con 0x7f48b4101960 2026-03-09T16:27:32.273 INFO:tasks.workunit.client.0.vm01.stderr:set-quota max_objects = 100 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 2026-03-09T16:27:32.275 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 -- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f488c0777e0 msgr2=0x7f488c079ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:27:32.276 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f488c0777e0 0x7f488c079ca0 secure :-1 s=READY pgs=4275 cs=0 l=1 rev1=1 crypto rx=0x7f48a4005ed0 tx=0x7f48a4005e40 comp rx=0 tx=0).stop 2026-03-09T16:27:32.276 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 -- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f48b4101960 msgr2=0x7f48b41a9bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:27:32.276 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f48b4101960 0x7f48b41a9bf0 secure :-1 s=READY pgs=3085 cs=0 l=1 rev1=1 crypto rx=0x7f48a8009950 tx=0x7f48a80a5e40 comp rx=0 tx=0).stop 2026-03-09T16:27:32.276 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 -- 192.168.123.101:0/484213698 shutdown_connections 2026-03-09T16:27:32.276 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f488c0777e0 0x7f488c079ca0 unknown :-1 s=CLOSED pgs=4275 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:32.276 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f48b4106c40 0x7f48b41a3830 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:32.276 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f48b4101960 0x7f48b41a9bf0 unknown :-1 s=CLOSED pgs=3085 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:32.276 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 --2- 192.168.123.101:0/484213698 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f48b4100fb0 0x7f48b41a96b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
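(Annotation: the mon_command_ack above confirms "set-quota max_objects = 100" for the test pool. To read a quota back, "ceph osd pool get-quota <pool>" is the usual check; a minimal sketch follows, assuming the ceph CLI is on PATH with admin credentials. No JSON field names are assumed -- it just pretty-prints whatever the command returns.)

    # Sketch: read a pool's quota back after set-quota, shelling out to the ceph CLI.
    # Assumes `ceph` is on PATH with admin credentials; pool name is from the log above.
    import json, subprocess

    pool = "596e5e1f-ecde-406d-b4d0-afd8854e4a60"
    out = subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", pool, "--format", "json"])
    print(json.dumps(json.loads(out), indent=2))  # shows the currently set max_objects / max_bytes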
2026-03-09T16:27:32.276 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 -- 192.168.123.101:0/484213698 >> 192.168.123.101:0/484213698 conn(0x7f48b4078f80 msgr2=0x7f48b40ff040 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:27:32.276 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 -- 192.168.123.101:0/484213698 shutdown_connections 2026-03-09T16:27:32.276 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:32.279+0000 7f48bb6f5640 1 -- 192.168.123.101:0/484213698 wait complete. 2026-03-09T16:27:32.296 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 131224 2026-03-09T16:27:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:32 vm09 bash[22983]: cluster 2026-03-09T16:27:31.136474+0000 mgr.y (mgr.14520) 1250 : cluster [DBG] pgmap v1686: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:32 vm09 bash[22983]: cluster 2026-03-09T16:27:31.136474+0000 mgr.y (mgr.14520) 1250 : cluster [DBG] pgmap v1686: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:32 vm09 bash[22983]: audit 2026-03-09T16:27:31.264382+0000 mon.a (mon.0) 3983 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:32 vm09 bash[22983]: audit 2026-03-09T16:27:31.264382+0000 mon.a (mon.0) 3983 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:32 vm09 bash[22983]: cluster 2026-03-09T16:27:31.272523+0000 mon.a (mon.0) 3984 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T16:27:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:32 vm09 bash[22983]: cluster 2026-03-09T16:27:31.272523+0000 mon.a (mon.0) 3984 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T16:27:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:32 vm09 bash[22983]: audit 2026-03-09T16:27:31.321786+0000 mon.a (mon.0) 3985 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:32 vm09 bash[22983]: audit 2026-03-09T16:27:31.321786+0000 mon.a (mon.0) 3985 : audit [INF] from='client.? 
192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:32 vm01 bash[28152]: cluster 2026-03-09T16:27:31.136474+0000 mgr.y (mgr.14520) 1250 : cluster [DBG] pgmap v1686: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:32 vm01 bash[28152]: cluster 2026-03-09T16:27:31.136474+0000 mgr.y (mgr.14520) 1250 : cluster [DBG] pgmap v1686: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:32 vm01 bash[28152]: audit 2026-03-09T16:27:31.264382+0000 mon.a (mon.0) 3983 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:32 vm01 bash[28152]: audit 2026-03-09T16:27:31.264382+0000 mon.a (mon.0) 3983 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:32 vm01 bash[28152]: cluster 2026-03-09T16:27:31.272523+0000 mon.a (mon.0) 3984 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T16:27:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:32 vm01 bash[28152]: cluster 2026-03-09T16:27:31.272523+0000 mon.a (mon.0) 3984 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T16:27:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:32 vm01 bash[28152]: audit 2026-03-09T16:27:31.321786+0000 mon.a (mon.0) 3985 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:32 vm01 bash[28152]: audit 2026-03-09T16:27:31.321786+0000 mon.a (mon.0) 3985 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:32.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:32 vm01 bash[20728]: cluster 2026-03-09T16:27:31.136474+0000 mgr.y (mgr.14520) 1250 : cluster [DBG] pgmap v1686: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:32.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:32 vm01 bash[20728]: cluster 2026-03-09T16:27:31.136474+0000 mgr.y (mgr.14520) 1250 : cluster [DBG] pgmap v1686: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:32.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:32 vm01 bash[20728]: audit 2026-03-09T16:27:31.264382+0000 mon.a (mon.0) 3983 : audit [INF] from='client.? 
192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:32.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:32 vm01 bash[20728]: audit 2026-03-09T16:27:31.264382+0000 mon.a (mon.0) 3983 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:32.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:32 vm01 bash[20728]: cluster 2026-03-09T16:27:31.272523+0000 mon.a (mon.0) 3984 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T16:27:32.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:32 vm01 bash[20728]: cluster 2026-03-09T16:27:31.272523+0000 mon.a (mon.0) 3984 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-09T16:27:32.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:32 vm01 bash[20728]: audit 2026-03-09T16:27:31.321786+0000 mon.a (mon.0) 3985 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:32.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:32 vm01 bash[20728]: audit 2026-03-09T16:27:31.321786+0000 mon.a (mon.0) 3985 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T16:27:33.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:27:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:27:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:27:33.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:33 vm09 bash[22983]: audit 2026-03-09T16:27:32.281598+0000 mon.a (mon.0) 3986 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:33.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:33 vm09 bash[22983]: audit 2026-03-09T16:27:32.281598+0000 mon.a (mon.0) 3986 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:33.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:33 vm09 bash[22983]: cluster 2026-03-09T16:27:32.298375+0000 mon.a (mon.0) 3987 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T16:27:33.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:33 vm09 bash[22983]: cluster 2026-03-09T16:27:32.298375+0000 mon.a (mon.0) 3987 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T16:27:33.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:33 vm01 bash[28152]: audit 2026-03-09T16:27:32.281598+0000 mon.a (mon.0) 3986 : audit [INF] from='client.? 
192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:33.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:33 vm01 bash[28152]: audit 2026-03-09T16:27:32.281598+0000 mon.a (mon.0) 3986 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:33.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:33 vm01 bash[28152]: cluster 2026-03-09T16:27:32.298375+0000 mon.a (mon.0) 3987 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T16:27:33.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:33 vm01 bash[28152]: cluster 2026-03-09T16:27:32.298375+0000 mon.a (mon.0) 3987 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T16:27:33.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:33 vm01 bash[20728]: audit 2026-03-09T16:27:32.281598+0000 mon.a (mon.0) 3986 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:33.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:33 vm01 bash[20728]: audit 2026-03-09T16:27:32.281598+0000 mon.a (mon.0) 3986 : audit [INF] from='client.? 192.168.123.101:0/484213698' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "100"}]': finished 2026-03-09T16:27:33.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:33 vm01 bash[20728]: cluster 2026-03-09T16:27:32.298375+0000 mon.a (mon.0) 3987 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T16:27:33.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:33 vm01 bash[20728]: cluster 2026-03-09T16:27:32.298375+0000 mon.a (mon.0) 3987 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-09T16:27:34.600 INFO:tasks.workunit.client.0.vm01.stderr:+ [ 0 -ne 0 ] 2026-03-09T16:27:34.601 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-09T16:27:34.601 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put twomore /etc/passwd 2026-03-09T16:27:34.626 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool set-quota 596e5e1f-ecde-406d-b4d0-afd8854e4a60 max_bytes 100 2026-03-09T16:27:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:34 vm09 bash[22983]: cluster 2026-03-09T16:27:33.136791+0000 mgr.y (mgr.14520) 1251 : cluster [DBG] pgmap v1689: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:27:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:34 vm09 bash[22983]: cluster 2026-03-09T16:27:33.136791+0000 mgr.y (mgr.14520) 1251 : cluster [DBG] pgmap v1689: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:27:34.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:34 vm01 bash[20728]: cluster 2026-03-09T16:27:33.136791+0000 mgr.y (mgr.14520) 1251 : cluster [DBG] pgmap v1689: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:27:34.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:34 vm01 bash[20728]: cluster 2026-03-09T16:27:33.136791+0000 
mgr.y (mgr.14520) 1251 : cluster [DBG] pgmap v1689: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:27:34.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:34 vm01 bash[28152]: cluster 2026-03-09T16:27:33.136791+0000 mgr.y (mgr.14520) 1251 : cluster [DBG] pgmap v1689: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:27:34.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:34 vm01 bash[28152]: cluster 2026-03-09T16:27:33.136791+0000 mgr.y (mgr.14520) 1251 : cluster [DBG] pgmap v1689: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:27:34.685 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- 192.168.123.101:0/3610039852 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6220108a00 msgr2=0x7f6220108de0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:27:34.685 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 --2- 192.168.123.101:0/3610039852 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6220108a00 0x7f6220108de0 secure :-1 s=READY pgs=3087 cs=0 l=1 rev1=1 crypto rx=0x7f6210009a00 tx=0x7f621001c880 comp rx=0 tx=0).stop 2026-03-09T16:27:34.685 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- 192.168.123.101:0/3610039852 shutdown_connections 2026-03-09T16:27:34.685 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 --2- 192.168.123.101:0/3610039852 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f622006ba20 0x7f622010f8d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:34.685 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 --2- 192.168.123.101:0/3610039852 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f622006b080 0x7f622006b4e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:34.685 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 --2- 192.168.123.101:0/3610039852 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6220108a00 0x7f6220108de0 unknown :-1 s=CLOSED pgs=3087 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:34.685 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- 192.168.123.101:0/3610039852 >> 192.168.123.101:0/3610039852 conn(0x7f62200fd5c0 msgr2=0x7f62200ff9e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:27:34.685 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- 192.168.123.101:0/3610039852 shutdown_connections 2026-03-09T16:27:34.685 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- 192.168.123.101:0/3610039852 wait complete. 
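(Annotation: the shell trace above -- "rados -p ... put twomore /etc/passwd", "ceph osd pool set-quota ... max_bytes 100", then "sleep 30" -- is the max_bytes half of rados/test_pool_quota.sh: write past a tiny byte quota, then wait for the monitor to flag the pool. A minimal sketch of that sequence and the health check it should trip, using the same CLI commands that appear in the trace; the pool name here is a throwaway placeholder, and POOL_FULL is the warning the journalctl lines report further down.)

    # Sketch of the quota sequence the workunit runs, using the CLI commands seen in the trace.
    # The pool name is a throwaway placeholder for illustration.
    import subprocess, time

    pool = "quota-test"
    run = lambda *args: subprocess.run(args, check=True)

    run("ceph", "osd", "pool", "set-quota", pool, "max_bytes", "100")   # tiny byte quota
    run("rados", "-p", pool, "put", "obj1", "/etc/passwd")              # write past it
    time.sleep(30)  # give the mgr/mon time to notice, as the workunit does

    health = subprocess.check_output(["ceph", "health", "detail"]).decode()
    print("POOL_FULL" in health)  # expect the POOL_FULL warning seen later in this log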
2026-03-09T16:27:34.685 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 Processor -- start 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- start start 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f622006b080 0x7f622019f120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f622006ba20 0x7f622019f660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6220108a00 0x7f62201a39f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f6220102450 con 0x7f6220108a00 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f62201022d0 con 0x7f622006ba20 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f62201025d0 con 0x7f622006b080 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621ffff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6220108a00 0x7f62201a39f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621ffff640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6220108a00 0x7f62201a39f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:44142/0 (socket says 192.168.123.101:44142) 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621ffff640 1 -- 192.168.123.101:0/2139236076 learned_addr learned my addr 192.168.123.101:0/2139236076 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621effd640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f622006ba20 0x7f622019f660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621ffff640 1 -- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f622006b080 msgr2=0x7f622019f120 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 
7f621f7fe640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f622006b080 0x7f622019f120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621ffff640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f622006b080 0x7f622019f120 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621ffff640 1 -- 192.168.123.101:0/2139236076 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f622006ba20 msgr2=0x7f622019f660 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621ffff640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f622006ba20 0x7f622019f660 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621ffff640 1 -- 192.168.123.101:0/2139236076 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f62201a4170 con 0x7f6220108a00 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621effd640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f622006ba20 0x7f622019f660 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T16:27:34.686 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621f7fe640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f622006b080 0x7f622019f120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T16:27:34.687 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621ffff640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6220108a00 0x7f62201a39f0 secure :-1 s=READY pgs=3088 cs=0 l=1 rev1=1 crypto rx=0x7f621400ef30 tx=0x7f621400c550 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:27:34.687 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621cff9640 1 -- 192.168.123.101:0/2139236076 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6214019070 con 0x7f6220108a00 2026-03-09T16:27:34.687 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621cff9640 1 -- 192.168.123.101:0/2139236076 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f62140092d0 con 0x7f6220108a00 2026-03-09T16:27:34.687 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- 192.168.123.101:0/2139236076 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f62201a4400 con 0x7f6220108a00 2026-03-09T16:27:34.687 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621cff9640 1 -- 192.168.123.101:0/2139236076 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f62140047b0 con 0x7f6220108a00 2026-03-09T16:27:34.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- 192.168.123.101:0/2139236076 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f62201abca0 con 0x7f6220108a00 2026-03-09T16:27:34.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f621cff9640 1 -- 192.168.123.101:0/2139236076 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f6214005ce0 con 0x7f6220108a00 2026-03-09T16:27:34.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.691+0000 7f6225e2d640 1 -- 192.168.123.101:0/2139236076 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f61e4005190 con 0x7f6220108a00 2026-03-09T16:27:34.691 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.695+0000 7f621cff9640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f61f40776d0 0x7f61f4079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:27:34.691 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.695+0000 7f621cff9640 1 -- 192.168.123.101:0/2139236076 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(766..766 src has 257..766) ==== 8985+0+0 (secure 0 0 0) 0x7f621409d170 con 0x7f6220108a00 2026-03-09T16:27:34.691 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.695+0000 7f621f7fe640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f61f40776d0 0x7f61f4079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:27:34.691 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.695+0000 7f621cff9640 1 -- 192.168.123.101:0/2139236076 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 
(secure 0 0 0) 0x7f6214010040 con 0x7f6220108a00 2026-03-09T16:27:34.692 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.695+0000 7f621f7fe640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f61f40776d0 0x7f61f4079b90 secure :-1 s=READY pgs=4277 cs=0 l=1 rev1=1 crypto rx=0x7f621001c6b0 tx=0x7f62100023e0 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:27:34.783 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:34.787+0000 7f6225e2d640 1 -- 192.168.123.101:0/2139236076 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"} v 0) -- 0x7f61e4005480 con 0x7f6220108a00 2026-03-09T16:27:35.596 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:35.599+0000 7f621cff9640 1 -- 192.168.123.101:0/2139236076 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v767) ==== 221+0+0 (secure 0 0 0) 0x7f6214066060 con 0x7f6220108a00 2026-03-09T16:27:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:35 vm09 bash[22983]: cluster 2026-03-09T16:27:34.593094+0000 mon.a (mon.0) 3988 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:27:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:35 vm09 bash[22983]: cluster 2026-03-09T16:27:34.593094+0000 mon.a (mon.0) 3988 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:27:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:35 vm09 bash[22983]: cluster 2026-03-09T16:27:34.593354+0000 mon.a (mon.0) 3989 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:27:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:35 vm09 bash[22983]: cluster 2026-03-09T16:27:34.593354+0000 mon.a (mon.0) 3989 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:27:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:35 vm09 bash[22983]: cluster 2026-03-09T16:27:34.610580+0000 mon.a (mon.0) 3990 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T16:27:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:35 vm09 bash[22983]: cluster 2026-03-09T16:27:34.610580+0000 mon.a (mon.0) 3990 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T16:27:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:35 vm09 bash[22983]: audit 2026-03-09T16:27:34.791800+0000 mon.a (mon.0) 3991 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:35 vm09 bash[22983]: audit 2026-03-09T16:27:34.791800+0000 mon.a (mon.0) 3991 : audit [INF] from='client.? 
192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:35 vm09 bash[22983]: cluster 2026-03-09T16:27:35.137107+0000 mgr.y (mgr.14520) 1252 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:35 vm09 bash[22983]: cluster 2026-03-09T16:27:35.137107+0000 mgr.y (mgr.14520) 1252 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:35.654 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:35.659+0000 7f6225e2d640 1 -- 192.168.123.101:0/2139236076 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"} v 0) -- 0x7f61e4004910 con 0x7f6220108a00 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:35 vm01 bash[28152]: cluster 2026-03-09T16:27:34.593094+0000 mon.a (mon.0) 3988 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:35 vm01 bash[28152]: cluster 2026-03-09T16:27:34.593094+0000 mon.a (mon.0) 3988 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:35 vm01 bash[28152]: cluster 2026-03-09T16:27:34.593354+0000 mon.a (mon.0) 3989 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:35 vm01 bash[28152]: cluster 2026-03-09T16:27:34.593354+0000 mon.a (mon.0) 3989 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:35 vm01 bash[28152]: cluster 2026-03-09T16:27:34.610580+0000 mon.a (mon.0) 3990 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:35 vm01 bash[28152]: cluster 2026-03-09T16:27:34.610580+0000 mon.a (mon.0) 3990 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:35 vm01 bash[28152]: audit 2026-03-09T16:27:34.791800+0000 mon.a (mon.0) 3991 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:35 vm01 bash[28152]: audit 2026-03-09T16:27:34.791800+0000 mon.a (mon.0) 3991 : audit [INF] from='client.? 
192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:35 vm01 bash[28152]: cluster 2026-03-09T16:27:35.137107+0000 mgr.y (mgr.14520) 1252 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:35 vm01 bash[28152]: cluster 2026-03-09T16:27:35.137107+0000 mgr.y (mgr.14520) 1252 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:35 vm01 bash[20728]: cluster 2026-03-09T16:27:34.593094+0000 mon.a (mon.0) 3988 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:35 vm01 bash[20728]: cluster 2026-03-09T16:27:34.593094+0000 mon.a (mon.0) 3988 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:35 vm01 bash[20728]: cluster 2026-03-09T16:27:34.593354+0000 mon.a (mon.0) 3989 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:35 vm01 bash[20728]: cluster 2026-03-09T16:27:34.593354+0000 mon.a (mon.0) 3989 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:35 vm01 bash[20728]: cluster 2026-03-09T16:27:34.610580+0000 mon.a (mon.0) 3990 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:35 vm01 bash[20728]: cluster 2026-03-09T16:27:34.610580+0000 mon.a (mon.0) 3990 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:35 vm01 bash[20728]: audit 2026-03-09T16:27:34.791800+0000 mon.a (mon.0) 3991 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:35 vm01 bash[20728]: audit 2026-03-09T16:27:34.791800+0000 mon.a (mon.0) 3991 : audit [INF] from='client.? 
192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:35 vm01 bash[20728]: cluster 2026-03-09T16:27:35.137107+0000 mgr.y (mgr.14520) 1252 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:35.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:35 vm01 bash[20728]: cluster 2026-03-09T16:27:35.137107+0000 mgr.y (mgr.14520) 1252 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:36.600 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.603+0000 7f621cff9640 1 -- 192.168.123.101:0/2139236076 <== mon.0 v2:192.168.123.101:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v768) ==== 221+0+0 (secure 0 0 0) 0x7f621406af10 con 0x7f6220108a00 2026-03-09T16:27:36.600 INFO:tasks.workunit.client.0.vm01.stderr:set-quota max_bytes = 100 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 2026-03-09T16:27:36.603 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 -- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f61f40776d0 msgr2=0x7f61f4079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:27:36.603 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f61f40776d0 0x7f61f4079b90 secure :-1 s=READY pgs=4277 cs=0 l=1 rev1=1 crypto rx=0x7f621001c6b0 tx=0x7f62100023e0 comp rx=0 tx=0).stop 2026-03-09T16:27:36.603 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 -- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6220108a00 msgr2=0x7f62201a39f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:27:36.603 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6220108a00 0x7f62201a39f0 secure :-1 s=READY pgs=3088 cs=0 l=1 rev1=1 crypto rx=0x7f621400ef30 tx=0x7f621400c550 comp rx=0 tx=0).stop 2026-03-09T16:27:36.603 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 -- 192.168.123.101:0/2139236076 shutdown_connections 2026-03-09T16:27:36.603 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f61f40776d0 0x7f61f4079b90 unknown :-1 s=CLOSED pgs=4277 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:36.603 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f6220108a00 0x7f62201a39f0 unknown :-1 s=CLOSED pgs=3088 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:36.603 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f622006ba20 0x7f622019f660 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:36.603 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 --2- 192.168.123.101:0/2139236076 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f622006b080 0x7f622019f120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:27:36.603 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 -- 192.168.123.101:0/2139236076 >> 192.168.123.101:0/2139236076 conn(0x7f62200fd5c0 msgr2=0x7f6220106510 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:27:36.604 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 -- 192.168.123.101:0/2139236076 shutdown_connections 2026-03-09T16:27:36.604 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:27:36.607+0000 7f6225e2d640 1 -- 192.168.123.101:0/2139236076 wait complete. 2026-03-09T16:27:36.619 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 30 2026-03-09T16:27:36.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:36 vm09 bash[22983]: audit 2026-03-09T16:27:35.604350+0000 mon.a (mon.0) 3992 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:36.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:36 vm09 bash[22983]: audit 2026-03-09T16:27:35.604350+0000 mon.a (mon.0) 3992 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:36.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:36 vm09 bash[22983]: cluster 2026-03-09T16:27:35.620214+0000 mon.a (mon.0) 3993 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T16:27:36.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:36 vm09 bash[22983]: cluster 2026-03-09T16:27:35.620214+0000 mon.a (mon.0) 3993 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T16:27:36.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:36 vm09 bash[22983]: audit 2026-03-09T16:27:35.662380+0000 mon.a (mon.0) 3994 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:36.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:36 vm09 bash[22983]: audit 2026-03-09T16:27:35.662380+0000 mon.a (mon.0) 3994 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:36 vm01 bash[28152]: audit 2026-03-09T16:27:35.604350+0000 mon.a (mon.0) 3992 : audit [INF] from='client.? 
192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:36 vm01 bash[28152]: audit 2026-03-09T16:27:35.604350+0000 mon.a (mon.0) 3992 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:36 vm01 bash[28152]: cluster 2026-03-09T16:27:35.620214+0000 mon.a (mon.0) 3993 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:36 vm01 bash[28152]: cluster 2026-03-09T16:27:35.620214+0000 mon.a (mon.0) 3993 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:36 vm01 bash[28152]: audit 2026-03-09T16:27:35.662380+0000 mon.a (mon.0) 3994 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:36 vm01 bash[28152]: audit 2026-03-09T16:27:35.662380+0000 mon.a (mon.0) 3994 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:36 vm01 bash[20728]: audit 2026-03-09T16:27:35.604350+0000 mon.a (mon.0) 3992 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:36 vm01 bash[20728]: audit 2026-03-09T16:27:35.604350+0000 mon.a (mon.0) 3992 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:36 vm01 bash[20728]: cluster 2026-03-09T16:27:35.620214+0000 mon.a (mon.0) 3993 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:36 vm01 bash[20728]: cluster 2026-03-09T16:27:35.620214+0000 mon.a (mon.0) 3993 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:36 vm01 bash[20728]: audit 2026-03-09T16:27:35.662380+0000 mon.a (mon.0) 3994 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:36.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:36 vm01 bash[20728]: audit 2026-03-09T16:27:35.662380+0000 mon.a (mon.0) 3994 : audit [INF] from='client.? 
192.168.123.101:0/2139236076' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T16:27:37.882 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:27:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:27:37.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:37 vm09 bash[22983]: audit 2026-03-09T16:27:36.608218+0000 mon.a (mon.0) 3995 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:37.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:37 vm09 bash[22983]: audit 2026-03-09T16:27:36.608218+0000 mon.a (mon.0) 3995 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:37.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:37 vm09 bash[22983]: cluster 2026-03-09T16:27:36.617905+0000 mon.a (mon.0) 3996 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T16:27:37.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:37 vm09 bash[22983]: cluster 2026-03-09T16:27:36.617905+0000 mon.a (mon.0) 3996 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T16:27:37.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:37 vm09 bash[22983]: cluster 2026-03-09T16:27:37.137416+0000 mgr.y (mgr.14520) 1253 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:27:37.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:37 vm09 bash[22983]: cluster 2026-03-09T16:27:37.137416+0000 mgr.y (mgr.14520) 1253 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:37 vm01 bash[28152]: audit 2026-03-09T16:27:36.608218+0000 mon.a (mon.0) 3995 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:37 vm01 bash[28152]: audit 2026-03-09T16:27:36.608218+0000 mon.a (mon.0) 3995 : audit [INF] from='client.? 
192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:37 vm01 bash[28152]: cluster 2026-03-09T16:27:36.617905+0000 mon.a (mon.0) 3996 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:37 vm01 bash[28152]: cluster 2026-03-09T16:27:36.617905+0000 mon.a (mon.0) 3996 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:37 vm01 bash[28152]: cluster 2026-03-09T16:27:37.137416+0000 mgr.y (mgr.14520) 1253 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:37 vm01 bash[28152]: cluster 2026-03-09T16:27:37.137416+0000 mgr.y (mgr.14520) 1253 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:37 vm01 bash[20728]: audit 2026-03-09T16:27:36.608218+0000 mon.a (mon.0) 3995 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:37 vm01 bash[20728]: audit 2026-03-09T16:27:36.608218+0000 mon.a (mon.0) 3995 : audit [INF] from='client.? 192.168.123.101:0/2139236076' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:37 vm01 bash[20728]: cluster 2026-03-09T16:27:36.617905+0000 mon.a (mon.0) 3996 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:37 vm01 bash[20728]: cluster 2026-03-09T16:27:36.617905+0000 mon.a (mon.0) 3996 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:37 vm01 bash[20728]: cluster 2026-03-09T16:27:37.137416+0000 mgr.y (mgr.14520) 1253 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:27:37.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:37 vm01 bash[20728]: cluster 2026-03-09T16:27:37.137416+0000 mgr.y (mgr.14520) 1253 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 476 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T16:27:38.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:38 vm09 bash[22983]: audit 2026-03-09T16:27:37.713599+0000 mgr.y (mgr.14520) 1254 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:38.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:38 vm09 bash[22983]: audit 2026-03-09T16:27:37.713599+0000 mgr.y (mgr.14520) 1254 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:38.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
16:27:38 vm01 bash[28152]: audit 2026-03-09T16:27:37.713599+0000 mgr.y (mgr.14520) 1254 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:38.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:38 vm01 bash[28152]: audit 2026-03-09T16:27:37.713599+0000 mgr.y (mgr.14520) 1254 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:38.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:38 vm01 bash[20728]: audit 2026-03-09T16:27:37.713599+0000 mgr.y (mgr.14520) 1254 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:38.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:38 vm01 bash[20728]: audit 2026-03-09T16:27:37.713599+0000 mgr.y (mgr.14520) 1254 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:39.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:39 vm09 bash[22983]: cluster 2026-03-09T16:27:39.138009+0000 mgr.y (mgr.14520) 1255 : cluster [DBG] pgmap v1695: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:39.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:39 vm09 bash[22983]: cluster 2026-03-09T16:27:39.138009+0000 mgr.y (mgr.14520) 1255 : cluster [DBG] pgmap v1695: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:39.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:39 vm09 bash[22983]: cluster 2026-03-09T16:27:39.595104+0000 mon.a (mon.0) 3997 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_bytes: 100 B) 2026-03-09T16:27:39.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:39 vm09 bash[22983]: cluster 2026-03-09T16:27:39.595104+0000 mon.a (mon.0) 3997 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_bytes: 100 B) 2026-03-09T16:27:39.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:39 vm09 bash[22983]: cluster 2026-03-09T16:27:39.595309+0000 mon.a (mon.0) 3998 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:39.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:39 vm09 bash[22983]: cluster 2026-03-09T16:27:39.595309+0000 mon.a (mon.0) 3998 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:39.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:39 vm09 bash[22983]: cluster 2026-03-09T16:27:39.600559+0000 mon.a (mon.0) 3999 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T16:27:39.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:39 vm09 bash[22983]: cluster 2026-03-09T16:27:39.600559+0000 mon.a (mon.0) 3999 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T16:27:39.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:39 vm01 bash[28152]: cluster 2026-03-09T16:27:39.138009+0000 mgr.y (mgr.14520) 1255 : cluster [DBG] pgmap v1695: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:39.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:39 vm01 bash[28152]: cluster 2026-03-09T16:27:39.138009+0000 mgr.y (mgr.14520) 1255 : cluster [DBG] pgmap v1695: 176 pgs: 
176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:39.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:39 vm01 bash[28152]: cluster 2026-03-09T16:27:39.595104+0000 mon.a (mon.0) 3997 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_bytes: 100 B) 2026-03-09T16:27:39.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:39 vm01 bash[28152]: cluster 2026-03-09T16:27:39.595104+0000 mon.a (mon.0) 3997 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_bytes: 100 B) 2026-03-09T16:27:39.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:39 vm01 bash[28152]: cluster 2026-03-09T16:27:39.595309+0000 mon.a (mon.0) 3998 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:39.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:39 vm01 bash[28152]: cluster 2026-03-09T16:27:39.595309+0000 mon.a (mon.0) 3998 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:39.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:39 vm01 bash[28152]: cluster 2026-03-09T16:27:39.600559+0000 mon.a (mon.0) 3999 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T16:27:39.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:39 vm01 bash[28152]: cluster 2026-03-09T16:27:39.600559+0000 mon.a (mon.0) 3999 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T16:27:39.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:39 vm01 bash[20728]: cluster 2026-03-09T16:27:39.138009+0000 mgr.y (mgr.14520) 1255 : cluster [DBG] pgmap v1695: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:39.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:39 vm01 bash[20728]: cluster 2026-03-09T16:27:39.138009+0000 mgr.y (mgr.14520) 1255 : cluster [DBG] pgmap v1695: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:39.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:39 vm01 bash[20728]: cluster 2026-03-09T16:27:39.595104+0000 mon.a (mon.0) 3997 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_bytes: 100 B) 2026-03-09T16:27:39.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:39 vm01 bash[20728]: cluster 2026-03-09T16:27:39.595104+0000 mon.a (mon.0) 3997 : cluster [WRN] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' is full (reached quota's max_bytes: 100 B) 2026-03-09T16:27:39.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:39 vm01 bash[20728]: cluster 2026-03-09T16:27:39.595309+0000 mon.a (mon.0) 3998 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:39.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:39 vm01 bash[20728]: cluster 2026-03-09T16:27:39.595309+0000 mon.a (mon.0) 3998 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:27:39.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:39 vm01 bash[20728]: cluster 2026-03-09T16:27:39.600559+0000 mon.a (mon.0) 3999 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T16:27:39.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:39 vm01 bash[20728]: cluster 2026-03-09T16:27:39.600559+0000 mon.a (mon.0) 3999 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in 2026-03-09T16:27:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:42 vm09 bash[22983]: cluster 
2026-03-09T16:27:41.138338+0000 mgr.y (mgr.14520) 1256 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:42 vm09 bash[22983]: cluster 2026-03-09T16:27:41.138338+0000 mgr.y (mgr.14520) 1256 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:42.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:42 vm01 bash[28152]: cluster 2026-03-09T16:27:41.138338+0000 mgr.y (mgr.14520) 1256 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:42.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:42 vm01 bash[28152]: cluster 2026-03-09T16:27:41.138338+0000 mgr.y (mgr.14520) 1256 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:42.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:42 vm01 bash[20728]: cluster 2026-03-09T16:27:41.138338+0000 mgr.y (mgr.14520) 1256 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:42.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:42 vm01 bash[20728]: cluster 2026-03-09T16:27:41.138338+0000 mgr.y (mgr.14520) 1256 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-09T16:27:43.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:27:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:27:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:27:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:44 vm09 bash[22983]: cluster 2026-03-09T16:27:43.138664+0000 mgr.y (mgr.14520) 1257 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 680 B/s rd, 816 B/s wr, 0 op/s 2026-03-09T16:27:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:44 vm09 bash[22983]: cluster 2026-03-09T16:27:43.138664+0000 mgr.y (mgr.14520) 1257 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 680 B/s rd, 816 B/s wr, 0 op/s 2026-03-09T16:27:44.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:44 vm01 bash[28152]: cluster 2026-03-09T16:27:43.138664+0000 mgr.y (mgr.14520) 1257 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 680 B/s rd, 816 B/s wr, 0 op/s 2026-03-09T16:27:44.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:44 vm01 bash[28152]: cluster 2026-03-09T16:27:43.138664+0000 mgr.y (mgr.14520) 1257 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 680 B/s rd, 816 B/s wr, 0 op/s 2026-03-09T16:27:44.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:44 vm01 bash[20728]: cluster 2026-03-09T16:27:43.138664+0000 mgr.y (mgr.14520) 1257 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 680 B/s rd, 816 B/s wr, 0 op/s 2026-03-09T16:27:44.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:44 vm01 bash[20728]: cluster 2026-03-09T16:27:43.138664+0000 mgr.y 
(mgr.14520) 1257 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 680 B/s rd, 816 B/s wr, 0 op/s 2026-03-09T16:27:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:45 vm09 bash[22983]: audit 2026-03-09T16:27:45.011842+0000 mon.a (mon.0) 4000 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:45 vm09 bash[22983]: audit 2026-03-09T16:27:45.011842+0000 mon.a (mon.0) 4000 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:45 vm01 bash[28152]: audit 2026-03-09T16:27:45.011842+0000 mon.a (mon.0) 4000 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:45 vm01 bash[28152]: audit 2026-03-09T16:27:45.011842+0000 mon.a (mon.0) 4000 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:45.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:45 vm01 bash[20728]: audit 2026-03-09T16:27:45.011842+0000 mon.a (mon.0) 4000 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:45.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:45 vm01 bash[20728]: audit 2026-03-09T16:27:45.011842+0000 mon.a (mon.0) 4000 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:27:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:46 vm09 bash[22983]: cluster 2026-03-09T16:27:45.139226+0000 mgr.y (mgr.14520) 1258 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 720 B/s wr, 1 op/s 2026-03-09T16:27:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:46 vm09 bash[22983]: cluster 2026-03-09T16:27:45.139226+0000 mgr.y (mgr.14520) 1258 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 720 B/s wr, 1 op/s 2026-03-09T16:27:46.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:46 vm01 bash[28152]: cluster 2026-03-09T16:27:45.139226+0000 mgr.y (mgr.14520) 1258 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 720 B/s wr, 1 op/s 2026-03-09T16:27:46.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:46 vm01 bash[28152]: cluster 2026-03-09T16:27:45.139226+0000 mgr.y (mgr.14520) 1258 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 720 B/s wr, 1 op/s 2026-03-09T16:27:46.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:46 vm01 bash[20728]: cluster 2026-03-09T16:27:45.139226+0000 mgr.y (mgr.14520) 1258 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 720 B/s wr, 1 op/s 2026-03-09T16:27:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:46 vm01 bash[20728]: cluster 
2026-03-09T16:27:45.139226+0000 mgr.y (mgr.14520) 1258 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 720 B/s wr, 1 op/s 2026-03-09T16:27:48.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:27:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:27:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:48 vm09 bash[22983]: cluster 2026-03-09T16:27:47.139529+0000 mgr.y (mgr.14520) 1259 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 1 op/s 2026-03-09T16:27:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:48 vm09 bash[22983]: cluster 2026-03-09T16:27:47.139529+0000 mgr.y (mgr.14520) 1259 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 1 op/s 2026-03-09T16:27:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:48 vm01 bash[28152]: cluster 2026-03-09T16:27:47.139529+0000 mgr.y (mgr.14520) 1259 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 1 op/s 2026-03-09T16:27:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:48 vm01 bash[28152]: cluster 2026-03-09T16:27:47.139529+0000 mgr.y (mgr.14520) 1259 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 1 op/s 2026-03-09T16:27:48.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:48 vm01 bash[20728]: cluster 2026-03-09T16:27:47.139529+0000 mgr.y (mgr.14520) 1259 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 1 op/s 2026-03-09T16:27:48.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:48 vm01 bash[20728]: cluster 2026-03-09T16:27:47.139529+0000 mgr.y (mgr.14520) 1259 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 1 op/s 2026-03-09T16:27:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:49 vm09 bash[22983]: audit 2026-03-09T16:27:47.724100+0000 mgr.y (mgr.14520) 1260 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:49 vm09 bash[22983]: audit 2026-03-09T16:27:47.724100+0000 mgr.y (mgr.14520) 1260 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:49 vm01 bash[28152]: audit 2026-03-09T16:27:47.724100+0000 mgr.y (mgr.14520) 1260 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:49 vm01 bash[28152]: audit 2026-03-09T16:27:47.724100+0000 mgr.y (mgr.14520) 1260 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:49.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:49 vm01 bash[20728]: audit 2026-03-09T16:27:47.724100+0000 mgr.y (mgr.14520) 1260 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-09T16:27:49.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:49 vm01 bash[20728]: audit 2026-03-09T16:27:47.724100+0000 mgr.y (mgr.14520) 1260 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:50 vm09 bash[22983]: cluster 2026-03-09T16:27:49.140135+0000 mgr.y (mgr.14520) 1261 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:50 vm09 bash[22983]: cluster 2026-03-09T16:27:49.140135+0000 mgr.y (mgr.14520) 1261 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:50 vm01 bash[28152]: cluster 2026-03-09T16:27:49.140135+0000 mgr.y (mgr.14520) 1261 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:50.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:50 vm01 bash[28152]: cluster 2026-03-09T16:27:49.140135+0000 mgr.y (mgr.14520) 1261 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:50.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:50 vm01 bash[20728]: cluster 2026-03-09T16:27:49.140135+0000 mgr.y (mgr.14520) 1261 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:50.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:50 vm01 bash[20728]: cluster 2026-03-09T16:27:49.140135+0000 mgr.y (mgr.14520) 1261 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:27:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:52 vm09 bash[22983]: cluster 2026-03-09T16:27:51.140450+0000 mgr.y (mgr.14520) 1262 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:52 vm09 bash[22983]: cluster 2026-03-09T16:27:51.140450+0000 mgr.y (mgr.14520) 1262 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:52 vm01 bash[28152]: cluster 2026-03-09T16:27:51.140450+0000 mgr.y (mgr.14520) 1262 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:52.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:52 vm01 bash[28152]: cluster 2026-03-09T16:27:51.140450+0000 mgr.y (mgr.14520) 1262 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:52 vm01 bash[20728]: cluster 2026-03-09T16:27:51.140450+0000 mgr.y (mgr.14520) 1262 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:52.673 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:52 vm01 bash[20728]: cluster 2026-03-09T16:27:51.140450+0000 mgr.y (mgr.14520) 1262 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-09T16:27:53.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:27:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:27:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:27:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:54 vm09 bash[22983]: cluster 2026-03-09T16:27:53.140833+0000 mgr.y (mgr.14520) 1263 : cluster [DBG] pgmap v1703: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:54 vm09 bash[22983]: cluster 2026-03-09T16:27:53.140833+0000 mgr.y (mgr.14520) 1263 : cluster [DBG] pgmap v1703: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:54 vm01 bash[28152]: cluster 2026-03-09T16:27:53.140833+0000 mgr.y (mgr.14520) 1263 : cluster [DBG] pgmap v1703: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:54.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:54 vm01 bash[28152]: cluster 2026-03-09T16:27:53.140833+0000 mgr.y (mgr.14520) 1263 : cluster [DBG] pgmap v1703: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:54.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:54 vm01 bash[20728]: cluster 2026-03-09T16:27:53.140833+0000 mgr.y (mgr.14520) 1263 : cluster [DBG] pgmap v1703: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:54.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:54 vm01 bash[20728]: cluster 2026-03-09T16:27:53.140833+0000 mgr.y (mgr.14520) 1263 : cluster [DBG] pgmap v1703: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:56 vm09 bash[22983]: cluster 2026-03-09T16:27:55.141593+0000 mgr.y (mgr.14520) 1264 : cluster [DBG] pgmap v1704: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:56 vm09 bash[22983]: cluster 2026-03-09T16:27:55.141593+0000 mgr.y (mgr.14520) 1264 : cluster [DBG] pgmap v1704: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:56.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:56 vm01 bash[28152]: cluster 2026-03-09T16:27:55.141593+0000 mgr.y (mgr.14520) 1264 : cluster [DBG] pgmap v1704: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:56.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:56 vm01 bash[28152]: cluster 2026-03-09T16:27:55.141593+0000 mgr.y (mgr.14520) 1264 : cluster [DBG] pgmap v1704: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:56.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:56 vm01 bash[20728]: cluster 2026-03-09T16:27:55.141593+0000 mgr.y (mgr.14520) 1264 : cluster [DBG] pgmap v1704: 176 pgs: 
176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:56.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:56 vm01 bash[20728]: cluster 2026-03-09T16:27:55.141593+0000 mgr.y (mgr.14520) 1264 : cluster [DBG] pgmap v1704: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:27:58.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:27:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:27:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:58 vm09 bash[22983]: cluster 2026-03-09T16:27:57.141997+0000 mgr.y (mgr.14520) 1265 : cluster [DBG] pgmap v1705: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:58 vm09 bash[22983]: cluster 2026-03-09T16:27:57.141997+0000 mgr.y (mgr.14520) 1265 : cluster [DBG] pgmap v1705: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:58 vm01 bash[28152]: cluster 2026-03-09T16:27:57.141997+0000 mgr.y (mgr.14520) 1265 : cluster [DBG] pgmap v1705: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:58.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:58 vm01 bash[28152]: cluster 2026-03-09T16:27:57.141997+0000 mgr.y (mgr.14520) 1265 : cluster [DBG] pgmap v1705: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:58.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:58 vm01 bash[20728]: cluster 2026-03-09T16:27:57.141997+0000 mgr.y (mgr.14520) 1265 : cluster [DBG] pgmap v1705: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:58.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:58 vm01 bash[20728]: cluster 2026-03-09T16:27:57.141997+0000 mgr.y (mgr.14520) 1265 : cluster [DBG] pgmap v1705: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:27:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:59 vm09 bash[22983]: audit 2026-03-09T16:27:57.732310+0000 mgr.y (mgr.14520) 1266 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:27:59 vm09 bash[22983]: audit 2026-03-09T16:27:57.732310+0000 mgr.y (mgr.14520) 1266 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:59.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:59 vm01 bash[28152]: audit 2026-03-09T16:27:57.732310+0000 mgr.y (mgr.14520) 1266 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:59.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:27:59 vm01 bash[28152]: audit 2026-03-09T16:27:57.732310+0000 mgr.y (mgr.14520) 1266 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:59.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:59 vm01 bash[20728]: audit 2026-03-09T16:27:57.732310+0000 mgr.y 
(mgr.14520) 1266 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:27:59.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:27:59 vm01 bash[20728]: audit 2026-03-09T16:27:57.732310+0000 mgr.y (mgr.14520) 1266 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:00 vm09 bash[22983]: cluster 2026-03-09T16:27:59.142714+0000 mgr.y (mgr.14520) 1267 : cluster [DBG] pgmap v1706: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:00 vm09 bash[22983]: cluster 2026-03-09T16:27:59.142714+0000 mgr.y (mgr.14520) 1267 : cluster [DBG] pgmap v1706: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:00 vm09 bash[22983]: audit 2026-03-09T16:28:00.017879+0000 mon.a (mon.0) 4001 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:00 vm09 bash[22983]: audit 2026-03-09T16:28:00.017879+0000 mon.a (mon.0) 4001 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:00 vm01 bash[28152]: cluster 2026-03-09T16:27:59.142714+0000 mgr.y (mgr.14520) 1267 : cluster [DBG] pgmap v1706: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:00 vm01 bash[28152]: cluster 2026-03-09T16:27:59.142714+0000 mgr.y (mgr.14520) 1267 : cluster [DBG] pgmap v1706: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:00.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:00 vm01 bash[28152]: audit 2026-03-09T16:28:00.017879+0000 mon.a (mon.0) 4001 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:00 vm01 bash[28152]: audit 2026-03-09T16:28:00.017879+0000 mon.a (mon.0) 4001 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:00 vm01 bash[20728]: cluster 2026-03-09T16:27:59.142714+0000 mgr.y (mgr.14520) 1267 : cluster [DBG] pgmap v1706: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:00 vm01 bash[20728]: cluster 2026-03-09T16:27:59.142714+0000 mgr.y (mgr.14520) 1267 : cluster [DBG] pgmap v1706: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:00 vm01 bash[20728]: audit 2026-03-09T16:28:00.017879+0000 mon.a (mon.0) 4001 : audit [DBG] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:00 vm01 bash[20728]: audit 2026-03-09T16:28:00.017879+0000 mon.a (mon.0) 4001 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:01 vm09 bash[22983]: cluster 2026-03-09T16:28:01.143144+0000 mgr.y (mgr.14520) 1268 : cluster [DBG] pgmap v1707: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:01 vm09 bash[22983]: cluster 2026-03-09T16:28:01.143144+0000 mgr.y (mgr.14520) 1268 : cluster [DBG] pgmap v1707: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:01.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:01 vm01 bash[28152]: cluster 2026-03-09T16:28:01.143144+0000 mgr.y (mgr.14520) 1268 : cluster [DBG] pgmap v1707: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:01.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:01 vm01 bash[28152]: cluster 2026-03-09T16:28:01.143144+0000 mgr.y (mgr.14520) 1268 : cluster [DBG] pgmap v1707: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:01 vm01 bash[20728]: cluster 2026-03-09T16:28:01.143144+0000 mgr.y (mgr.14520) 1268 : cluster [DBG] pgmap v1707: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:01 vm01 bash[20728]: cluster 2026-03-09T16:28:01.143144+0000 mgr.y (mgr.14520) 1268 : cluster [DBG] pgmap v1707: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:03.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:28:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:28:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:28:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:04 vm09 bash[22983]: cluster 2026-03-09T16:28:03.143543+0000 mgr.y (mgr.14520) 1269 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:04 vm09 bash[22983]: cluster 2026-03-09T16:28:03.143543+0000 mgr.y (mgr.14520) 1269 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:04.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:04 vm01 bash[28152]: cluster 2026-03-09T16:28:03.143543+0000 mgr.y (mgr.14520) 1269 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:04.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:04 vm01 bash[28152]: cluster 2026-03-09T16:28:03.143543+0000 mgr.y (mgr.14520) 1269 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:04.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
09 16:28:04 vm01 bash[20728]: cluster 2026-03-09T16:28:03.143543+0000 mgr.y (mgr.14520) 1269 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:04.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:04 vm01 bash[20728]: cluster 2026-03-09T16:28:03.143543+0000 mgr.y (mgr.14520) 1269 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:05.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:05 vm09 bash[22983]: cluster 2026-03-09T16:28:05.144202+0000 mgr.y (mgr.14520) 1270 : cluster [DBG] pgmap v1709: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:05.882 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:05 vm09 bash[22983]: cluster 2026-03-09T16:28:05.144202+0000 mgr.y (mgr.14520) 1270 : cluster [DBG] pgmap v1709: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:05.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:05 vm01 bash[28152]: cluster 2026-03-09T16:28:05.144202+0000 mgr.y (mgr.14520) 1270 : cluster [DBG] pgmap v1709: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:05.922 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:05 vm01 bash[28152]: cluster 2026-03-09T16:28:05.144202+0000 mgr.y (mgr.14520) 1270 : cluster [DBG] pgmap v1709: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:05.922 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:05 vm01 bash[20728]: cluster 2026-03-09T16:28:05.144202+0000 mgr.y (mgr.14520) 1270 : cluster [DBG] pgmap v1709: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:05.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:05 vm01 bash[20728]: cluster 2026-03-09T16:28:05.144202+0000 mgr.y (mgr.14520) 1270 : cluster [DBG] pgmap v1709: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:06.620 INFO:tasks.workunit.client.0.vm01.stderr:+ pid=131313 2026-03-09T16:28:06.620 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool set-quota 596e5e1f-ecde-406d-b4d0-afd8854e4a60 max_bytes 0 2026-03-09T16:28:06.621 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put two /etc/passwd 2026-03-09T16:28:06.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 -- 192.168.123.101:0/3992296329 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9218069380 msgr2=0x7f9218100b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:06.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 --2- 192.168.123.101:0/3992296329 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9218069380 0x7f9218100b00 secure :-1 s=READY pgs=3013 cs=0 l=1 rev1=1 crypto rx=0x7f920c00ab80 tx=0x7f920c01dcf0 comp rx=0 tx=0).stop 2026-03-09T16:28:06.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 -- 192.168.123.101:0/3992296329 shutdown_connections 2026-03-09T16:28:06.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 --2- 192.168.123.101:0/3992296329 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f921810b9c0 0x7f921810ddb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:06.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 --2- 192.168.123.101:0/3992296329 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f92181074d0 0x7f9218101040 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:06.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 --2- 192.168.123.101:0/3992296329 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9218069380 0x7f9218100b00 unknown :-1 s=CLOSED pgs=3013 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:06.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 -- 192.168.123.101:0/3992296329 >> 192.168.123.101:0/3992296329 conn(0x7f92180fc820 msgr2=0x7f92180fec40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:28:06.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 -- 192.168.123.101:0/3992296329 shutdown_connections 2026-03-09T16:28:06.688 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 -- 192.168.123.101:0/3992296329 wait complete. 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 Processor -- start 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 -- start start 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9218069380 0x7f92181041d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f92181074d0 0x7f9218102760 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f921810b9c0 0x7f9218102ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f9218112e00 con 0x7f92181074d0 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f9218112c80 con 0x7f9218069380 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.691+0000 7f921d49c640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f9218112f80 con 0x7f921810b9c0 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f9216ffd640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9218069380 0x7f92181041d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:06.689 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f9216ffd640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9218069380 0x7f92181041d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:56188/0 (socket says 192.168.123.101:56188) 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f9216ffd640 1 -- 192.168.123.101:0/3613906012 learned_addr learned my addr 192.168.123.101:0/3613906012 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f92177fe640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f921810b9c0 0x7f9218102ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f9216ffd640 1 -- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f921810b9c0 msgr2=0x7f9218102ca0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f92167fc640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f92181074d0 0x7f9218102760 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f9216ffd640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f921810b9c0 0x7f9218102ca0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:06.689 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f9216ffd640 1 -- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f92181074d0 msgr2=0x7f9218102760 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:06.690 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f9216ffd640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f92181074d0 0x7f9218102760 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:06.690 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f9216ffd640 1 -- 192.168.123.101:0/3613906012 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9218103510 con 0x7f9218069380 2026-03-09T16:28:06.690 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f92167fc640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f92181074d0 0x7f9218102760 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T16:28:06.690 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f92177fe640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f921810b9c0 0x7f9218102ca0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T16:28:06.690 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f9216ffd640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9218069380 0x7f92181041d0 secure :-1 s=READY pgs=3097 cs=0 l=1 rev1=1 crypto rx=0x7f920c01d760 tx=0x7f920c0045a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:28:06.690 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f920c004be0 con 0x7f9218069380 2026-03-09T16:28:06.690 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f921d49c640 1 -- 192.168.123.101:0/3613906012 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f92181b03c0 con 0x7f9218069380 2026-03-09T16:28:06.690 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f921d49c640 1 -- 192.168.123.101:0/3613906012 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f92181b08c0 con 0x7f9218069380 2026-03-09T16:28:06.690 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f920c004d80 con 0x7f9218069380 2026-03-09T16:28:06.690 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f921d49c640 1 -- 192.168.123.101:0/3613906012 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f91e4005190 con 0x7f9218069380 2026-03-09T16:28:06.691 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f920c005590 con 0x7f9218069380 2026-03-09T16:28:06.691 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f920c0b6020 con 0x7f9218069380 2026-03-09T16:28:06.691 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f91f7fff640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f91f0077680 0x7f91f0079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:06.692 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f92167fc640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f91f0077680 0x7f91f0079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:06.692 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(769..769 src 
has 257..769) ==== 8985+0+0 (secure 0 0 0) 0x7f920c134b40 con 0x7f9218069380 2026-03-09T16:28:06.692 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f92167fc640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f91f0077680 0x7f91f0079b40 secure :-1 s=READY pgs=4279 cs=0 l=1 rev1=1 crypto rx=0x7f9200005fd0 tx=0x7f9200005ea0 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:28:06.692 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.695+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=770}) -- 0x7f91f0083080 con 0x7f9218069380 2026-03-09T16:28:06.694 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.699+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f920c017a00 con 0x7f9218069380 2026-03-09T16:28:06.794 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.799+0000 7f921d49c640 1 -- 192.168.123.101:0/3613906012 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"} v 0) -- 0x7f91e4005480 con 0x7f9218069380 2026-03-09T16:28:06.853 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.859+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 <== mon.1 v2:192.168.123.109:3300/0 7 ==== osd_map(770..770 src has 257..770) ==== 628+0+0 (secure 0 0 0) 0x7f920c0f8f00 con 0x7f9218069380 2026-03-09T16:28:06.853 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.859+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=771}) -- 0x7f91f0083c20 con 0x7f9218069380 2026-03-09T16:28:06.863 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.867+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 <== mon.1 v2:192.168.123.109:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v770) ==== 217+0+0 (secure 0 0 0) 0x7f920c0fde20 con 0x7f9218069380 2026-03-09T16:28:06.921 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:06.927+0000 7f921d49c640 1 -- 192.168.123.101:0/3613906012 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"} v 0) -- 0x7f91e4003560 con 0x7f9218069380 2026-03-09T16:28:07.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:06 vm09 bash[22983]: audit 2026-03-09T16:28:06.795893+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:06 vm09 bash[22983]: audit 2026-03-09T16:28:06.795893+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 
192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:06 vm09 bash[22983]: audit 2026-03-09T16:28:06.803352+0000 mon.a (mon.0) 4002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:06 vm09 bash[22983]: audit 2026-03-09T16:28:06.803352+0000 mon.a (mon.0) 4002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:06 vm01 bash[20728]: audit 2026-03-09T16:28:06.795893+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:06 vm01 bash[20728]: audit 2026-03-09T16:28:06.795893+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:06 vm01 bash[20728]: audit 2026-03-09T16:28:06.803352+0000 mon.a (mon.0) 4002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:06 vm01 bash[20728]: audit 2026-03-09T16:28:06.803352+0000 mon.a (mon.0) 4002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:06 vm01 bash[28152]: audit 2026-03-09T16:28:06.795893+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:06 vm01 bash[28152]: audit 2026-03-09T16:28:06.795893+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:06 vm01 bash[28152]: audit 2026-03-09T16:28:06.803352+0000 mon.a (mon.0) 4002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:06 vm01 bash[28152]: audit 2026-03-09T16:28:06.803352+0000 mon.a (mon.0) 4002 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:07.882 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.887+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 <== mon.1 v2:192.168.123.109:3300/0 9 ==== osd_map(771..771 src has 257..771) ==== 628+0+0 (secure 0 0 0) 0x7f920c0b6e20 con 0x7f9218069380 2026-03-09T16:28:07.882 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.887+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=772}) -- 0x7f91f0084050 con 0x7f9218069380 2026-03-09T16:28:07.895 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.899+0000 7f91f7fff640 1 -- 192.168.123.101:0/3613906012 <== mon.1 v2:192.168.123.109:3300/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v771) ==== 217+0+0 (secure 0 0 0) 0x7f920c100ea0 con 0x7f9218069380 2026-03-09T16:28:07.895 INFO:tasks.workunit.client.0.vm01.stderr:set-quota max_bytes = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 2026-03-09T16:28:07.897 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 -- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f91f0077680 msgr2=0x7f91f0079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:07.897 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f91f0077680 0x7f91f0079b40 secure :-1 s=READY pgs=4279 cs=0 l=1 rev1=1 crypto rx=0x7f9200005fd0 tx=0x7f9200005ea0 comp rx=0 tx=0).stop 2026-03-09T16:28:07.897 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 -- 192.168.123.101:0/3613906012 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9218069380 msgr2=0x7f92181041d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:07.897 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9218069380 0x7f92181041d0 secure :-1 s=READY pgs=3097 cs=0 l=1 rev1=1 crypto rx=0x7f920c01d760 tx=0x7f920c0045a0 comp rx=0 tx=0).stop 2026-03-09T16:28:07.897 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 -- 192.168.123.101:0/3613906012 shutdown_connections 2026-03-09T16:28:07.897 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f91f0077680 0x7f91f0079b40 unknown :-1 s=CLOSED pgs=4279 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:07.897 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f921810b9c0 0x7f9218102ca0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:07.897 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 --2- 
192.168.123.101:0/3613906012 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f92181074d0 0x7f9218102760 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:07.897 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 --2- 192.168.123.101:0/3613906012 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9218069380 0x7f92181041d0 unknown :-1 s=CLOSED pgs=3097 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:07.897 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 -- 192.168.123.101:0/3613906012 >> 192.168.123.101:0/3613906012 conn(0x7f92180fc820 msgr2=0x7f9218104f40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:28:07.897 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 -- 192.168.123.101:0/3613906012 shutdown_connections 2026-03-09T16:28:07.898 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.903+0000 7f921d49c640 1 -- 192.168.123.101:0/3613906012 wait complete. 2026-03-09T16:28:07.908 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool set-quota 596e5e1f-ecde-406d-b4d0-afd8854e4a60 max_objects 0 2026-03-09T16:28:07.971 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 -- 192.168.123.101:0/2926774348 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1f70103000 msgr2=0x7f1f701064d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:07.971 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 --2- 192.168.123.101:0/2926774348 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1f70103000 0x7f1f701064d0 secure :-1 s=READY pgs=3090 cs=0 l=1 rev1=1 crypto rx=0x7f1f5c009a30 tx=0x7f1f5c01c940 comp rx=0 tx=0).stop 2026-03-09T16:28:07.971 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 -- 192.168.123.101:0/2926774348 shutdown_connections 2026-03-09T16:28:07.971 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 --2- 192.168.123.101:0/2926774348 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1f70106c30 0x7f1f7010e740 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:07.971 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 --2- 192.168.123.101:0/2926774348 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1f70103000 0x7f1f701064d0 unknown :-1 s=CLOSED pgs=3090 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:07.971 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 --2- 192.168.123.101:0/2926774348 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f70102650 0x7f1f70102a30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:07.971 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 -- 192.168.123.101:0/2926774348 >> 192.168.123.101:0/2926774348 conn(0x7f1f700fc820 msgr2=0x7f1f700fec40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:28:07.971 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 -- 192.168.123.101:0/2926774348 shutdown_connections 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 -- 192.168.123.101:0/2926774348 wait 
complete. 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 Processor -- start 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 -- start start 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f70102650 0x7f1f7019cdb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1f70103000 0x7f1f7019d2f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1f70106c30 0x7f1f701a1680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f1f70114840 con 0x7f1f70106c30 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f1f701146c0 con 0x7f1f70102650 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f1f701149c0 con 0x7f1f70103000 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f74cc6640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1f70106c30 0x7f1f701a1680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f74cc6640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1f70106c30 0x7f1f701a1680 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:46392/0 (socket says 192.168.123.101:46392) 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f74cc6640 1 -- 192.168.123.101:0/1886818805 learned_addr learned my addr 192.168.123.101:0/1886818805 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f6f7fe640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1f70103000 0x7f1f7019d2f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f74cc6640 1 -- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1f70103000 msgr2=0x7f1f7019d2f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 
7f1f74cc6640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1f70103000 0x7f1f7019d2f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f74cc6640 1 -- 192.168.123.101:0/1886818805 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f70102650 msgr2=0x7f1f7019cdb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f6ffff640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f70102650 0x7f1f7019cdb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f74cc6640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f70102650 0x7f1f7019cdb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:07.972 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f74cc6640 1 -- 192.168.123.101:0/1886818805 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f1f701a1e00 con 0x7f1f70106c30 2026-03-09T16:28:07.973 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f6ffff640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f70102650 0x7f1f7019cdb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T16:28:07.973 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f74cc6640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1f70106c30 0x7f1f701a1680 secure :-1 s=READY pgs=3091 cs=0 l=1 rev1=1 crypto rx=0x7f1f64004a30 tx=0x7f1f6400d4a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:28:07.973 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f6d7fa640 1 -- 192.168.123.101:0/1886818805 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1f6400dbb0 con 0x7f1f70106c30 2026-03-09T16:28:07.973 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f6d7fa640 1 -- 192.168.123.101:0/1886818805 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f1f6400dd50 con 0x7f1f70106c30 2026-03-09T16:28:07.973 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f6d7fa640 1 -- 192.168.123.101:0/1886818805 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1f64013650 con 0x7f1f70106c30 2026-03-09T16:28:07.973 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 -- 192.168.123.101:0/1886818805 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f1f701a2090 con 0x7f1f70106c30 2026-03-09T16:28:07.973 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.975+0000 7f1f76750640 1 -- 192.168.123.101:0/1886818805 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f1f7010ee00 con 0x7f1f70106c30 2026-03-09T16:28:07.974 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.979+0000 7f1f6d7fa640 1 -- 192.168.123.101:0/1886818805 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f1f64012070 con 0x7f1f70106c30 2026-03-09T16:28:07.974 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.979+0000 7f1f76750640 1 -- 192.168.123.101:0/1886818805 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f1f34005190 con 0x7f1f70106c30 2026-03-09T16:28:07.977 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.983+0000 7f1f6d7fa640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f1f44077760 0x7f1f44079c20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:07.977 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.983+0000 7f1f6d7fa640 1 -- 192.168.123.101:0/1886818805 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(771..771 src has 257..771) ==== 8985+0+0 (secure 0 0 0) 0x7f1f6409a400 con 0x7f1f70106c30 2026-03-09T16:28:07.977 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.983+0000 7f1f6d7fa640 1 -- 192.168.123.101:0/1886818805 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=772}) -- 0x7f1f440831e0 con 0x7f1f70106c30 2026-03-09T16:28:07.977 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.983+0000 7f1f6ffff640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f1f44077760 0x7f1f44079c20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T16:28:07.978 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.983+0000 7f1f6d7fa640 1 -- 192.168.123.101:0/1886818805 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f1f64010070 con 0x7f1f70106c30 2026-03-09T16:28:07.978 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:07.983+0000 7f1f6ffff640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f1f44077760 0x7f1f44079c20 secure :-1 s=READY pgs=4280 cs=0 l=1 rev1=1 crypto rx=0x7f1f60006fd0 tx=0x7f1f60008040 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:28:08.074 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:08.079+0000 7f1f76750640 1 -- 192.168.123.101:0/1886818805 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"} v 0) -- 0x7f1f34005480 con 0x7f1f70106c30 2026-03-09T16:28:08.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:28:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:28:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:07 vm09 bash[22983]: audit 2026-03-09T16:28:06.856876+0000 mon.a (mon.0) 4003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:07 vm09 bash[22983]: audit 2026-03-09T16:28:06.856876+0000 mon.a (mon.0) 4003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:07 vm09 bash[22983]: cluster 2026-03-09T16:28:06.873985+0000 mon.a (mon.0) 4004 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-09T16:28:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:07 vm09 bash[22983]: cluster 2026-03-09T16:28:06.873985+0000 mon.a (mon.0) 4004 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-09T16:28:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:07 vm09 bash[22983]: audit 2026-03-09T16:28:06.923012+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:07 vm09 bash[22983]: audit 2026-03-09T16:28:06.923012+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:07 vm09 bash[22983]: audit 2026-03-09T16:28:06.930348+0000 mon.a (mon.0) 4005 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:07 vm09 bash[22983]: audit 2026-03-09T16:28:06.930348+0000 mon.a (mon.0) 4005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:07 vm09 bash[22983]: cluster 2026-03-09T16:28:07.144611+0000 mgr.y (mgr.14520) 1271 : cluster [DBG] pgmap v1711: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:28:08.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:07 vm09 bash[22983]: cluster 2026-03-09T16:28:07.144611+0000 mgr.y (mgr.14520) 1271 : cluster [DBG] pgmap v1711: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:28:08.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:07 vm01 bash[28152]: audit 2026-03-09T16:28:06.856876+0000 mon.a (mon.0) 4003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:08.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:07 vm01 bash[28152]: audit 2026-03-09T16:28:06.856876+0000 mon.a (mon.0) 4003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:08.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:07 vm01 bash[28152]: cluster 2026-03-09T16:28:06.873985+0000 mon.a (mon.0) 4004 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-09T16:28:08.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:07 vm01 bash[28152]: cluster 2026-03-09T16:28:06.873985+0000 mon.a (mon.0) 4004 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:07 vm01 bash[28152]: audit 2026-03-09T16:28:06.923012+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:07 vm01 bash[28152]: audit 2026-03-09T16:28:06.923012+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:07 vm01 bash[28152]: audit 2026-03-09T16:28:06.930348+0000 mon.a (mon.0) 4005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:07 vm01 bash[28152]: audit 2026-03-09T16:28:06.930348+0000 mon.a (mon.0) 4005 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:07 vm01 bash[28152]: cluster 2026-03-09T16:28:07.144611+0000 mgr.y (mgr.14520) 1271 : cluster [DBG] pgmap v1711: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:07 vm01 bash[28152]: cluster 2026-03-09T16:28:07.144611+0000 mgr.y (mgr.14520) 1271 : cluster [DBG] pgmap v1711: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:07 vm01 bash[20728]: audit 2026-03-09T16:28:06.856876+0000 mon.a (mon.0) 4003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:07 vm01 bash[20728]: audit 2026-03-09T16:28:06.856876+0000 mon.a (mon.0) 4003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:07 vm01 bash[20728]: cluster 2026-03-09T16:28:06.873985+0000 mon.a (mon.0) 4004 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:07 vm01 bash[20728]: cluster 2026-03-09T16:28:06.873985+0000 mon.a (mon.0) 4004 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:07 vm01 bash[20728]: audit 2026-03-09T16:28:06.923012+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:07 vm01 bash[20728]: audit 2026-03-09T16:28:06.923012+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.101:0/3613906012' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:07 vm01 bash[20728]: audit 2026-03-09T16:28:06.930348+0000 mon.a (mon.0) 4005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:07 vm01 bash[20728]: audit 2026-03-09T16:28:06.930348+0000 mon.a (mon.0) 4005 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:07 vm01 bash[20728]: cluster 2026-03-09T16:28:07.144611+0000 mgr.y (mgr.14520) 1271 : cluster [DBG] pgmap v1711: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:28:08.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:07 vm01 bash[20728]: cluster 2026-03-09T16:28:07.144611+0000 mgr.y (mgr.14520) 1271 : cluster [DBG] pgmap v1711: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:28:08.898 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:08.903+0000 7f1f6d7fa640 1 -- 192.168.123.101:0/1886818805 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v772) ==== 221+0+0 (secure 0 0 0) 0x7f1f6409a1c0 con 0x7f1f70106c30 2026-03-09T16:28:08.902 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:08.907+0000 7f1f6d7fa640 1 -- 192.168.123.101:0/1886818805 <== mon.0 v2:192.168.123.101:3300/0 8 ==== osd_map(772..772 src has 257..772) ==== 628+0+0 (secure 0 0 0) 0x7f1f640d09f0 con 0x7f1f70106c30 2026-03-09T16:28:08.902 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:08.907+0000 7f1f6d7fa640 1 -- 192.168.123.101:0/1886818805 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=773}) -- 0x7f1f440841f0 con 0x7f1f70106c30 2026-03-09T16:28:08.955 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:08.959+0000 7f1f76750640 1 -- 192.168.123.101:0/1886818805 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"} v 0) -- 0x7f1f34004ae0 con 0x7f1f70106c30 2026-03-09T16:28:09.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:08 vm01 bash[28152]: audit 2026-03-09T16:28:07.741475+0000 mgr.y (mgr.14520) 1272 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:09.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:08 vm01 bash[28152]: audit 2026-03-09T16:28:07.741475+0000 mgr.y (mgr.14520) 1272 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:09.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:08 vm01 bash[28152]: audit 2026-03-09T16:28:07.888904+0000 mon.a (mon.0) 4006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:09.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:08 vm01 bash[28152]: audit 2026-03-09T16:28:07.888904+0000 mon.a (mon.0) 4006 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:08 vm01 bash[28152]: cluster 2026-03-09T16:28:07.907869+0000 mon.a (mon.0) 4007 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:08 vm01 bash[28152]: cluster 2026-03-09T16:28:07.907869+0000 mon.a (mon.0) 4007 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:08 vm01 bash[28152]: audit 2026-03-09T16:28:08.082327+0000 mon.a (mon.0) 4008 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:08 vm01 bash[28152]: audit 2026-03-09T16:28:08.082327+0000 mon.a (mon.0) 4008 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:08 vm01 bash[20728]: audit 2026-03-09T16:28:07.741475+0000 mgr.y (mgr.14520) 1272 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:08 vm01 bash[20728]: audit 2026-03-09T16:28:07.741475+0000 mgr.y (mgr.14520) 1272 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:08 vm01 bash[20728]: audit 2026-03-09T16:28:07.888904+0000 mon.a (mon.0) 4006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:08 vm01 bash[20728]: audit 2026-03-09T16:28:07.888904+0000 mon.a (mon.0) 4006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:08 vm01 bash[20728]: cluster 2026-03-09T16:28:07.907869+0000 mon.a (mon.0) 4007 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:08 vm01 bash[20728]: cluster 2026-03-09T16:28:07.907869+0000 mon.a (mon.0) 4007 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:08 vm01 bash[20728]: audit 2026-03-09T16:28:08.082327+0000 mon.a (mon.0) 4008 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:09.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:08 vm01 bash[20728]: audit 2026-03-09T16:28:08.082327+0000 mon.a (mon.0) 4008 : audit [INF] from='client.? 
192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:08 vm09 bash[22983]: audit 2026-03-09T16:28:07.741475+0000 mgr.y (mgr.14520) 1272 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:08 vm09 bash[22983]: audit 2026-03-09T16:28:07.741475+0000 mgr.y (mgr.14520) 1272 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:08 vm09 bash[22983]: audit 2026-03-09T16:28:07.888904+0000 mon.a (mon.0) 4006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:08 vm09 bash[22983]: audit 2026-03-09T16:28:07.888904+0000 mon.a (mon.0) 4006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:28:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:08 vm09 bash[22983]: cluster 2026-03-09T16:28:07.907869+0000 mon.a (mon.0) 4007 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T16:28:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:08 vm09 bash[22983]: cluster 2026-03-09T16:28:07.907869+0000 mon.a (mon.0) 4007 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-09T16:28:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:08 vm09 bash[22983]: audit 2026-03-09T16:28:08.082327+0000 mon.a (mon.0) 4008 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:09.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:08 vm09 bash[22983]: audit 2026-03-09T16:28:08.082327+0000 mon.a (mon.0) 4008 : audit [INF] from='client.? 
192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:09.597 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.603+0000 7f1f6d7fa640 1 -- 192.168.123.101:0/1886818805 <== mon.0 v2:192.168.123.101:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v773) ==== 221+0+0 (secure 0 0 0) 0x7f1f64066820 con 0x7f1f70106c30 2026-03-09T16:28:09.597 INFO:tasks.workunit.client.0.vm01.stderr:set-quota max_objects = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 2026-03-09T16:28:09.600 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.603+0000 7f1f76750640 1 -- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f1f44077760 msgr2=0x7f1f44079c20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:09.600 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.603+0000 7f1f76750640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f1f44077760 0x7f1f44079c20 secure :-1 s=READY pgs=4280 cs=0 l=1 rev1=1 crypto rx=0x7f1f60006fd0 tx=0x7f1f60008040 comp rx=0 tx=0).stop 2026-03-09T16:28:09.600 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.603+0000 7f1f76750640 1 -- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1f70106c30 msgr2=0x7f1f701a1680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:09.600 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.603+0000 7f1f76750640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1f70106c30 0x7f1f701a1680 secure :-1 s=READY pgs=3091 cs=0 l=1 rev1=1 crypto rx=0x7f1f64004a30 tx=0x7f1f6400d4a0 comp rx=0 tx=0).stop 2026-03-09T16:28:09.601 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.607+0000 7f1f76750640 1 -- 192.168.123.101:0/1886818805 shutdown_connections 2026-03-09T16:28:09.601 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.607+0000 7f1f76750640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f1f44077760 0x7f1f44079c20 unknown :-1 s=CLOSED pgs=4280 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:09.601 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.607+0000 7f1f76750640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f1f70106c30 0x7f1f701a1680 unknown :-1 s=CLOSED pgs=3091 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:09.601 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.607+0000 7f1f76750640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f1f70103000 0x7f1f7019d2f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:09.601 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.607+0000 7f1f76750640 1 --2- 192.168.123.101:0/1886818805 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f70102650 0x7f1f7019cdb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T16:28:09.601 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.607+0000 7f1f76750640 1 -- 192.168.123.101:0/1886818805 >> 192.168.123.101:0/1886818805 conn(0x7f1f700fc820 msgr2=0x7f1f700fcf10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:28:09.601 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.607+0000 7f1f76750640 1 -- 192.168.123.101:0/1886818805 shutdown_connections 2026-03-09T16:28:09.601 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.607+0000 7f1f76750640 1 -- 192.168.123.101:0/1886818805 wait complete. 2026-03-09T16:28:09.621 INFO:tasks.workunit.client.0.vm01.stderr:+ wait 131313 2026-03-09T16:28:09.621 INFO:tasks.workunit.client.0.vm01.stderr:+ [ 0 -ne 0 ] 2026-03-09T16:28:09.621 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-09T16:28:09.621 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put three /etc/passwd 2026-03-09T16:28:09.648 INFO:tasks.workunit.client.0.vm01.stderr:+ uuidgen 2026-03-09T16:28:09.649 INFO:tasks.workunit.client.0.vm01.stderr:+ pp=bdba544a-024a-4b6d-a6dd-2ee6648240fd 2026-03-09T16:28:09.649 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool create bdba544a-024a-4b6d-a6dd-2ee6648240fd 12 2026-03-09T16:28:09.711 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 -- 192.168.123.101:0/2290992824 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e00113a60 msgr2=0x7f3e00115e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:09.711 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 --2- 192.168.123.101:0/2290992824 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e00113a60 0x7f3e00115e50 secure :-1 s=READY pgs=3093 cs=0 l=1 rev1=1 crypto rx=0x7f3dfc00b0a0 tx=0x7f3dfc01ca50 comp rx=0 tx=0).stop 2026-03-09T16:28:09.711 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 -- 192.168.123.101:0/2290992824 shutdown_connections 2026-03-09T16:28:09.711 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 --2- 192.168.123.101:0/2290992824 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e00113a60 0x7f3e00115e50 unknown :-1 s=CLOSED pgs=3093 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:09.711 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 --2- 192.168.123.101:0/2290992824 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3e00077f50 0x7f3e00113520 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:09.711 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 --2- 192.168.123.101:0/2290992824 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3e00077630 0x7f3e00077a10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:09.711 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 -- 192.168.123.101:0/2290992824 >> 192.168.123.101:0/2290992824 conn(0x7f3e00100880 msgr2=0x7f3e00102ca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:28:09.711 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 -- 192.168.123.101:0/2290992824 shutdown_connections 2026-03-09T16:28:09.711 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 -- 
192.168.123.101:0/2290992824 wait complete. 2026-03-09T16:28:09.711 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 Processor -- start 2026-03-09T16:28:09.711 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 -- start start 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3e00077630 0x7f3e001a4040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3e00077f50 0x7f3e001a4580 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e00113a60 0x7f3e001a8910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f3e0011bbe0 con 0x7f3e00113a60 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f3e0011ba60 con 0x7f3e00077f50 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f3e0011bd60 con 0x7f3e00077630 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e05b02640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e00113a60 0x7f3e001a8910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e05b02640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e00113a60 0x7f3e001a8910 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:46434/0 (socket says 192.168.123.101:46434) 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e05b02640 1 -- 192.168.123.101:0/840518915 learned_addr learned my addr 192.168.123.101:0/840518915 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e05b02640 1 -- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3e00077630 msgr2=0x7f3e001a4040 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e05301640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3e00077630 0x7f3e001a4040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:09.712 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e05b02640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3e00077630 0x7f3e001a4040 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e05b02640 1 -- 192.168.123.101:0/840518915 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3e00077f50 msgr2=0x7f3e001a4580 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e05b02640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3e00077f50 0x7f3e001a4580 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e05b02640 1 -- 192.168.123.101:0/840518915 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3e001a8ff0 con 0x7f3e00113a60 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e05301640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3e00077630 0x7f3e001a4040 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T16:28:09.712 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e05b02640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e00113a60 0x7f3e001a8910 secure :-1 s=READY pgs=3094 cs=0 l=1 rev1=1 crypto rx=0x7f3dfc098570 tx=0x7f3dfc007890 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:28:09.713 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3dee7fc640 1 -- 192.168.123.101:0/840518915 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3dfc007a00 con 0x7f3e00113a60 2026-03-09T16:28:09.713 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3dee7fc640 1 -- 192.168.123.101:0/840518915 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3dfc004070 con 0x7f3e00113a60 2026-03-09T16:28:09.713 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3dee7fc640 1 -- 192.168.123.101:0/840518915 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3dfc0b76e0 con 0x7f3e00113a60 2026-03-09T16:28:09.713 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 -- 192.168.123.101:0/840518915 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3e001a9280 con 0x7f3e00113a60 2026-03-09T16:28:09.713 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.715+0000 7f3e0758c640 1 -- 192.168.123.101:0/840518915 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3e001a9710 con 0x7f3e00113a60 2026-03-09T16:28:09.714 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.719+0000 7f3e0758c640 1 -- 192.168.123.101:0/840518915 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3dc8005190 con 0x7f3e00113a60 
2026-03-09T16:28:09.717 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.719+0000 7f3dee7fc640 1 -- 192.168.123.101:0/840518915 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f3dfc005ce0 con 0x7f3e00113a60 2026-03-09T16:28:09.717 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.719+0000 7f3dee7fc640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3ddc0777e0 0x7f3ddc079ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:09.717 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.719+0000 7f3dee7fc640 1 -- 192.168.123.101:0/840518915 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(773..773 src has 257..773) ==== 8985+0+0 (secure 0 0 0) 0x7f3dfc134300 con 0x7f3e00113a60 2026-03-09T16:28:09.717 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.723+0000 7f3dee7fc640 1 -- 192.168.123.101:0/840518915 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3dfc016760 con 0x7f3e00113a60 2026-03-09T16:28:09.717 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.723+0000 7f3e05301640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3ddc0777e0 0x7f3ddc079ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:09.717 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.723+0000 7f3e05301640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3ddc0777e0 0x7f3ddc079ca0 secure :-1 s=READY pgs=4282 cs=0 l=1 rev1=1 crypto rx=0x7f3df4005fd0 tx=0x7f3df4005ea0 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:28:09.813 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:09.815+0000 7f3e0758c640 1 -- 192.168.123.101:0/840518915 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12} v 0) -- 0x7f3dc8005480 con 0x7f3e00113a60 2026-03-09T16:28:10.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: audit 2026-03-09T16:28:08.905754+0000 mon.a (mon.0) 4009 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: audit 2026-03-09T16:28:08.905754+0000 mon.a (mon.0) 4009 : audit [INF] from='client.? 
192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: cluster 2026-03-09T16:28:08.911969+0000 mon.a (mon.0) 4010 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: cluster 2026-03-09T16:28:08.911969+0000 mon.a (mon.0) 4010 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: audit 2026-03-09T16:28:08.963899+0000 mon.a (mon.0) 4011 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: audit 2026-03-09T16:28:08.963899+0000 mon.a (mon.0) 4011 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: cluster 2026-03-09T16:28:09.145027+0000 mgr.y (mgr.14520) 1273 : cluster [DBG] pgmap v1714: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: cluster 2026-03-09T16:28:09.145027+0000 mgr.y (mgr.14520) 1273 : cluster [DBG] pgmap v1714: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: cluster 2026-03-09T16:28:09.601516+0000 mon.a (mon.0) 4012 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: cluster 2026-03-09T16:28:09.601516+0000 mon.a (mon.0) 4012 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: cluster 2026-03-09T16:28:09.601715+0000 mon.a (mon.0) 4013 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: cluster 2026-03-09T16:28:09.601715+0000 mon.a (mon.0) 4013 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: audit 2026-03-09T16:28:09.605315+0000 mon.a (mon.0) 4014 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: audit 2026-03-09T16:28:09.605315+0000 mon.a (mon.0) 4014 : audit [INF] from='client.? 
192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: cluster 2026-03-09T16:28:09.621526+0000 mon.a (mon.0) 4015 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: cluster 2026-03-09T16:28:09.621526+0000 mon.a (mon.0) 4015 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: audit 2026-03-09T16:28:09.819613+0000 mon.a (mon.0) 4016 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:09 vm01 bash[28152]: audit 2026-03-09T16:28:09.819613+0000 mon.a (mon.0) 4016 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: audit 2026-03-09T16:28:08.905754+0000 mon.a (mon.0) 4009 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: audit 2026-03-09T16:28:08.905754+0000 mon.a (mon.0) 4009 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: cluster 2026-03-09T16:28:08.911969+0000 mon.a (mon.0) 4010 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: cluster 2026-03-09T16:28:08.911969+0000 mon.a (mon.0) 4010 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: audit 2026-03-09T16:28:08.963899+0000 mon.a (mon.0) 4011 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: audit 2026-03-09T16:28:08.963899+0000 mon.a (mon.0) 4011 : audit [INF] from='client.? 
192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: cluster 2026-03-09T16:28:09.145027+0000 mgr.y (mgr.14520) 1273 : cluster [DBG] pgmap v1714: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: cluster 2026-03-09T16:28:09.145027+0000 mgr.y (mgr.14520) 1273 : cluster [DBG] pgmap v1714: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: cluster 2026-03-09T16:28:09.601516+0000 mon.a (mon.0) 4012 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: cluster 2026-03-09T16:28:09.601516+0000 mon.a (mon.0) 4012 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: cluster 2026-03-09T16:28:09.601715+0000 mon.a (mon.0) 4013 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: cluster 2026-03-09T16:28:09.601715+0000 mon.a (mon.0) 4013 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: audit 2026-03-09T16:28:09.605315+0000 mon.a (mon.0) 4014 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: audit 2026-03-09T16:28:09.605315+0000 mon.a (mon.0) 4014 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: cluster 2026-03-09T16:28:09.621526+0000 mon.a (mon.0) 4015 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T16:28:10.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: cluster 2026-03-09T16:28:09.621526+0000 mon.a (mon.0) 4015 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T16:28:10.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: audit 2026-03-09T16:28:09.819613+0000 mon.a (mon.0) 4016 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:10.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:09 vm01 bash[20728]: audit 2026-03-09T16:28:09.819613+0000 mon.a (mon.0) 4016 : audit [INF] from='client.? 
192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: audit 2026-03-09T16:28:08.905754+0000 mon.a (mon.0) 4009 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: audit 2026-03-09T16:28:08.905754+0000 mon.a (mon.0) 4009 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: cluster 2026-03-09T16:28:08.911969+0000 mon.a (mon.0) 4010 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: cluster 2026-03-09T16:28:08.911969+0000 mon.a (mon.0) 4010 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: audit 2026-03-09T16:28:08.963899+0000 mon.a (mon.0) 4011 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: audit 2026-03-09T16:28:08.963899+0000 mon.a (mon.0) 4011 : audit [INF] from='client.? 
192.168.123.101:0/1886818805' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: cluster 2026-03-09T16:28:09.145027+0000 mgr.y (mgr.14520) 1273 : cluster [DBG] pgmap v1714: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: cluster 2026-03-09T16:28:09.145027+0000 mgr.y (mgr.14520) 1273 : cluster [DBG] pgmap v1714: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: cluster 2026-03-09T16:28:09.601516+0000 mon.a (mon.0) 4012 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: cluster 2026-03-09T16:28:09.601516+0000 mon.a (mon.0) 4012 : cluster [INF] pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' no longer out of quota; removing NO_QUOTA flag 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: cluster 2026-03-09T16:28:09.601715+0000 mon.a (mon.0) 4013 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: cluster 2026-03-09T16:28:09.601715+0000 mon.a (mon.0) 4013 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: audit 2026-03-09T16:28:09.605315+0000 mon.a (mon.0) 4014 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: audit 2026-03-09T16:28:09.605315+0000 mon.a (mon.0) 4014 : audit [INF] from='client.? 192.168.123.101:0/1886818805' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: cluster 2026-03-09T16:28:09.621526+0000 mon.a (mon.0) 4015 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: cluster 2026-03-09T16:28:09.621526+0000 mon.a (mon.0) 4015 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: audit 2026-03-09T16:28:09.819613+0000 mon.a (mon.0) 4016 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:10.382 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:09 vm09 bash[22983]: audit 2026-03-09T16:28:09.819613+0000 mon.a (mon.0) 4016 : audit [INF] from='client.? 
192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:10.601 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.607+0000 7f3dee7fc640 1 -- 192.168.123.101:0/840518915 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]=0 pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' created v774) ==== 176+0+0 (secure 0 0 0) 0x7f3dfc0bf330 con 0x7f3e00113a60 2026-03-09T16:28:10.673 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 -- 192.168.123.101:0/840518915 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12} v 0) -- 0x7f3dc8004970 con 0x7f3e00113a60 2026-03-09T16:28:10.674 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3dee7fc640 1 -- 192.168.123.101:0/840518915 <== mon.0 v2:192.168.123.101:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]=0 pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' already exists v774) ==== 183+0+0 (secure 0 0 0) 0x7f3dfc100660 con 0x7f3e00113a60 2026-03-09T16:28:10.674 INFO:tasks.workunit.client.0.vm01.stderr:pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' already exists 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 -- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3ddc0777e0 msgr2=0x7f3ddc079ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3ddc0777e0 0x7f3ddc079ca0 secure :-1 s=READY pgs=4282 cs=0 l=1 rev1=1 crypto rx=0x7f3df4005fd0 tx=0x7f3df4005ea0 comp rx=0 tx=0).stop 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 -- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e00113a60 msgr2=0x7f3e001a8910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e00113a60 0x7f3e001a8910 secure :-1 s=READY pgs=3094 cs=0 l=1 rev1=1 crypto rx=0x7f3dfc098570 tx=0x7f3dfc007890 comp rx=0 tx=0).stop 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 -- 192.168.123.101:0/840518915 shutdown_connections 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3ddc0777e0 0x7f3ddc079ca0 unknown :-1 s=CLOSED pgs=4282 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3e00113a60 0x7f3e001a8910 unknown :-1 s=CLOSED pgs=3094 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3e00077f50 0x7f3e001a4580 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 --2- 192.168.123.101:0/840518915 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3e00077630 0x7f3e001a4040 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 -- 192.168.123.101:0/840518915 >> 192.168.123.101:0/840518915 conn(0x7f3e00100880 msgr2=0x7f3e001150b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 -- 192.168.123.101:0/840518915 shutdown_connections 2026-03-09T16:28:10.676 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.679+0000 7f3e0758c640 1 -- 192.168.123.101:0/840518915 wait complete. 2026-03-09T16:28:10.688 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool application enable bdba544a-024a-4b6d-a6dd-2ee6648240fd rados 2026-03-09T16:28:10.754 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/997329531 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ae0104e50 msgr2=0x7f9ae0105230 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:10.754 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 --2- 192.168.123.101:0/997329531 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ae0104e50 0x7f9ae0105230 secure :-1 s=READY pgs=3095 cs=0 l=1 rev1=1 crypto rx=0x7f9ad0009a30 tx=0x7f9ad001c900 comp rx=0 tx=0).stop 2026-03-09T16:28:10.754 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/997329531 shutdown_connections 2026-03-09T16:28:10.754 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 --2- 192.168.123.101:0/997329531 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ae0109f80 0x7f9ae0111b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:10.754 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 --2- 192.168.123.101:0/997329531 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ae0105800 0x7f9ae0109850 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:10.754 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 --2- 192.168.123.101:0/997329531 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ae0104e50 0x7f9ae0105230 unknown :-1 s=CLOSED pgs=3095 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:10.754 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/997329531 >> 192.168.123.101:0/997329531 conn(0x7f9ae01008f0 msgr2=0x7f9ae0102d10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:28:10.754 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/997329531 shutdown_connections 2026-03-09T16:28:10.755 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/997329531 wait complete. 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 Processor -- start 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 -- start start 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ae0104e50 0x7f9ae019f060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ae0105800 0x7f9ae019f5a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ae0109f80 0x7f9ae01a3930 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f9ae0116a40 con 0x7f9ae0109f80 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f9ae01168c0 con 0x7f9ae0104e50 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f9ae0116bc0 con 0x7f9ae0105800 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae62f3640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ae0109f80 0x7f9ae01a3930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae52f1640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ae0105800 0x7f9ae019f5a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae52f1640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ae0105800 0x7f9ae019f5a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3301/0 says I am v2:192.168.123.101:60378/0 (socket says 192.168.123.101:60378) 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae52f1640 1 -- 192.168.123.101:0/2827460990 learned_addr learned my addr 192.168.123.101:0/2827460990 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:28:10.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae5af2640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ae0104e50 0x7f9ae019f060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 
l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:10.756 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae62f3640 1 -- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ae0105800 msgr2=0x7f9ae019f5a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:10.756 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae62f3640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ae0105800 0x7f9ae019f5a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:10.756 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae62f3640 1 -- 192.168.123.101:0/2827460990 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ae0104e50 msgr2=0x7f9ae019f060 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:10.756 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae62f3640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ae0104e50 0x7f9ae019f060 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:10.756 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae62f3640 1 -- 192.168.123.101:0/2827460990 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9ae01a4010 con 0x7f9ae0109f80 2026-03-09T16:28:10.756 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae62f3640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ae0109f80 0x7f9ae01a3930 secure :-1 s=READY pgs=3096 cs=0 l=1 rev1=1 crypto rx=0x7f9adc00ceb0 tx=0x7f9adc00a430 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:28:10.756 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9aceffd640 1 -- 192.168.123.101:0/2827460990 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9adc017070 con 0x7f9ae0109f80 2026-03-09T16:28:10.756 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9aceffd640 1 -- 192.168.123.101:0/2827460990 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f9adc00ae00 con 0x7f9ae0109f80 2026-03-09T16:28:10.756 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9aceffd640 1 -- 192.168.123.101:0/2827460990 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9adc012400 con 0x7f9ae0109f80 2026-03-09T16:28:10.756 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/2827460990 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f9ae01a4300 con 0x7f9ae0109f80 2026-03-09T16:28:10.756 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.759+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/2827460990 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f9ae01abbe0 con 0x7f9ae0109f80 2026-03-09T16:28:10.759 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.763+0000 7f9aceffd640 1 -- 192.168.123.101:0/2827460990 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 
(secure 0 0 0) 0x7f9adc005ce0 con 0x7f9ae0109f80 2026-03-09T16:28:10.759 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.763+0000 7f9aceffd640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9abc0777e0 0x7f9abc079ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:10.759 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.763+0000 7f9aceffd640 1 -- 192.168.123.101:0/2827460990 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(774..774 src has 257..774) ==== 9360+0+0 (secure 0 0 0) 0x7f9adc09a750 con 0x7f9ae0109f80 2026-03-09T16:28:10.759 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.763+0000 7f9ae5af2640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9abc0777e0 0x7f9abc079ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:10.759 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.763+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/2827460990 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9aa8005190 con 0x7f9ae0109f80 2026-03-09T16:28:10.762 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.767+0000 7f9ae5af2640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9abc0777e0 0x7f9abc079ca0 secure :-1 s=READY pgs=4283 cs=0 l=1 rev1=1 crypto rx=0x7f9ad001cde0 tx=0x7f9ad00a7040 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:28:10.762 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.767+0000 7f9aceffd640 1 -- 192.168.123.101:0/2827460990 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f9adc066960 con 0x7f9ae0109f80 2026-03-09T16:28:10.858 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:10.863+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/2827460990 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"} v 0) -- 0x7f9aa8005480 con 0x7f9ae0109f80 2026-03-09T16:28:11.604 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:11.607+0000 7f9aceffd640 1 -- 192.168.123.101:0/2827460990 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]=0 enabled application 'rados' on pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' v775) ==== 213+0+0 (secure 0 0 0) 0x7f9adc06b810 con 0x7f9ae0109f80 2026-03-09T16:28:11.674 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:11.679+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/2827460990 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"} v 0) -- 0x7f9aa8004900 con 0x7f9ae0109f80 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:11 vm01 bash[20728]: audit 2026-03-09T16:28:10.608938+0000 mon.a (mon.0) 4017 : audit [INF] from='client.? 
192.168.123.101:0/840518915' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]': finished 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:11 vm01 bash[20728]: audit 2026-03-09T16:28:10.608938+0000 mon.a (mon.0) 4017 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]': finished 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:11 vm01 bash[20728]: cluster 2026-03-09T16:28:10.624888+0000 mon.a (mon.0) 4018 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:11 vm01 bash[20728]: cluster 2026-03-09T16:28:10.624888+0000 mon.a (mon.0) 4018 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:11 vm01 bash[20728]: audit 2026-03-09T16:28:10.681902+0000 mon.a (mon.0) 4019 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:11 vm01 bash[20728]: audit 2026-03-09T16:28:10.681902+0000 mon.a (mon.0) 4019 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:11 vm01 bash[20728]: audit 2026-03-09T16:28:10.866189+0000 mon.a (mon.0) 4020 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:11 vm01 bash[20728]: audit 2026-03-09T16:28:10.866189+0000 mon.a (mon.0) 4020 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:11 vm01 bash[20728]: cluster 2026-03-09T16:28:11.145386+0000 mgr.y (mgr.14520) 1274 : cluster [DBG] pgmap v1717: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:11 vm01 bash[20728]: cluster 2026-03-09T16:28:11.145386+0000 mgr.y (mgr.14520) 1274 : cluster [DBG] pgmap v1717: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:11 vm01 bash[28152]: audit 2026-03-09T16:28:10.608938+0000 mon.a (mon.0) 4017 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]': finished 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:11 vm01 bash[28152]: audit 2026-03-09T16:28:10.608938+0000 mon.a (mon.0) 4017 : audit [INF] from='client.? 
192.168.123.101:0/840518915' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]': finished 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:11 vm01 bash[28152]: cluster 2026-03-09T16:28:10.624888+0000 mon.a (mon.0) 4018 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:11 vm01 bash[28152]: cluster 2026-03-09T16:28:10.624888+0000 mon.a (mon.0) 4018 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:11 vm01 bash[28152]: audit 2026-03-09T16:28:10.681902+0000 mon.a (mon.0) 4019 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:11 vm01 bash[28152]: audit 2026-03-09T16:28:10.681902+0000 mon.a (mon.0) 4019 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:11 vm01 bash[28152]: audit 2026-03-09T16:28:10.866189+0000 mon.a (mon.0) 4020 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:11 vm01 bash[28152]: audit 2026-03-09T16:28:10.866189+0000 mon.a (mon.0) 4020 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:11 vm01 bash[28152]: cluster 2026-03-09T16:28:11.145386+0000 mgr.y (mgr.14520) 1274 : cluster [DBG] pgmap v1717: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:11.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:11 vm01 bash[28152]: cluster 2026-03-09T16:28:11.145386+0000 mgr.y (mgr.14520) 1274 : cluster [DBG] pgmap v1717: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:11 vm09 bash[22983]: audit 2026-03-09T16:28:10.608938+0000 mon.a (mon.0) 4017 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]': finished 2026-03-09T16:28:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:11 vm09 bash[22983]: audit 2026-03-09T16:28:10.608938+0000 mon.a (mon.0) 4017 : audit [INF] from='client.? 
192.168.123.101:0/840518915' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]': finished 2026-03-09T16:28:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:11 vm09 bash[22983]: cluster 2026-03-09T16:28:10.624888+0000 mon.a (mon.0) 4018 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T16:28:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:11 vm09 bash[22983]: cluster 2026-03-09T16:28:10.624888+0000 mon.a (mon.0) 4018 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-09T16:28:12.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:11 vm09 bash[22983]: audit 2026-03-09T16:28:10.681902+0000 mon.a (mon.0) 4019 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:11 vm09 bash[22983]: audit 2026-03-09T16:28:10.681902+0000 mon.a (mon.0) 4019 : audit [INF] from='client.? 192.168.123.101:0/840518915' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pg_num": 12}]: dispatch 2026-03-09T16:28:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:11 vm09 bash[22983]: audit 2026-03-09T16:28:10.866189+0000 mon.a (mon.0) 4020 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:11 vm09 bash[22983]: audit 2026-03-09T16:28:10.866189+0000 mon.a (mon.0) 4020 : audit [INF] from='client.? 
192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:11 vm09 bash[22983]: cluster 2026-03-09T16:28:11.145386+0000 mgr.y (mgr.14520) 1274 : cluster [DBG] pgmap v1717: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:12.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:11 vm09 bash[22983]: cluster 2026-03-09T16:28:11.145386+0000 mgr.y (mgr.14520) 1274 : cluster [DBG] pgmap v1717: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:12.609 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9aceffd640 1 -- 192.168.123.101:0/2827460990 <== mon.0 v2:192.168.123.101:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]=0 enabled application 'rados' on pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' v776) ==== 213+0+0 (secure 0 0 0) 0x7f9adc05ea00 con 0x7f9ae0109f80 2026-03-09T16:28:12.609 INFO:tasks.workunit.client.0.vm01.stderr:enabled application 'rados' on pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' 2026-03-09T16:28:12.612 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9abc0777e0 msgr2=0x7f9abc079ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:12.612 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9abc0777e0 0x7f9abc079ca0 secure :-1 s=READY pgs=4283 cs=0 l=1 rev1=1 crypto rx=0x7f9ad001cde0 tx=0x7f9ad00a7040 comp rx=0 tx=0).stop 2026-03-09T16:28:12.612 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ae0109f80 msgr2=0x7f9ae01a3930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:12.612 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ae0109f80 0x7f9ae01a3930 secure :-1 s=READY pgs=3096 cs=0 l=1 rev1=1 crypto rx=0x7f9adc00ceb0 tx=0x7f9adc00a430 comp rx=0 tx=0).stop 2026-03-09T16:28:12.612 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/2827460990 shutdown_connections 2026-03-09T16:28:12.612 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f9abc0777e0 0x7f9abc079ca0 unknown :-1 s=CLOSED pgs=4283 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:12.612 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f9ae0109f80 0x7f9ae01a3930 unknown :-1 s=CLOSED pgs=3096 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:12.612 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f9ae0105800 0x7f9ae019f5a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:12.612 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 --2- 192.168.123.101:0/2827460990 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ae0104e50 0x7f9ae019f060 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:12.612 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/2827460990 >> 192.168.123.101:0/2827460990 conn(0x7f9ae01008f0 msgr2=0x7f9ae01010e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:28:12.612 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/2827460990 shutdown_connections 2026-03-09T16:28:12.612 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.615+0000 7f9ae7d7d640 1 -- 192.168.123.101:0/2827460990 wait complete. 2026-03-09T16:28:12.630 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool set-quota bdba544a-024a-4b6d-a6dd-2ee6648240fd max_objects 10 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- 192.168.123.101:0/1754930936 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5ec0104e50 msgr2=0x7f5ec0105230 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 --2- 192.168.123.101:0/1754930936 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5ec0104e50 0x7f5ec0105230 secure :-1 s=READY pgs=3016 cs=0 l=1 rev1=1 crypto rx=0x7f5eac009a30 tx=0x7f5eac01c920 comp rx=0 tx=0).stop 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- 192.168.123.101:0/1754930936 shutdown_connections 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 --2- 192.168.123.101:0/1754930936 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5ec0109f80 0x7f5ec0111b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 --2- 192.168.123.101:0/1754930936 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5ec0105800 0x7f5ec0109850 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 --2- 192.168.123.101:0/1754930936 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5ec0104e50 0x7f5ec0105230 unknown :-1 s=CLOSED pgs=3016 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- 192.168.123.101:0/1754930936 >> 192.168.123.101:0/1754930936 conn(0x7f5ec0100910 msgr2=0x7f5ec0102d30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- 192.168.123.101:0/1754930936 shutdown_connections 2026-03-09T16:28:12.706 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- 192.168.123.101:0/1754930936 wait complete. 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 Processor -- start 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- start start 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5ec0104e50 0x7f5ec019f080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5ec0105800 0x7f5ec019f5c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5ec0109f80 0x7f5ec01a3950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f5ec0116af0 con 0x7f5ec0109f80 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f5ec0116970 con 0x7f5ec0104e50 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f5ec0116c70 con 0x7f5ec0105800 2026-03-09T16:28:12.706 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebffff640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5ec0104e50 0x7f5ec019f080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebffff640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5ec0104e50 0x7f5ec019f080 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:36452/0 (socket says 192.168.123.101:36452) 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebffff640 1 -- 192.168.123.101:0/913258061 learned_addr learned my addr 192.168.123.101:0/913258061 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec4f1b640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5ec0109f80 0x7f5ec01a3950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebf7fe640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5ec0105800 0x7f5ec019f5c0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebffff640 1 -- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5ec0105800 msgr2=0x7f5ec019f5c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebffff640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5ec0105800 0x7f5ec019f5c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebffff640 1 -- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5ec0109f80 msgr2=0x7f5ec01a3950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebffff640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5ec0109f80 0x7f5ec01a3950 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebffff640 1 -- 192.168.123.101:0/913258061 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5ec01a40d0 con 0x7f5ec0104e50 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec4f1b640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5ec0109f80 0x7f5ec01a3950 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebffff640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5ec0104e50 0x7f5ec019f080 secure :-1 s=READY pgs=3098 cs=0 l=1 rev1=1 crypto rx=0x7f5eac01ce00 tx=0x7f5eac0a5f10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebd7fa640 1 -- 192.168.123.101:0/913258061 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5eac0bb070 con 0x7f5ec0104e50 2026-03-09T16:28:12.707 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- 192.168.123.101:0/913258061 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f5ec01a4360 con 0x7f5ec0104e50 2026-03-09T16:28:12.708 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- 192.168.123.101:0/913258061 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5ec01abc00 con 0x7f5ec0104e50 2026-03-09T16:28:12.708 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebd7fa640 1 -- 192.168.123.101:0/913258061 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f5eac0a9020 con 0x7f5ec0104e50 2026-03-09T16:28:12.708 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ebd7fa640 1 -- 192.168.123.101:0/913258061 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5eac0ae420 con 0x7f5ec0104e50 2026-03-09T16:28:12.708 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.711+0000 7f5ec69a5640 1 -- 192.168.123.101:0/913258061 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5ec0106330 con 0x7f5ec0104e50 2026-03-09T16:28:12.710 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.715+0000 7f5ebd7fa640 1 -- 192.168.123.101:0/913258061 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f5eac0026f0 con 0x7f5ec0104e50 2026-03-09T16:28:12.710 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.715+0000 7f5ebd7fa640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f5ea0077710 0x7f5ea0079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:28:12.710 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.715+0000 7f5ebd7fa640 1 -- 192.168.123.101:0/913258061 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(776..776 src has 257..776) ==== 9373+0+0 (secure 0 0 0) 0x7f5eac134420 con 0x7f5ec0104e50 2026-03-09T16:28:12.710 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.715+0000 7f5ebf7fe640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f5ea0077710 0x7f5ea0079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:28:12.710 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.715+0000 7f5ebf7fe640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f5ea0077710 0x7f5ea0079bd0 secure :-1 s=READY pgs=4284 
cs=0 l=1 rev1=1 crypto rx=0x7f5eb0005e10 tx=0x7f5eb0005d60 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:28:12.714 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.719+0000 7f5ebd7fa640 1 -- 192.168.123.101:0/913258061 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f5eac016630 con 0x7f5ec0104e50 2026-03-09T16:28:12.806 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:12.811+0000 7f5ec69a5640 1 -- 192.168.123.101:0/913258061 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"} v 0) -- 0x7f5ec01a03e0 con 0x7f5ec0104e50 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:28:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:28:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: cluster 2026-03-09T16:28:11.609694+0000 mon.a (mon.0) 4021 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: cluster 2026-03-09T16:28:11.609694+0000 mon.a (mon.0) 4021 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: audit 2026-03-09T16:28:11.612010+0000 mon.a (mon.0) 4022 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: audit 2026-03-09T16:28:11.612010+0000 mon.a (mon.0) 4022 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: cluster 2026-03-09T16:28:11.615413+0000 mon.a (mon.0) 4023 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: cluster 2026-03-09T16:28:11.615413+0000 mon.a (mon.0) 4023 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: audit 2026-03-09T16:28:11.682964+0000 mon.a (mon.0) 4024 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: audit 2026-03-09T16:28:11.682964+0000 mon.a (mon.0) 4024 : audit [INF] from='client.? 
192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: audit 2026-03-09T16:28:12.617211+0000 mon.a (mon.0) 4025 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: audit 2026-03-09T16:28:12.617211+0000 mon.a (mon.0) 4025 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: cluster 2026-03-09T16:28:12.622326+0000 mon.a (mon.0) 4026 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:12 vm01 bash[28152]: cluster 2026-03-09T16:28:12.622326+0000 mon.a (mon.0) 4026 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: cluster 2026-03-09T16:28:11.609694+0000 mon.a (mon.0) 4021 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: cluster 2026-03-09T16:28:11.609694+0000 mon.a (mon.0) 4021 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: audit 2026-03-09T16:28:11.612010+0000 mon.a (mon.0) 4022 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: audit 2026-03-09T16:28:11.612010+0000 mon.a (mon.0) 4022 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: cluster 2026-03-09T16:28:11.615413+0000 mon.a (mon.0) 4023 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: cluster 2026-03-09T16:28:11.615413+0000 mon.a (mon.0) 4023 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: audit 2026-03-09T16:28:11.682964+0000 mon.a (mon.0) 4024 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: audit 2026-03-09T16:28:11.682964+0000 mon.a (mon.0) 4024 : audit [INF] from='client.? 
192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: audit 2026-03-09T16:28:12.617211+0000 mon.a (mon.0) 4025 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: audit 2026-03-09T16:28:12.617211+0000 mon.a (mon.0) 4025 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: cluster 2026-03-09T16:28:12.622326+0000 mon.a (mon.0) 4026 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-09T16:28:12.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:12 vm01 bash[20728]: cluster 2026-03-09T16:28:12.622326+0000 mon.a (mon.0) 4026 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: cluster 2026-03-09T16:28:11.609694+0000 mon.a (mon.0) 4021 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: cluster 2026-03-09T16:28:11.609694+0000 mon.a (mon.0) 4021 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: audit 2026-03-09T16:28:11.612010+0000 mon.a (mon.0) 4022 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: audit 2026-03-09T16:28:11.612010+0000 mon.a (mon.0) 4022 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: cluster 2026-03-09T16:28:11.615413+0000 mon.a (mon.0) 4023 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: cluster 2026-03-09T16:28:11.615413+0000 mon.a (mon.0) 4023 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: audit 2026-03-09T16:28:11.682964+0000 mon.a (mon.0) 4024 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: audit 2026-03-09T16:28:11.682964+0000 mon.a (mon.0) 4024 : audit [INF] from='client.? 
192.168.123.101:0/2827460990' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]: dispatch 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: audit 2026-03-09T16:28:12.617211+0000 mon.a (mon.0) 4025 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: audit 2026-03-09T16:28:12.617211+0000 mon.a (mon.0) 4025 : audit [INF] from='client.? 192.168.123.101:0/2827460990' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "app": "rados"}]': finished 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: cluster 2026-03-09T16:28:12.622326+0000 mon.a (mon.0) 4026 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-09T16:28:13.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:12 vm09 bash[22983]: cluster 2026-03-09T16:28:12.622326+0000 mon.a (mon.0) 4026 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-09T16:28:13.658 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:13.663+0000 7f5ebd7fa640 1 -- 192.168.123.101:0/913258061 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool bdba544a-024a-4b6d-a6dd-2ee6648240fd v777) ==== 223+0+0 (secure 0 0 0) 0x7f5eac100600 con 0x7f5ec0104e50 2026-03-09T16:28:13.718 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:13.723+0000 7f5ec69a5640 1 -- 192.168.123.101:0/913258061 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"} v 0) -- 0x7f5ec00632a0 con 0x7f5ec0104e50 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:12.807885+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:12.807885+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:12.815439+0000 mon.a (mon.0) 4027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:12.815439+0000 mon.a (mon.0) 4027 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: cluster 2026-03-09T16:28:13.145753+0000 mgr.y (mgr.14520) 1275 : cluster [DBG] pgmap v1720: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: cluster 2026-03-09T16:28:13.145753+0000 mgr.y (mgr.14520) 1275 : cluster [DBG] pgmap v1720: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:13.274062+0000 mon.a (mon.0) 4028 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:13.274062+0000 mon.a (mon.0) 4028 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:13.603737+0000 mon.a (mon.0) 4029 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:13.603737+0000 mon.a (mon.0) 4029 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:13.604319+0000 mon.a (mon.0) 4030 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:13.604319+0000 mon.a (mon.0) 4030 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:13.609938+0000 mon.a (mon.0) 4031 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:13 vm01 bash[28152]: audit 2026-03-09T16:28:13.609938+0000 mon.a (mon.0) 4031 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:12.807885+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:12.807885+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 
192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:12.815439+0000 mon.a (mon.0) 4027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:12.815439+0000 mon.a (mon.0) 4027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:13.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: cluster 2026-03-09T16:28:13.145753+0000 mgr.y (mgr.14520) 1275 : cluster [DBG] pgmap v1720: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:28:13.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: cluster 2026-03-09T16:28:13.145753+0000 mgr.y (mgr.14520) 1275 : cluster [DBG] pgmap v1720: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:28:13.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:13.274062+0000 mon.a (mon.0) 4028 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:28:13.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:13.274062+0000 mon.a (mon.0) 4028 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:28:13.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:13.603737+0000 mon.a (mon.0) 4029 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:28:13.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:13.603737+0000 mon.a (mon.0) 4029 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:28:13.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:13.604319+0000 mon.a (mon.0) 4030 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:28:13.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:13.604319+0000 mon.a (mon.0) 4030 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:28:13.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:13.609938+0000 mon.a (mon.0) 4031 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:13.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:13 vm01 bash[20728]: audit 2026-03-09T16:28:13.609938+0000 mon.a (mon.0) 4031 : 
audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:12.807885+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:12.807885+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:12.815439+0000 mon.a (mon.0) 4027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:12.815439+0000 mon.a (mon.0) 4027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: cluster 2026-03-09T16:28:13.145753+0000 mgr.y (mgr.14520) 1275 : cluster [DBG] pgmap v1720: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: cluster 2026-03-09T16:28:13.145753+0000 mgr.y (mgr.14520) 1275 : cluster [DBG] pgmap v1720: 188 pgs: 12 unknown, 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:13.274062+0000 mon.a (mon.0) 4028 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:13.274062+0000 mon.a (mon.0) 4028 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:13.603737+0000 mon.a (mon.0) 4029 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:13.603737+0000 mon.a (mon.0) 4029 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:13.604319+0000 mon.a (mon.0) 4030 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:28:14.132 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:13.604319+0000 mon.a (mon.0) 4030 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:13.609938+0000 mon.a (mon.0) 4031 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:14.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:13 vm09 bash[22983]: audit 2026-03-09T16:28:13.609938+0000 mon.a (mon.0) 4031 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:14.669 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ebd7fa640 1 -- 192.168.123.101:0/913258061 <== mon.1 v2:192.168.123.109:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool bdba544a-024a-4b6d-a6dd-2ee6648240fd v778) ==== 223+0+0 (secure 0 0 0) 0x7f5eac1054b0 con 0x7f5ec0104e50 2026-03-09T16:28:14.669 INFO:tasks.workunit.client.0.vm01.stderr:set-quota max_objects = 10 for pool bdba544a-024a-4b6d-a6dd-2ee6648240fd 2026-03-09T16:28:14.671 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 -- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f5ea0077710 msgr2=0x7f5ea0079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:14.671 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f5ea0077710 0x7f5ea0079bd0 secure :-1 s=READY pgs=4284 cs=0 l=1 rev1=1 crypto rx=0x7f5eb0005e10 tx=0x7f5eb0005d60 comp rx=0 tx=0).stop 2026-03-09T16:28:14.671 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 -- 192.168.123.101:0/913258061 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5ec0104e50 msgr2=0x7f5ec019f080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:28:14.671 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5ec0104e50 0x7f5ec019f080 secure :-1 s=READY pgs=3098 cs=0 l=1 rev1=1 crypto rx=0x7f5eac01ce00 tx=0x7f5eac0a5f10 comp rx=0 tx=0).stop 2026-03-09T16:28:14.672 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 -- 192.168.123.101:0/913258061 shutdown_connections 2026-03-09T16:28:14.672 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f5ea0077710 0x7f5ea0079bd0 unknown :-1 s=CLOSED pgs=4284 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:14.672 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f5ec0109f80 0x7f5ec01a3950 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:14.672 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 
--2- 192.168.123.101:0/913258061 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f5ec0105800 0x7f5ec019f5c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:14.672 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 --2- 192.168.123.101:0/913258061 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5ec0104e50 0x7f5ec019f080 unknown :-1 s=CLOSED pgs=3098 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:28:14.672 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 -- 192.168.123.101:0/913258061 >> 192.168.123.101:0/913258061 conn(0x7f5ec0100910 msgr2=0x7f5ec0101030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:28:14.672 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 -- 192.168.123.101:0/913258061 shutdown_connections 2026-03-09T16:28:14.672 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:28:14.675+0000 7f5ec69a5640 1 -- 192.168.123.101:0/913258061 wait complete. 2026-03-09T16:28:14.685 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 30 2026-03-09T16:28:15.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:14 vm09 bash[22983]: audit 2026-03-09T16:28:13.657120+0000 mon.a (mon.0) 4032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:15.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:14 vm09 bash[22983]: audit 2026-03-09T16:28:13.657120+0000 mon.a (mon.0) 4032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:15.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:14 vm09 bash[22983]: cluster 2026-03-09T16:28:13.664356+0000 mon.a (mon.0) 4033 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T16:28:15.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:14 vm09 bash[22983]: cluster 2026-03-09T16:28:13.664356+0000 mon.a (mon.0) 4033 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T16:28:15.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:14 vm09 bash[22983]: audit 2026-03-09T16:28:13.719420+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:15.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:14 vm09 bash[22983]: audit 2026-03-09T16:28:13.719420+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:14 vm09 bash[22983]: audit 2026-03-09T16:28:13.726931+0000 mon.a (mon.0) 4034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:15.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:14 vm09 bash[22983]: audit 2026-03-09T16:28:13.726931+0000 mon.a (mon.0) 4034 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:14 vm01 bash[28152]: audit 2026-03-09T16:28:13.657120+0000 mon.a (mon.0) 4032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:14 vm01 bash[28152]: audit 2026-03-09T16:28:13.657120+0000 mon.a (mon.0) 4032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:14 vm01 bash[28152]: cluster 2026-03-09T16:28:13.664356+0000 mon.a (mon.0) 4033 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:14 vm01 bash[28152]: cluster 2026-03-09T16:28:13.664356+0000 mon.a (mon.0) 4033 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:14 vm01 bash[28152]: audit 2026-03-09T16:28:13.719420+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:14 vm01 bash[28152]: audit 2026-03-09T16:28:13.719420+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:14 vm01 bash[28152]: audit 2026-03-09T16:28:13.726931+0000 mon.a (mon.0) 4034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:14 vm01 bash[28152]: audit 2026-03-09T16:28:13.726931+0000 mon.a (mon.0) 4034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:14 vm01 bash[20728]: audit 2026-03-09T16:28:13.657120+0000 mon.a (mon.0) 4032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:14 vm01 bash[20728]: audit 2026-03-09T16:28:13.657120+0000 mon.a (mon.0) 4032 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:14 vm01 bash[20728]: cluster 2026-03-09T16:28:13.664356+0000 mon.a (mon.0) 4033 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:14 vm01 bash[20728]: cluster 2026-03-09T16:28:13.664356+0000 mon.a (mon.0) 4033 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:14 vm01 bash[20728]: audit 2026-03-09T16:28:13.719420+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:14 vm01 bash[20728]: audit 2026-03-09T16:28:13.719420+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.101:0/913258061' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:14 vm01 bash[20728]: audit 2026-03-09T16:28:13.726931+0000 mon.a (mon.0) 4034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:15.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:14 vm01 bash[20728]: audit 2026-03-09T16:28:13.726931+0000 mon.a (mon.0) 4034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T16:28:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:15 vm09 bash[22983]: audit 2026-03-09T16:28:14.674507+0000 mon.a (mon.0) 4035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:15 vm09 bash[22983]: audit 2026-03-09T16:28:14.674507+0000 mon.a (mon.0) 4035 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:15 vm09 bash[22983]: cluster 2026-03-09T16:28:14.690127+0000 mon.a (mon.0) 4036 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T16:28:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:15 vm09 bash[22983]: cluster 2026-03-09T16:28:14.690127+0000 mon.a (mon.0) 4036 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T16:28:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:15 vm09 bash[22983]: audit 2026-03-09T16:28:15.028366+0000 mon.a (mon.0) 4037 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:15 vm09 bash[22983]: audit 2026-03-09T16:28:15.028366+0000 mon.a (mon.0) 4037 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:15 vm09 bash[22983]: audit 2026-03-09T16:28:15.029394+0000 mon.a (mon.0) 4038 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:15 vm09 bash[22983]: audit 2026-03-09T16:28:15.029394+0000 mon.a (mon.0) 4038 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:15 vm09 bash[22983]: cluster 2026-03-09T16:28:15.146229+0000 mgr.y (mgr.14520) 1276 : cluster [DBG] pgmap v1723: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-09T16:28:16.132 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:15 vm09 bash[22983]: cluster 2026-03-09T16:28:15.146229+0000 mgr.y (mgr.14520) 1276 : cluster [DBG] pgmap v1723: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-09T16:28:16.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:15 vm01 bash[28152]: audit 2026-03-09T16:28:14.674507+0000 mon.a (mon.0) 4035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:16.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:15 vm01 bash[28152]: audit 2026-03-09T16:28:14.674507+0000 mon.a (mon.0) 4035 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:16.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:15 vm01 bash[28152]: cluster 2026-03-09T16:28:14.690127+0000 mon.a (mon.0) 4036 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T16:28:16.172 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:15 vm01 bash[28152]: cluster 2026-03-09T16:28:14.690127+0000 mon.a (mon.0) 4036 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:15 vm01 bash[28152]: audit 2026-03-09T16:28:15.028366+0000 mon.a (mon.0) 4037 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:15 vm01 bash[28152]: audit 2026-03-09T16:28:15.028366+0000 mon.a (mon.0) 4037 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:15 vm01 bash[28152]: audit 2026-03-09T16:28:15.029394+0000 mon.a (mon.0) 4038 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:15 vm01 bash[28152]: audit 2026-03-09T16:28:15.029394+0000 mon.a (mon.0) 4038 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:15 vm01 bash[28152]: cluster 2026-03-09T16:28:15.146229+0000 mgr.y (mgr.14520) 1276 : cluster [DBG] pgmap v1723: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:15 vm01 bash[28152]: cluster 2026-03-09T16:28:15.146229+0000 mgr.y (mgr.14520) 1276 : cluster [DBG] pgmap v1723: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:15 vm01 bash[20728]: audit 2026-03-09T16:28:14.674507+0000 mon.a (mon.0) 4035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:15 vm01 bash[20728]: audit 2026-03-09T16:28:14.674507+0000 mon.a (mon.0) 4035 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "field": "max_objects", "val": "10"}]': finished 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:15 vm01 bash[20728]: cluster 2026-03-09T16:28:14.690127+0000 mon.a (mon.0) 4036 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:15 vm01 bash[20728]: cluster 2026-03-09T16:28:14.690127+0000 mon.a (mon.0) 4036 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:15 vm01 bash[20728]: audit 2026-03-09T16:28:15.028366+0000 mon.a (mon.0) 4037 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:15 vm01 bash[20728]: audit 2026-03-09T16:28:15.028366+0000 mon.a (mon.0) 4037 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:15 vm01 bash[20728]: audit 2026-03-09T16:28:15.029394+0000 mon.a (mon.0) 4038 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:15 vm01 bash[20728]: audit 2026-03-09T16:28:15.029394+0000 mon.a (mon.0) 4038 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:15 vm01 bash[20728]: cluster 2026-03-09T16:28:15.146229+0000 mgr.y (mgr.14520) 1276 : cluster [DBG] pgmap v1723: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-09T16:28:16.173 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:15 vm01 bash[20728]: cluster 2026-03-09T16:28:15.146229+0000 mgr.y (mgr.14520) 1276 : cluster [DBG] pgmap v1723: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-09T16:28:18.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:28:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:28:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:18 vm09 bash[22983]: cluster 2026-03-09T16:28:17.146576+0000 mgr.y (mgr.14520) 1277 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 926 B/s rd, 1.1 KiB/s wr, 1 op/s 2026-03-09T16:28:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:18 vm09 bash[22983]: cluster 2026-03-09T16:28:17.146576+0000 mgr.y (mgr.14520) 1277 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 926 B/s rd, 1.1 KiB/s wr, 1 op/s 2026-03-09T16:28:18.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:18 vm01 bash[28152]: cluster 2026-03-09T16:28:17.146576+0000 mgr.y (mgr.14520) 1277 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 926 B/s rd, 1.1 KiB/s wr, 1 op/s 2026-03-09T16:28:18.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:18 vm01 bash[28152]: cluster 2026-03-09T16:28:17.146576+0000 mgr.y (mgr.14520) 1277 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 926 
B/s rd, 1.1 KiB/s wr, 1 op/s 2026-03-09T16:28:18.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:18 vm01 bash[20728]: cluster 2026-03-09T16:28:17.146576+0000 mgr.y (mgr.14520) 1277 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 926 B/s rd, 1.1 KiB/s wr, 1 op/s 2026-03-09T16:28:18.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:18 vm01 bash[20728]: cluster 2026-03-09T16:28:17.146576+0000 mgr.y (mgr.14520) 1277 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 926 B/s rd, 1.1 KiB/s wr, 1 op/s 2026-03-09T16:28:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:19 vm09 bash[22983]: audit 2026-03-09T16:28:17.746319+0000 mgr.y (mgr.14520) 1278 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:19 vm09 bash[22983]: audit 2026-03-09T16:28:17.746319+0000 mgr.y (mgr.14520) 1278 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:19.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:19 vm01 bash[28152]: audit 2026-03-09T16:28:17.746319+0000 mgr.y (mgr.14520) 1278 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:19.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:19 vm01 bash[28152]: audit 2026-03-09T16:28:17.746319+0000 mgr.y (mgr.14520) 1278 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:19.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:19 vm01 bash[20728]: audit 2026-03-09T16:28:17.746319+0000 mgr.y (mgr.14520) 1278 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:19 vm01 bash[20728]: audit 2026-03-09T16:28:17.746319+0000 mgr.y (mgr.14520) 1278 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:20 vm09 bash[22983]: cluster 2026-03-09T16:28:19.147303+0000 mgr.y (mgr.14520) 1279 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 941 B/s wr, 1 op/s 2026-03-09T16:28:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:20 vm09 bash[22983]: cluster 2026-03-09T16:28:19.147303+0000 mgr.y (mgr.14520) 1279 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 941 B/s wr, 1 op/s 2026-03-09T16:28:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:20 vm09 bash[22983]: cluster 2026-03-09T16:28:19.603988+0000 mon.a (mon.0) 4039 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:20.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:20 vm09 bash[22983]: cluster 2026-03-09T16:28:19.603988+0000 mon.a (mon.0) 4039 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:20.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 
16:28:20 vm01 bash[28152]: cluster 2026-03-09T16:28:19.147303+0000 mgr.y (mgr.14520) 1279 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 941 B/s wr, 1 op/s 2026-03-09T16:28:20.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:20 vm01 bash[28152]: cluster 2026-03-09T16:28:19.147303+0000 mgr.y (mgr.14520) 1279 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 941 B/s wr, 1 op/s 2026-03-09T16:28:20.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:20 vm01 bash[28152]: cluster 2026-03-09T16:28:19.603988+0000 mon.a (mon.0) 4039 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:20.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:20 vm01 bash[28152]: cluster 2026-03-09T16:28:19.603988+0000 mon.a (mon.0) 4039 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:20.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:20 vm01 bash[20728]: cluster 2026-03-09T16:28:19.147303+0000 mgr.y (mgr.14520) 1279 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 941 B/s wr, 1 op/s 2026-03-09T16:28:20.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:20 vm01 bash[20728]: cluster 2026-03-09T16:28:19.147303+0000 mgr.y (mgr.14520) 1279 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 941 B/s wr, 1 op/s 2026-03-09T16:28:20.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:20 vm01 bash[20728]: cluster 2026-03-09T16:28:19.603988+0000 mon.a (mon.0) 4039 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:20.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:20 vm01 bash[20728]: cluster 2026-03-09T16:28:19.603988+0000 mon.a (mon.0) 4039 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T16:28:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:22 vm09 bash[22983]: cluster 2026-03-09T16:28:21.147684+0000 mgr.y (mgr.14520) 1280 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:28:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:22 vm09 bash[22983]: cluster 2026-03-09T16:28:21.147684+0000 mgr.y (mgr.14520) 1280 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:28:22.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:22 vm01 bash[28152]: cluster 2026-03-09T16:28:21.147684+0000 mgr.y (mgr.14520) 1280 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:28:22.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:22 vm01 bash[28152]: cluster 2026-03-09T16:28:21.147684+0000 mgr.y (mgr.14520) 1280 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:28:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:22 vm01 bash[20728]: cluster 2026-03-09T16:28:21.147684+0000 mgr.y (mgr.14520) 1280 : 
cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:28:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:22 vm01 bash[20728]: cluster 2026-03-09T16:28:21.147684+0000 mgr.y (mgr.14520) 1280 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:28:23.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:28:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:28:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:28:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:24 vm09 bash[22983]: cluster 2026-03-09T16:28:23.148176+0000 mgr.y (mgr.14520) 1281 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 647 B/s wr, 1 op/s 2026-03-09T16:28:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:24 vm09 bash[22983]: cluster 2026-03-09T16:28:23.148176+0000 mgr.y (mgr.14520) 1281 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 647 B/s wr, 1 op/s 2026-03-09T16:28:24.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:24 vm01 bash[28152]: cluster 2026-03-09T16:28:23.148176+0000 mgr.y (mgr.14520) 1281 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 647 B/s wr, 1 op/s 2026-03-09T16:28:24.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:24 vm01 bash[28152]: cluster 2026-03-09T16:28:23.148176+0000 mgr.y (mgr.14520) 1281 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 647 B/s wr, 1 op/s 2026-03-09T16:28:24.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:24 vm01 bash[20728]: cluster 2026-03-09T16:28:23.148176+0000 mgr.y (mgr.14520) 1281 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 647 B/s wr, 1 op/s 2026-03-09T16:28:24.672 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:24 vm01 bash[20728]: cluster 2026-03-09T16:28:23.148176+0000 mgr.y (mgr.14520) 1281 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 647 B/s wr, 1 op/s 2026-03-09T16:28:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:26 vm09 bash[22983]: cluster 2026-03-09T16:28:25.148827+0000 mgr.y (mgr.14520) 1282 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 977 B/s rd, 0 op/s 2026-03-09T16:28:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:26 vm09 bash[22983]: cluster 2026-03-09T16:28:25.148827+0000 mgr.y (mgr.14520) 1282 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 977 B/s rd, 0 op/s 2026-03-09T16:28:26.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:26 vm01 bash[28152]: cluster 2026-03-09T16:28:25.148827+0000 mgr.y (mgr.14520) 1282 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 977 B/s rd, 0 op/s 2026-03-09T16:28:26.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:26 vm01 bash[28152]: cluster 2026-03-09T16:28:25.148827+0000 mgr.y (mgr.14520) 1282 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 977 B/s rd, 0 op/s 2026-03-09T16:28:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:26 vm01 bash[20728]: cluster 2026-03-09T16:28:25.148827+0000 mgr.y (mgr.14520) 1282 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 977 B/s rd, 0 op/s 2026-03-09T16:28:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:26 vm01 bash[20728]: cluster 2026-03-09T16:28:25.148827+0000 mgr.y (mgr.14520) 1282 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 977 B/s rd, 0 op/s 2026-03-09T16:28:28.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:28:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:28:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:28 vm09 bash[22983]: cluster 2026-03-09T16:28:27.149274+0000 mgr.y (mgr.14520) 1283 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:28 vm09 bash[22983]: cluster 2026-03-09T16:28:27.149274+0000 mgr.y (mgr.14520) 1283 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:28 vm01 bash[28152]: cluster 2026-03-09T16:28:27.149274+0000 mgr.y (mgr.14520) 1283 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:28.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:28 vm01 bash[28152]: cluster 2026-03-09T16:28:27.149274+0000 mgr.y (mgr.14520) 1283 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:28.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:28 vm01 bash[20728]: cluster 2026-03-09T16:28:27.149274+0000 mgr.y (mgr.14520) 1283 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:28.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:28 vm01 bash[20728]: cluster 2026-03-09T16:28:27.149274+0000 mgr.y (mgr.14520) 1283 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:29 vm09 bash[22983]: audit 2026-03-09T16:28:27.747753+0000 mgr.y (mgr.14520) 1284 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:29 vm09 bash[22983]: audit 2026-03-09T16:28:27.747753+0000 mgr.y (mgr.14520) 1284 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:29 vm01 bash[28152]: audit 2026-03-09T16:28:27.747753+0000 mgr.y (mgr.14520) 1284 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:29.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:29 vm01 bash[28152]: audit 2026-03-09T16:28:27.747753+0000 mgr.y (mgr.14520) 1284 : audit [DBG] 
from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:29 vm01 bash[20728]: audit 2026-03-09T16:28:27.747753+0000 mgr.y (mgr.14520) 1284 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:29 vm01 bash[20728]: audit 2026-03-09T16:28:27.747753+0000 mgr.y (mgr.14520) 1284 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:30 vm09 bash[22983]: cluster 2026-03-09T16:28:29.149914+0000 mgr.y (mgr.14520) 1285 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:30 vm09 bash[22983]: cluster 2026-03-09T16:28:29.149914+0000 mgr.y (mgr.14520) 1285 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:30 vm09 bash[22983]: audit 2026-03-09T16:28:30.036091+0000 mon.a (mon.0) 4040 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:30 vm09 bash[22983]: audit 2026-03-09T16:28:30.036091+0000 mon.a (mon.0) 4040 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:30 vm01 bash[28152]: cluster 2026-03-09T16:28:29.149914+0000 mgr.y (mgr.14520) 1285 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:30 vm01 bash[28152]: cluster 2026-03-09T16:28:29.149914+0000 mgr.y (mgr.14520) 1285 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:30 vm01 bash[28152]: audit 2026-03-09T16:28:30.036091+0000 mon.a (mon.0) 4040 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:30.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:30 vm01 bash[28152]: audit 2026-03-09T16:28:30.036091+0000 mon.a (mon.0) 4040 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:30 vm01 bash[20728]: cluster 2026-03-09T16:28:29.149914+0000 mgr.y (mgr.14520) 1285 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:30 vm01 bash[20728]: cluster 2026-03-09T16:28:29.149914+0000 mgr.y (mgr.14520) 1285 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB 
used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:30 vm01 bash[20728]: audit 2026-03-09T16:28:30.036091+0000 mon.a (mon.0) 4040 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:30 vm01 bash[20728]: audit 2026-03-09T16:28:30.036091+0000 mon.a (mon.0) 4040 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:32 vm09 bash[22983]: cluster 2026-03-09T16:28:31.150227+0000 mgr.y (mgr.14520) 1286 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:32 vm09 bash[22983]: cluster 2026-03-09T16:28:31.150227+0000 mgr.y (mgr.14520) 1286 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:32 vm01 bash[28152]: cluster 2026-03-09T16:28:31.150227+0000 mgr.y (mgr.14520) 1286 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:32.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:32 vm01 bash[28152]: cluster 2026-03-09T16:28:31.150227+0000 mgr.y (mgr.14520) 1286 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:32.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:32 vm01 bash[20728]: cluster 2026-03-09T16:28:31.150227+0000 mgr.y (mgr.14520) 1286 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:32.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:32 vm01 bash[20728]: cluster 2026-03-09T16:28:31.150227+0000 mgr.y (mgr.14520) 1286 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:33.172 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:28:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:28:32] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:28:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:34 vm09 bash[22983]: cluster 2026-03-09T16:28:33.150639+0000 mgr.y (mgr.14520) 1287 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:34 vm09 bash[22983]: cluster 2026-03-09T16:28:33.150639+0000 mgr.y (mgr.14520) 1287 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:34.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:34 vm01 bash[28152]: cluster 2026-03-09T16:28:33.150639+0000 mgr.y (mgr.14520) 1287 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:34.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:34 vm01 bash[28152]: cluster 
2026-03-09T16:28:33.150639+0000 mgr.y (mgr.14520) 1287 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:34.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:34 vm01 bash[20728]: cluster 2026-03-09T16:28:33.150639+0000 mgr.y (mgr.14520) 1287 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:34.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:34 vm01 bash[20728]: cluster 2026-03-09T16:28:33.150639+0000 mgr.y (mgr.14520) 1287 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:35 vm09 bash[22983]: cluster 2026-03-09T16:28:35.151342+0000 mgr.y (mgr.14520) 1288 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:35.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:35 vm09 bash[22983]: cluster 2026-03-09T16:28:35.151342+0000 mgr.y (mgr.14520) 1288 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:35.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:35 vm01 bash[28152]: cluster 2026-03-09T16:28:35.151342+0000 mgr.y (mgr.14520) 1288 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:35.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:35 vm01 bash[28152]: cluster 2026-03-09T16:28:35.151342+0000 mgr.y (mgr.14520) 1288 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:35.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:35 vm01 bash[20728]: cluster 2026-03-09T16:28:35.151342+0000 mgr.y (mgr.14520) 1288 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:35.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:35 vm01 bash[20728]: cluster 2026-03-09T16:28:35.151342+0000 mgr.y (mgr.14520) 1288 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:38.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:28:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:28:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:38 vm09 bash[22983]: cluster 2026-03-09T16:28:37.151695+0000 mgr.y (mgr.14520) 1289 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:38 vm09 bash[22983]: cluster 2026-03-09T16:28:37.151695+0000 mgr.y (mgr.14520) 1289 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:38.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:38 vm01 bash[28152]: cluster 2026-03-09T16:28:37.151695+0000 mgr.y (mgr.14520) 1289 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:38.672 
INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:38 vm01 bash[28152]: cluster 2026-03-09T16:28:37.151695+0000 mgr.y (mgr.14520) 1289 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:38 vm01 bash[20728]: cluster 2026-03-09T16:28:37.151695+0000 mgr.y (mgr.14520) 1289 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:38 vm01 bash[20728]: cluster 2026-03-09T16:28:37.151695+0000 mgr.y (mgr.14520) 1289 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:39 vm09 bash[22983]: audit 2026-03-09T16:28:37.754173+0000 mgr.y (mgr.14520) 1290 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:39 vm09 bash[22983]: audit 2026-03-09T16:28:37.754173+0000 mgr.y (mgr.14520) 1290 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:39.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:39 vm01 bash[28152]: audit 2026-03-09T16:28:37.754173+0000 mgr.y (mgr.14520) 1290 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:39.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:39 vm01 bash[28152]: audit 2026-03-09T16:28:37.754173+0000 mgr.y (mgr.14520) 1290 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:39 vm01 bash[20728]: audit 2026-03-09T16:28:37.754173+0000 mgr.y (mgr.14520) 1290 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:39 vm01 bash[20728]: audit 2026-03-09T16:28:37.754173+0000 mgr.y (mgr.14520) 1290 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:40 vm09 bash[22983]: cluster 2026-03-09T16:28:39.152330+0000 mgr.y (mgr.14520) 1291 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:40 vm09 bash[22983]: cluster 2026-03-09T16:28:39.152330+0000 mgr.y (mgr.14520) 1291 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:40 vm01 bash[28152]: cluster 2026-03-09T16:28:39.152330+0000 mgr.y (mgr.14520) 1291 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:40 vm01 bash[28152]: cluster 
2026-03-09T16:28:39.152330+0000 mgr.y (mgr.14520) 1291 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:40.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:40 vm01 bash[20728]: cluster 2026-03-09T16:28:39.152330+0000 mgr.y (mgr.14520) 1291 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:40.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:40 vm01 bash[20728]: cluster 2026-03-09T16:28:39.152330+0000 mgr.y (mgr.14520) 1291 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:42 vm09 bash[22983]: cluster 2026-03-09T16:28:41.152691+0000 mgr.y (mgr.14520) 1292 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:42.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:42 vm09 bash[22983]: cluster 2026-03-09T16:28:41.152691+0000 mgr.y (mgr.14520) 1292 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:42.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:42 vm01 bash[28152]: cluster 2026-03-09T16:28:41.152691+0000 mgr.y (mgr.14520) 1292 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:42.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:42 vm01 bash[28152]: cluster 2026-03-09T16:28:41.152691+0000 mgr.y (mgr.14520) 1292 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:42.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:42 vm01 bash[20728]: cluster 2026-03-09T16:28:41.152691+0000 mgr.y (mgr.14520) 1292 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:42.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:42 vm01 bash[20728]: cluster 2026-03-09T16:28:41.152691+0000 mgr.y (mgr.14520) 1292 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:43.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:28:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:28:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:28:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:44 vm09 bash[22983]: cluster 2026-03-09T16:28:43.153118+0000 mgr.y (mgr.14520) 1293 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:44 vm09 bash[22983]: cluster 2026-03-09T16:28:43.153118+0000 mgr.y (mgr.14520) 1293 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:44.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:44 vm01 bash[28152]: cluster 2026-03-09T16:28:43.153118+0000 mgr.y (mgr.14520) 1293 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-09T16:28:44.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:44 vm01 bash[28152]: cluster 2026-03-09T16:28:43.153118+0000 mgr.y (mgr.14520) 1293 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:44.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:44 vm01 bash[20728]: cluster 2026-03-09T16:28:43.153118+0000 mgr.y (mgr.14520) 1293 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:44.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:44 vm01 bash[20728]: cluster 2026-03-09T16:28:43.153118+0000 mgr.y (mgr.14520) 1293 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:44.688 INFO:tasks.workunit.client.0.vm01.stderr:+ seq 1 10 2026-03-09T16:28:44.689 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p bdba544a-024a-4b6d-a6dd-2ee6648240fd put obj1 /etc/passwd 2026-03-09T16:28:44.718 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p bdba544a-024a-4b6d-a6dd-2ee6648240fd put obj2 /etc/passwd 2026-03-09T16:28:44.751 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p bdba544a-024a-4b6d-a6dd-2ee6648240fd put obj3 /etc/passwd 2026-03-09T16:28:44.776 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p bdba544a-024a-4b6d-a6dd-2ee6648240fd put obj4 /etc/passwd 2026-03-09T16:28:44.802 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p bdba544a-024a-4b6d-a6dd-2ee6648240fd put obj5 /etc/passwd 2026-03-09T16:28:44.828 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p bdba544a-024a-4b6d-a6dd-2ee6648240fd put obj6 /etc/passwd 2026-03-09T16:28:44.855 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p bdba544a-024a-4b6d-a6dd-2ee6648240fd put obj7 /etc/passwd 2026-03-09T16:28:44.882 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p bdba544a-024a-4b6d-a6dd-2ee6648240fd put obj8 /etc/passwd 2026-03-09T16:28:44.908 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p bdba544a-024a-4b6d-a6dd-2ee6648240fd put obj9 /etc/passwd 2026-03-09T16:28:44.934 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p bdba544a-024a-4b6d-a6dd-2ee6648240fd put obj10 /etc/passwd 2026-03-09T16:28:44.965 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 30 2026-03-09T16:28:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:45 vm09 bash[22983]: audit 2026-03-09T16:28:45.041772+0000 mon.a (mon.0) 4041 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:45 vm09 bash[22983]: audit 2026-03-09T16:28:45.041772+0000 mon.a (mon.0) 4041 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:45 vm01 bash[28152]: audit 2026-03-09T16:28:45.041772+0000 mon.a (mon.0) 4041 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:45.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:45 vm01 bash[28152]: audit 2026-03-09T16:28:45.041772+0000 mon.a (mon.0) 4041 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
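[annotation] The workunit trace above writes ten objects (obj1 through obj10) into the pool and then sleeps for 30 seconds. A minimal stand-alone sketch of those steps, using an illustrative pool name (not the UUID-named pool the test creates) and the max_objects quota that the monitor reports reaching shortly afterwards:

    # Illustrative reproduction of the quota phase visible in the trace above.
    # The pool name is hypothetical; the quota of 10 objects matches the
    # "reached quota's max_objects: 10" warning logged a few seconds later.
    pool=quota-test
    ceph osd pool create "$pool" 8
    ceph osd pool set-quota "$pool" max_objects 10
    for i in $(seq 1 10); do
        rados -p "$pool" put "obj$i" /etc/passwd
    done
    sleep 30   # give the monitor time to notice the quota and flag the pool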
2026-03-09T16:28:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:45 vm01 bash[20728]: audit 2026-03-09T16:28:45.041772+0000 mon.a (mon.0) 4041 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:45 vm01 bash[20728]: audit 2026-03-09T16:28:45.041772+0000 mon.a (mon.0) 4041 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:28:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:46 vm09 bash[22983]: cluster 2026-03-09T16:28:45.153670+0000 mgr.y (mgr.14520) 1294 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:46 vm09 bash[22983]: cluster 2026-03-09T16:28:45.153670+0000 mgr.y (mgr.14520) 1294 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:46.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:46 vm01 bash[28152]: cluster 2026-03-09T16:28:45.153670+0000 mgr.y (mgr.14520) 1294 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:46.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:46 vm01 bash[28152]: cluster 2026-03-09T16:28:45.153670+0000 mgr.y (mgr.14520) 1294 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:46 vm01 bash[20728]: cluster 2026-03-09T16:28:45.153670+0000 mgr.y (mgr.14520) 1294 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:46 vm01 bash[20728]: cluster 2026-03-09T16:28:45.153670+0000 mgr.y (mgr.14520) 1294 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:28:48.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:28:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:28:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:48 vm09 bash[22983]: cluster 2026-03-09T16:28:47.154061+0000 mgr.y (mgr.14520) 1295 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:48 vm09 bash[22983]: cluster 2026-03-09T16:28:47.154061+0000 mgr.y (mgr.14520) 1295 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:48.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:48 vm01 bash[28152]: cluster 2026-03-09T16:28:47.154061+0000 mgr.y (mgr.14520) 1295 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:48.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:48 vm01 bash[28152]: cluster 2026-03-09T16:28:47.154061+0000 mgr.y (mgr.14520) 1295 : cluster [DBG] pgmap v1739: 188 pgs: 188 
active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:48 vm01 bash[20728]: cluster 2026-03-09T16:28:47.154061+0000 mgr.y (mgr.14520) 1295 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:48.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:48 vm01 bash[20728]: cluster 2026-03-09T16:28:47.154061+0000 mgr.y (mgr.14520) 1295 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:28:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:49 vm09 bash[22983]: audit 2026-03-09T16:28:47.764881+0000 mgr.y (mgr.14520) 1296 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:49 vm09 bash[22983]: audit 2026-03-09T16:28:47.764881+0000 mgr.y (mgr.14520) 1296 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:49.672 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:49 vm01 bash[28152]: audit 2026-03-09T16:28:47.764881+0000 mgr.y (mgr.14520) 1296 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:49.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:49 vm01 bash[28152]: audit 2026-03-09T16:28:47.764881+0000 mgr.y (mgr.14520) 1296 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:49 vm01 bash[20728]: audit 2026-03-09T16:28:47.764881+0000 mgr.y (mgr.14520) 1296 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:49 vm01 bash[20728]: audit 2026-03-09T16:28:47.764881+0000 mgr.y (mgr.14520) 1296 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:50 vm09 bash[22983]: cluster 2026-03-09T16:28:49.154833+0000 mgr.y (mgr.14520) 1297 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T16:28:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:50 vm09 bash[22983]: cluster 2026-03-09T16:28:49.154833+0000 mgr.y (mgr.14520) 1297 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T16:28:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:50 vm09 bash[22983]: cluster 2026-03-09T16:28:49.607765+0000 mon.a (mon.0) 4042 : cluster [WRN] pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' is full (reached quota's max_objects: 10) 2026-03-09T16:28:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:50 vm09 bash[22983]: cluster 2026-03-09T16:28:49.607765+0000 mon.a (mon.0) 4042 : cluster [WRN] pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' is full (reached quota's max_objects: 10) 2026-03-09T16:28:50.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:50 vm09 bash[22983]: cluster 2026-03-09T16:28:49.607984+0000 mon.a (mon.0) 4043 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:28:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:50 vm09 bash[22983]: cluster 2026-03-09T16:28:49.607984+0000 mon.a (mon.0) 4043 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:28:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:50 vm09 bash[22983]: cluster 2026-03-09T16:28:49.626202+0000 mon.a (mon.0) 4044 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T16:28:50.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:50 vm09 bash[22983]: cluster 2026-03-09T16:28:49.626202+0000 mon.a (mon.0) 4044 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:50 vm01 bash[28152]: cluster 2026-03-09T16:28:49.154833+0000 mgr.y (mgr.14520) 1297 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:50 vm01 bash[28152]: cluster 2026-03-09T16:28:49.154833+0000 mgr.y (mgr.14520) 1297 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:50 vm01 bash[28152]: cluster 2026-03-09T16:28:49.607765+0000 mon.a (mon.0) 4042 : cluster [WRN] pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' is full (reached quota's max_objects: 10) 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:50 vm01 bash[28152]: cluster 2026-03-09T16:28:49.607765+0000 mon.a (mon.0) 4042 : cluster [WRN] pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' is full (reached quota's max_objects: 10) 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:50 vm01 bash[28152]: cluster 2026-03-09T16:28:49.607984+0000 mon.a (mon.0) 4043 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:50 vm01 bash[28152]: cluster 2026-03-09T16:28:49.607984+0000 mon.a (mon.0) 4043 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:50 vm01 bash[28152]: cluster 2026-03-09T16:28:49.626202+0000 mon.a (mon.0) 4044 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:50 vm01 bash[28152]: cluster 2026-03-09T16:28:49.626202+0000 mon.a (mon.0) 4044 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:50 vm01 bash[20728]: cluster 2026-03-09T16:28:49.154833+0000 mgr.y (mgr.14520) 1297 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:50 vm01 bash[20728]: cluster 2026-03-09T16:28:49.154833+0000 mgr.y (mgr.14520) 1297 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:50 vm01 bash[20728]: cluster 2026-03-09T16:28:49.607765+0000 mon.a 
(mon.0) 4042 : cluster [WRN] pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' is full (reached quota's max_objects: 10) 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:50 vm01 bash[20728]: cluster 2026-03-09T16:28:49.607765+0000 mon.a (mon.0) 4042 : cluster [WRN] pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' is full (reached quota's max_objects: 10) 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:50 vm01 bash[20728]: cluster 2026-03-09T16:28:49.607984+0000 mon.a (mon.0) 4043 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:50 vm01 bash[20728]: cluster 2026-03-09T16:28:49.607984+0000 mon.a (mon.0) 4043 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:50 vm01 bash[20728]: cluster 2026-03-09T16:28:49.626202+0000 mon.a (mon.0) 4044 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T16:28:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:50 vm01 bash[20728]: cluster 2026-03-09T16:28:49.626202+0000 mon.a (mon.0) 4044 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in 2026-03-09T16:28:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:52 vm09 bash[22983]: cluster 2026-03-09T16:28:51.155190+0000 mgr.y (mgr.14520) 1298 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:52 vm09 bash[22983]: cluster 2026-03-09T16:28:51.155190+0000 mgr.y (mgr.14520) 1298 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:52.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:52 vm01 bash[28152]: cluster 2026-03-09T16:28:51.155190+0000 mgr.y (mgr.14520) 1298 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:52.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:52 vm01 bash[28152]: cluster 2026-03-09T16:28:51.155190+0000 mgr.y (mgr.14520) 1298 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:52 vm01 bash[20728]: cluster 2026-03-09T16:28:51.155190+0000 mgr.y (mgr.14520) 1298 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:52 vm01 bash[20728]: cluster 2026-03-09T16:28:51.155190+0000 mgr.y (mgr.14520) 1298 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:53.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:28:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:28:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:28:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:54 vm09 bash[22983]: cluster 2026-03-09T16:28:53.155576+0000 mgr.y (mgr.14520) 1299 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 
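[annotation] The warnings above ("pool '…' is full (reached quota's max_objects: 10)" and "Health check failed: 1 pool(s) full (POOL_FULL)") are the expected outcome of the object quota being hit. A minimal sketch of how the state could be inspected and the quota cleared afterwards, assuming the same illustrative pool name as in the sketch above:

    ceph health detail                              # lists POOL_FULL with the affected pool
    ceph osd pool set-quota "$pool" max_objects 0   # a quota of 0 removes the object-count limit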
2026-03-09T16:28:54.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:54 vm09 bash[22983]: cluster 2026-03-09T16:28:53.155576+0000 mgr.y (mgr.14520) 1299 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:54.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:54 vm01 bash[28152]: cluster 2026-03-09T16:28:53.155576+0000 mgr.y (mgr.14520) 1299 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:54.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:54 vm01 bash[28152]: cluster 2026-03-09T16:28:53.155576+0000 mgr.y (mgr.14520) 1299 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:54 vm01 bash[20728]: cluster 2026-03-09T16:28:53.155576+0000 mgr.y (mgr.14520) 1299 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:54.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:54 vm01 bash[20728]: cluster 2026-03-09T16:28:53.155576+0000 mgr.y (mgr.14520) 1299 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:56 vm09 bash[22983]: cluster 2026-03-09T16:28:55.156259+0000 mgr.y (mgr.14520) 1300 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:56.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:56 vm09 bash[22983]: cluster 2026-03-09T16:28:55.156259+0000 mgr.y (mgr.14520) 1300 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:56.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:56 vm01 bash[28152]: cluster 2026-03-09T16:28:55.156259+0000 mgr.y (mgr.14520) 1300 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:56.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:56 vm01 bash[28152]: cluster 2026-03-09T16:28:55.156259+0000 mgr.y (mgr.14520) 1300 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:56.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:56 vm01 bash[20728]: cluster 2026-03-09T16:28:55.156259+0000 mgr.y (mgr.14520) 1300 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:56.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:56 vm01 bash[20728]: cluster 2026-03-09T16:28:55.156259+0000 mgr.y (mgr.14520) 1300 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:58.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:28:57 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:28:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
16:28:58 vm09 bash[22983]: cluster 2026-03-09T16:28:57.156663+0000 mgr.y (mgr.14520) 1301 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:58.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:58 vm09 bash[22983]: cluster 2026-03-09T16:28:57.156663+0000 mgr.y (mgr.14520) 1301 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:58.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:58 vm01 bash[28152]: cluster 2026-03-09T16:28:57.156663+0000 mgr.y (mgr.14520) 1301 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:58.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:58 vm01 bash[28152]: cluster 2026-03-09T16:28:57.156663+0000 mgr.y (mgr.14520) 1301 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:58 vm01 bash[20728]: cluster 2026-03-09T16:28:57.156663+0000 mgr.y (mgr.14520) 1301 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:58.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:58 vm01 bash[20728]: cluster 2026-03-09T16:28:57.156663+0000 mgr.y (mgr.14520) 1301 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-09T16:28:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:59 vm09 bash[22983]: audit 2026-03-09T16:28:57.772453+0000 mgr.y (mgr.14520) 1302 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:59.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:28:59 vm09 bash[22983]: audit 2026-03-09T16:28:57.772453+0000 mgr.y (mgr.14520) 1302 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:59.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:59 vm01 bash[28152]: audit 2026-03-09T16:28:57.772453+0000 mgr.y (mgr.14520) 1302 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:59.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:28:59 vm01 bash[28152]: audit 2026-03-09T16:28:57.772453+0000 mgr.y (mgr.14520) 1302 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:59.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:59 vm01 bash[20728]: audit 2026-03-09T16:28:57.772453+0000 mgr.y (mgr.14520) 1302 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:28:59.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:28:59 vm01 bash[20728]: audit 2026-03-09T16:28:57.772453+0000 mgr.y (mgr.14520) 1302 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:00 vm09 
bash[22983]: cluster 2026-03-09T16:28:59.157445+0000 mgr.y (mgr.14520) 1303 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:29:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:00 vm09 bash[22983]: cluster 2026-03-09T16:28:59.157445+0000 mgr.y (mgr.14520) 1303 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:29:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:00 vm09 bash[22983]: audit 2026-03-09T16:29:00.053218+0000 mon.a (mon.0) 4045 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:00 vm09 bash[22983]: audit 2026-03-09T16:29:00.053218+0000 mon.a (mon.0) 4045 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:00 vm09 bash[22983]: audit 2026-03-09T16:29:00.056696+0000 mon.a (mon.0) 4046 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:00.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:00 vm09 bash[22983]: audit 2026-03-09T16:29:00.056696+0000 mon.a (mon.0) 4046 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:00 vm01 bash[28152]: cluster 2026-03-09T16:28:59.157445+0000 mgr.y (mgr.14520) 1303 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:00 vm01 bash[28152]: cluster 2026-03-09T16:28:59.157445+0000 mgr.y (mgr.14520) 1303 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:00 vm01 bash[28152]: audit 2026-03-09T16:29:00.053218+0000 mon.a (mon.0) 4045 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:00 vm01 bash[28152]: audit 2026-03-09T16:29:00.053218+0000 mon.a (mon.0) 4045 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:00 vm01 bash[28152]: audit 2026-03-09T16:29:00.056696+0000 mon.a (mon.0) 4046 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:00 vm01 bash[28152]: audit 2026-03-09T16:29:00.056696+0000 mon.a (mon.0) 4046 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:00 vm01 bash[20728]: cluster 2026-03-09T16:28:59.157445+0000 mgr.y (mgr.14520) 1303 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:00 vm01 bash[20728]: 
cluster 2026-03-09T16:28:59.157445+0000 mgr.y (mgr.14520) 1303 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:00 vm01 bash[20728]: audit 2026-03-09T16:29:00.053218+0000 mon.a (mon.0) 4045 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:00 vm01 bash[20728]: audit 2026-03-09T16:29:00.053218+0000 mon.a (mon.0) 4045 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:00 vm01 bash[20728]: audit 2026-03-09T16:29:00.056696+0000 mon.a (mon.0) 4046 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:00.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:00 vm01 bash[20728]: audit 2026-03-09T16:29:00.056696+0000 mon.a (mon.0) 4046 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:01 vm09 bash[22983]: cluster 2026-03-09T16:29:01.157870+0000 mgr.y (mgr.14520) 1304 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 886 B/s rd, 0 op/s 2026-03-09T16:29:01.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:01 vm09 bash[22983]: cluster 2026-03-09T16:29:01.157870+0000 mgr.y (mgr.14520) 1304 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 886 B/s rd, 0 op/s 2026-03-09T16:29:01.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:01 vm01 bash[28152]: cluster 2026-03-09T16:29:01.157870+0000 mgr.y (mgr.14520) 1304 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 886 B/s rd, 0 op/s 2026-03-09T16:29:01.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:01 vm01 bash[28152]: cluster 2026-03-09T16:29:01.157870+0000 mgr.y (mgr.14520) 1304 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 886 B/s rd, 0 op/s 2026-03-09T16:29:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:01 vm01 bash[20728]: cluster 2026-03-09T16:29:01.157870+0000 mgr.y (mgr.14520) 1304 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 886 B/s rd, 0 op/s 2026-03-09T16:29:01.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:01 vm01 bash[20728]: cluster 2026-03-09T16:29:01.157870+0000 mgr.y (mgr.14520) 1304 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 886 B/s rd, 0 op/s 2026-03-09T16:29:03.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:02 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:29:02] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:29:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:04 vm09 bash[22983]: cluster 2026-03-09T16:29:03.158232+0000 mgr.y (mgr.14520) 1305 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:04.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:04 vm09 bash[22983]: 
cluster 2026-03-09T16:29:03.158232+0000 mgr.y (mgr.14520) 1305 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:04.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:04 vm01 bash[20728]: cluster 2026-03-09T16:29:03.158232+0000 mgr.y (mgr.14520) 1305 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:04.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:04 vm01 bash[20728]: cluster 2026-03-09T16:29:03.158232+0000 mgr.y (mgr.14520) 1305 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:04.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:04 vm01 bash[28152]: cluster 2026-03-09T16:29:03.158232+0000 mgr.y (mgr.14520) 1305 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:04.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:04 vm01 bash[28152]: cluster 2026-03-09T16:29:03.158232+0000 mgr.y (mgr.14520) 1305 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:06 vm09 bash[22983]: cluster 2026-03-09T16:29:05.158898+0000 mgr.y (mgr.14520) 1306 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:06.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:06 vm09 bash[22983]: cluster 2026-03-09T16:29:05.158898+0000 mgr.y (mgr.14520) 1306 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:06.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:06 vm01 bash[28152]: cluster 2026-03-09T16:29:05.158898+0000 mgr.y (mgr.14520) 1306 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:06.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:06 vm01 bash[28152]: cluster 2026-03-09T16:29:05.158898+0000 mgr.y (mgr.14520) 1306 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:06.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:06 vm01 bash[20728]: cluster 2026-03-09T16:29:05.158898+0000 mgr.y (mgr.14520) 1306 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:06.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:06 vm01 bash[20728]: cluster 2026-03-09T16:29:05.158898+0000 mgr.y (mgr.14520) 1306 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:08.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:29:07 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:29:08.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:08 vm09 bash[22983]: cluster 2026-03-09T16:29:07.159370+0000 mgr.y (mgr.14520) 1307 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:08.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:08 vm09 bash[22983]: cluster 2026-03-09T16:29:07.159370+0000 mgr.y (mgr.14520) 1307 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:08 vm01 bash[28152]: cluster 2026-03-09T16:29:07.159370+0000 mgr.y (mgr.14520) 1307 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:08.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:08 vm01 bash[28152]: cluster 2026-03-09T16:29:07.159370+0000 mgr.y (mgr.14520) 1307 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:08 vm01 bash[20728]: cluster 2026-03-09T16:29:07.159370+0000 mgr.y (mgr.14520) 1307 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:08.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:08 vm01 bash[20728]: cluster 2026-03-09T16:29:07.159370+0000 mgr.y (mgr.14520) 1307 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:09 vm09 bash[22983]: audit 2026-03-09T16:29:07.774462+0000 mgr.y (mgr.14520) 1308 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:09.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:09 vm09 bash[22983]: audit 2026-03-09T16:29:07.774462+0000 mgr.y (mgr.14520) 1308 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:09.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:09 vm01 bash[20728]: audit 2026-03-09T16:29:07.774462+0000 mgr.y (mgr.14520) 1308 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:09.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:09 vm01 bash[20728]: audit 2026-03-09T16:29:07.774462+0000 mgr.y (mgr.14520) 1308 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:09.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:09 vm01 bash[28152]: audit 2026-03-09T16:29:07.774462+0000 mgr.y (mgr.14520) 1308 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:09.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:09 vm01 bash[28152]: audit 2026-03-09T16:29:07.774462+0000 mgr.y (mgr.14520) 1308 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:10 vm09 bash[22983]: cluster 2026-03-09T16:29:09.160041+0000 mgr.y (mgr.14520) 1309 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:10.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:10 vm09 bash[22983]: cluster 
2026-03-09T16:29:09.160041+0000 mgr.y (mgr.14520) 1309 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:10.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:10 vm01 bash[20728]: cluster 2026-03-09T16:29:09.160041+0000 mgr.y (mgr.14520) 1309 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:10.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:10 vm01 bash[20728]: cluster 2026-03-09T16:29:09.160041+0000 mgr.y (mgr.14520) 1309 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:10.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:10 vm01 bash[28152]: cluster 2026-03-09T16:29:09.160041+0000 mgr.y (mgr.14520) 1309 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:10.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:10 vm01 bash[28152]: cluster 2026-03-09T16:29:09.160041+0000 mgr.y (mgr.14520) 1309 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:12 vm09 bash[22983]: cluster 2026-03-09T16:29:11.160419+0000 mgr.y (mgr.14520) 1310 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:12.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:12 vm09 bash[22983]: cluster 2026-03-09T16:29:11.160419+0000 mgr.y (mgr.14520) 1310 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:12.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:12 vm01 bash[28152]: cluster 2026-03-09T16:29:11.160419+0000 mgr.y (mgr.14520) 1310 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:12.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:12 vm01 bash[28152]: cluster 2026-03-09T16:29:11.160419+0000 mgr.y (mgr.14520) 1310 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:12.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:12 vm01 bash[20728]: cluster 2026-03-09T16:29:11.160419+0000 mgr.y (mgr.14520) 1310 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:12.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:12 vm01 bash[20728]: cluster 2026-03-09T16:29:11.160419+0000 mgr.y (mgr.14520) 1310 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:13.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:12 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:29:12] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:29:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:14 vm09 bash[22983]: cluster 2026-03-09T16:29:13.160751+0000 mgr.y (mgr.14520) 1311 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s 
rd, 0 op/s 2026-03-09T16:29:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:14 vm09 bash[22983]: cluster 2026-03-09T16:29:13.160751+0000 mgr.y (mgr.14520) 1311 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:14 vm09 bash[22983]: audit 2026-03-09T16:29:13.650462+0000 mon.a (mon.0) 4047 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:29:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:14 vm09 bash[22983]: audit 2026-03-09T16:29:13.650462+0000 mon.a (mon.0) 4047 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:29:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:14 vm09 bash[22983]: audit 2026-03-09T16:29:13.976368+0000 mon.a (mon.0) 4048 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:14 vm09 bash[22983]: audit 2026-03-09T16:29:13.976368+0000 mon.a (mon.0) 4048 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:14 vm09 bash[22983]: audit 2026-03-09T16:29:13.984617+0000 mon.a (mon.0) 4049 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:14 vm09 bash[22983]: audit 2026-03-09T16:29:13.984617+0000 mon.a (mon.0) 4049 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:14 vm01 bash[20728]: cluster 2026-03-09T16:29:13.160751+0000 mgr.y (mgr.14520) 1311 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:14 vm01 bash[20728]: cluster 2026-03-09T16:29:13.160751+0000 mgr.y (mgr.14520) 1311 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:14 vm01 bash[20728]: audit 2026-03-09T16:29:13.650462+0000 mon.a (mon.0) 4047 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:14 vm01 bash[20728]: audit 2026-03-09T16:29:13.650462+0000 mon.a (mon.0) 4047 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:14 vm01 bash[20728]: audit 2026-03-09T16:29:13.976368+0000 mon.a (mon.0) 4048 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:14 vm01 bash[20728]: audit 2026-03-09T16:29:13.976368+0000 mon.a (mon.0) 4048 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:14 vm01 bash[20728]: audit 2026-03-09T16:29:13.984617+0000 mon.a (mon.0) 
4049 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:14 vm01 bash[20728]: audit 2026-03-09T16:29:13.984617+0000 mon.a (mon.0) 4049 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:14 vm01 bash[28152]: cluster 2026-03-09T16:29:13.160751+0000 mgr.y (mgr.14520) 1311 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:14 vm01 bash[28152]: cluster 2026-03-09T16:29:13.160751+0000 mgr.y (mgr.14520) 1311 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:14 vm01 bash[28152]: audit 2026-03-09T16:29:13.650462+0000 mon.a (mon.0) 4047 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:14 vm01 bash[28152]: audit 2026-03-09T16:29:13.650462+0000 mon.a (mon.0) 4047 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:14 vm01 bash[28152]: audit 2026-03-09T16:29:13.976368+0000 mon.a (mon.0) 4048 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:14 vm01 bash[28152]: audit 2026-03-09T16:29:13.976368+0000 mon.a (mon.0) 4048 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:14 vm01 bash[28152]: audit 2026-03-09T16:29:13.984617+0000 mon.a (mon.0) 4049 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:14 vm01 bash[28152]: audit 2026-03-09T16:29:13.984617+0000 mon.a (mon.0) 4049 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:14.966 INFO:tasks.workunit.client.0.vm01.stderr:+ rados -p 596e5e1f-ecde-406d-b4d0-afd8854e4a60 put threemore /etc/passwd 2026-03-09T16:29:14.993 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool set-quota 596e5e1f-ecde-406d-b4d0-afd8854e4a60 max_bytes 0 2026-03-09T16:29:15.064 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.067+0000 7f10ca5ca640 1 -- 192.168.123.101:0/3445314954 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f10c4074270 msgr2=0x7f10c40746b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:15.064 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.067+0000 7f10ca5ca640 1 --2- 192.168.123.101:0/3445314954 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f10c4074270 0x7f10c40746b0 secure :-1 s=READY pgs=3025 cs=0 l=1 rev1=1 crypto rx=0x7f10b400b0a0 tx=0x7f10b401cb80 comp rx=0 tx=0).stop 2026-03-09T16:29:15.064 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.067+0000 7f10ca5ca640 1 -- 192.168.123.101:0/3445314954 shutdown_connections 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.067+0000 
7f10ca5ca640 1 --2- 192.168.123.101:0/3445314954 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f10c4074bf0 0x7f10c411e7a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.067+0000 7f10ca5ca640 1 --2- 192.168.123.101:0/3445314954 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f10c4074270 0x7f10c40746b0 unknown :-1 s=CLOSED pgs=3025 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.067+0000 7f10ca5ca640 1 --2- 192.168.123.101:0/3445314954 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f10c41112c0 0x7f10c41116a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.067+0000 7f10ca5ca640 1 -- 192.168.123.101:0/3445314954 >> 192.168.123.101:0/3445314954 conn(0x7f10c406eb60 msgr2=0x7f10c406ef70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.067+0000 7f10ca5ca640 1 -- 192.168.123.101:0/3445314954 shutdown_connections 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.067+0000 7f10ca5ca640 1 -- 192.168.123.101:0/3445314954 wait complete. 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.067+0000 7f10ca5ca640 1 Processor -- start 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10ca5ca640 1 -- start start 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10ca5ca640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f10c4074270 0x7f10c4116fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10ca5ca640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f10c4074bf0 0x7f10c4117510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10ca5ca640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f10c41112c0 0x7f10c41c07a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10ca5ca640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f10c4123a10 con 0x7f10c4074270 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10ca5ca640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f10c4123890 con 0x7f10c4074bf0 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10ca5ca640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f10c4123b90 con 0x7f10c41112c0 2026-03-09T16:29:15.065 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c37fe640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f10c4074bf0 0x7f10c4117510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c37fe640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f10c4074bf0 0x7f10c4117510 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:57590/0 (socket says 192.168.123.101:57590) 2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c37fe640 1 -- 192.168.123.101:0/2289016386 learned_addr learned my addr 192.168.123.101:0/2289016386 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c8b40640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f10c41112c0 0x7f10c41c07a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c3fff640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f10c4074270 0x7f10c4116fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c37fe640 1 -- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f10c41112c0 msgr2=0x7f10c41c07a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c37fe640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f10c41112c0 0x7f10c41c07a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c37fe640 1 -- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f10c4074270 msgr2=0x7f10c4116fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c37fe640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f10c4074270 0x7f10c4116fd0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c37fe640 1 -- 192.168.123.101:0/2289016386 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f10c41c0df0 con 0x7f10c4074bf0 2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c8b40640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f10c41112c0 0x7f10c41c07a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c37fe640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f10c4074bf0 0x7f10c4117510 secure :-1 s=READY pgs=3115 cs=0 l=1 rev1=1 crypto rx=0x7f10b401d060 tx=0x7f10b4007b60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:29:15.066 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f10b4002c70 con 0x7f10c4074bf0 2026-03-09T16:29:15.067 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10ca5ca640 1 -- 192.168.123.101:0/2289016386 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f10c41c1080 con 0x7f10c4074bf0 2026-03-09T16:29:15.067 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10ca5ca640 1 -- 192.168.123.101:0/2289016386 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f10c41c1510 con 0x7f10c4074bf0 2026-03-09T16:29:15.067 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f10b40a5ad0 con 0x7f10c4074bf0 2026-03-09T16:29:15.067 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f10b40048c0 con 0x7f10c4074bf0 2026-03-09T16:29:15.067 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10ca5ca640 1 -- 192.168.123.101:0/2289016386 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f10c410a7f0 con 0x7f10c4074bf0 2026-03-09T16:29:15.068 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f10b4005ce0 con 0x7f10c4074bf0 2026-03-09T16:29:15.069 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c17fa640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f10a0077710 0x7f10a0079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:15.069 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.071+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(779..779 src has 257..779) ==== 9373+0+0 (secure 0 0 0) 0x7f10b4134c30 con 0x7f10c4074bf0 2026-03-09T16:29:15.069 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.075+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=780}) -- 0x7f10a00838b0 con 0x7f10c4074bf0 2026-03-09T16:29:15.069 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.075+0000 7f10c3fff640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f10a0077710 0x7f10a0079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T16:29:15.070 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.075+0000 7f10c3fff640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f10a0077710 0x7f10a0079bd0 secure :-1 s=READY pgs=4296 cs=0 l=1 rev1=1 crypto rx=0x7f10ac004640 tx=0x7f10ac009210 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:29:15.071 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.075+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f10b40bc050 con 0x7f10c4074bf0 2026-03-09T16:29:15.173 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.175+0000 7f10ca5ca640 1 -- 192.168.123.101:0/2289016386 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"} v 0) -- 0x7f10c4074730 con 0x7f10c4074bf0 2026-03-09T16:29:15.336 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.339+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 <== mon.1 v2:192.168.123.109:3300/0 7 ==== osd_map(780..780 src has 257..780) ==== 628+0+0 (secure 0 0 0) 0x7f10b40f8d80 con 0x7f10c4074bf0 2026-03-09T16:29:15.336 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.339+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=781}) -- 0x7f10a0084490 con 0x7f10c4074bf0 2026-03-09T16:29:15.336 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.339+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 <== mon.1 v2:192.168.123.109:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v780) ==== 217+0+0 (secure 0 0 0) 0x7f10b4016800 con 0x7f10c4074bf0 2026-03-09T16:29:15.391 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:15.395+0000 7f10ca5ca640 1 -- 192.168.123.101:0/2289016386 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"} v 0) -- 0x7f10c41c1830 con 0x7f10c4074bf0 2026-03-09T16:29:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:14.320305+0000 mon.a (mon.0) 4050 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:29:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:14.320305+0000 mon.a (mon.0) 4050 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:29:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:14.320959+0000 mon.a (mon.0) 4051 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:29:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:14.320959+0000 mon.a (mon.0) 4051 : audit [INF] from='mgr.14520 
192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:29:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:14.326477+0000 mon.a (mon.0) 4052 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:14.326477+0000 mon.a (mon.0) 4052 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:15.063657+0000 mon.a (mon.0) 4053 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:15.063657+0000 mon.a (mon.0) 4053 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:15.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:15.174274+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:15.174274+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:15.181435+0000 mon.a (mon.0) 4054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:15.633 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:15 vm09 bash[22983]: audit 2026-03-09T16:29:15.181435+0000 mon.a (mon.0) 4054 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:14.320305+0000 mon.a (mon.0) 4050 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:14.320305+0000 mon.a (mon.0) 4050 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:14.320959+0000 mon.a (mon.0) 4051 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:14.320959+0000 mon.a (mon.0) 4051 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:14.326477+0000 mon.a (mon.0) 4052 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:14.326477+0000 mon.a (mon.0) 4052 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:15.063657+0000 mon.a (mon.0) 4053 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:15.063657+0000 mon.a (mon.0) 4053 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:15.174274+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:15.174274+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:15.181435+0000 mon.a (mon.0) 4054 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:15 vm01 bash[20728]: audit 2026-03-09T16:29:15.181435+0000 mon.a (mon.0) 4054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:14.320305+0000 mon.a (mon.0) 4050 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:14.320305+0000 mon.a (mon.0) 4050 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:14.320959+0000 mon.a (mon.0) 4051 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:14.320959+0000 mon.a (mon.0) 4051 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:14.326477+0000 mon.a (mon.0) 4052 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:14.326477+0000 mon.a (mon.0) 4052 : audit [INF] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:15.063657+0000 mon.a (mon.0) 4053 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:15.063657+0000 mon.a (mon.0) 4053 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:15.174274+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:15.174274+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 
192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:15.181435+0000 mon.a (mon.0) 4054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:15.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:15 vm01 bash[28152]: audit 2026-03-09T16:29:15.181435+0000 mon.a (mon.0) 4054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.331 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.335+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 <== mon.1 v2:192.168.123.109:3300/0 9 ==== osd_map(781..781 src has 257..781) ==== 628+0+0 (secure 0 0 0) 0x7f10b4132370 con 0x7f10c4074bf0 2026-03-09T16:29:16.331 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.335+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=782}) -- 0x7f10a0084900 con 0x7f10c4074bf0 2026-03-09T16:29:16.339 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.343+0000 7f10c17fa640 1 -- 192.168.123.101:0/2289016386 <== mon.1 v2:192.168.123.109:3300/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v781) ==== 217+0+0 (secure 0 0 0) 0x7f10b4100d90 con 0x7f10c4074bf0 2026-03-09T16:29:16.339 INFO:tasks.workunit.client.0.vm01.stderr:set-quota max_bytes = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 2026-03-09T16:29:16.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 -- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f10a0077710 msgr2=0x7f10a0079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:16.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f10a0077710 0x7f10a0079bd0 secure :-1 s=READY pgs=4296 cs=0 l=1 rev1=1 crypto rx=0x7f10ac004640 tx=0x7f10ac009210 comp rx=0 tx=0).stop 2026-03-09T16:29:16.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 -- 192.168.123.101:0/2289016386 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f10c4074bf0 msgr2=0x7f10c4117510 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:16.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f10c4074bf0 0x7f10c4117510 secure :-1 s=READY pgs=3115 cs=0 l=1 rev1=1 crypto rx=0x7f10b401d060 tx=0x7f10b4007b60 comp rx=0 tx=0).stop 2026-03-09T16:29:16.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 -- 192.168.123.101:0/2289016386 shutdown_connections 2026-03-09T16:29:16.342 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f10a0077710 0x7f10a0079bd0 unknown :-1 s=CLOSED pgs=4296 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:16.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f10c41112c0 0x7f10c41c07a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:16.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f10c4074bf0 0x7f10c4117510 unknown :-1 s=CLOSED pgs=3115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:16.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 --2- 192.168.123.101:0/2289016386 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f10c4074270 0x7f10c4116fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:16.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 -- 192.168.123.101:0/2289016386 >> 192.168.123.101:0/2289016386 conn(0x7f10c406eb60 msgr2=0x7f10c410dab0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:29:16.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 -- 192.168.123.101:0/2289016386 shutdown_connections 2026-03-09T16:29:16.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.347+0000 7f10ca5ca640 1 -- 192.168.123.101:0/2289016386 wait complete. 
2026-03-09T16:29:16.354 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool set-quota 596e5e1f-ecde-406d-b4d0-afd8854e4a60 max_objects 0 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 -- 192.168.123.101:0/633875556 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3084074270 msgr2=0x7f30840746b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 --2- 192.168.123.101:0/633875556 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3084074270 0x7f30840746b0 secure :-1 s=READY pgs=3026 cs=0 l=1 rev1=1 crypto rx=0x7f3078009a30 tx=0x7f307801c9b0 comp rx=0 tx=0).stop 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 -- 192.168.123.101:0/633875556 shutdown_connections 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 --2- 192.168.123.101:0/633875556 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f3084074bf0 0x7f308411e7a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 --2- 192.168.123.101:0/633875556 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3084074270 0x7f30840746b0 unknown :-1 s=CLOSED pgs=3026 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 --2- 192.168.123.101:0/633875556 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f30841112c0 0x7f30841116a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 -- 192.168.123.101:0/633875556 >> 192.168.123.101:0/633875556 conn(0x7f308406eb60 msgr2=0x7f308406ef70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 -- 192.168.123.101:0/633875556 shutdown_connections 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 -- 192.168.123.101:0/633875556 wait complete. 
2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 Processor -- start 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 -- start start 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3084074270 0x7f308411e1f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3084074bf0 0x7f308411e730 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f30841112c0 0x7f3084117590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f30841239d0 con 0x7f30841112c0 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f3084123850 con 0x7f3084074270 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308c6b4640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f3084123b50 con 0x7f3084074bf0 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308ac2a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f30841112c0 0x7f3084117590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308a429640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3084074270 0x7f308411e1f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:16.420 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308a429640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3084074270 0x7f308411e1f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.101:57608/0 (socket says 192.168.123.101:57608) 2026-03-09T16:29:16.421 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308ac2a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f30841112c0 0x7f3084117590 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:34564/0 (socket says 192.168.123.101:34564) 2026-03-09T16:29:16.421 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308a429640 1 -- 192.168.123.101:0/2457711976 learned_addr learned my addr 192.168.123.101:0/2457711976 (peer_addr_for_me v2:192.168.123.101:0/0) 
2026-03-09T16:29:16.421 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f3089c28640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3084074bf0 0x7f308411e730 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:16.421 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308ac2a640 1 -- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3084074bf0 msgr2=0x7f308411e730 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:16.421 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308ac2a640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3084074bf0 0x7f308411e730 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:16.421 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308ac2a640 1 -- 192.168.123.101:0/2457711976 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3084074270 msgr2=0x7f308411e1f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:16.421 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308ac2a640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3084074270 0x7f308411e1f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:16.421 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.423+0000 7f308ac2a640 1 -- 192.168.123.101:0/2457711976 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3084117e50 con 0x7f30841112c0 2026-03-09T16:29:16.421 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.427+0000 7f308ac2a640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f30841112c0 0x7f3084117590 secure :-1 s=READY pgs=3106 cs=0 l=1 rev1=1 crypto rx=0x7f307c007f60 tx=0x7f307c007f90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:29:16.421 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.427+0000 7f306b7fe640 1 -- 192.168.123.101:0/2457711976 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f307c002a70 con 0x7f30841112c0 2026-03-09T16:29:16.421 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.427+0000 7f3089c28640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3084074bf0 0x7f308411e730 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T16:29:16.423 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.427+0000 7f308a429640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3084074270 0x7f308411e1f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T16:29:16.423 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.427+0000 7f308c6b4640 1 -- 192.168.123.101:0/2457711976 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f30841180e0 con 0x7f30841112c0 2026-03-09T16:29:16.423 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.427+0000 7f308c6b4640 1 -- 192.168.123.101:0/2457711976 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f308411c000 con 0x7f30841112c0 2026-03-09T16:29:16.423 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.427+0000 7f306b7fe640 1 -- 192.168.123.101:0/2457711976 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f307c002c10 con 0x7f30841112c0 2026-03-09T16:29:16.423 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.427+0000 7f306b7fe640 1 -- 192.168.123.101:0/2457711976 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f307c013650 con 0x7f30841112c0 2026-03-09T16:29:16.424 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.427+0000 7f308c6b4640 1 -- 192.168.123.101:0/2457711976 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f304c005190 con 0x7f30841112c0 2026-03-09T16:29:16.424 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.427+0000 7f306b7fe640 1 -- 192.168.123.101:0/2457711976 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f307c012070 con 0x7f30841112c0 2026-03-09T16:29:16.427 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.431+0000 7f306b7fe640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3058077710 0x7f3058079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:16.427 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.431+0000 7f306b7fe640 1 -- 192.168.123.101:0/2457711976 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(781..781 src has 257..781) ==== 9373+0+0 (secure 0 0 0) 0x7f307c0a1340 con 0x7f30841112c0 2026-03-09T16:29:16.427 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.431+0000 7f306b7fe640 1 -- 192.168.123.101:0/2457711976 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=782}) -- 0x7f3058083930 con 0x7f30841112c0 2026-03-09T16:29:16.427 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.431+0000 7f308a429640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3058077710 0x7f3058079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:16.427 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.431+0000 7f306b7fe640 1 -- 192.168.123.101:0/2457711976 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f307c010070 con 0x7f30841112c0 2026-03-09T16:29:16.427 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.431+0000 7f308a429640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3058077710 0x7f3058079bd0 secure :-1 s=READY pgs=4297 cs=0 l=1 rev1=1 crypto rx=0x7f3074005e10 tx=0x7f30740043a0 comp rx=0 
tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:29:16.527 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:16.531+0000 7f308c6b4640 1 -- 192.168.123.101:0/2457711976 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"} v 0) -- 0x7f304c005480 con 0x7f30841112c0 2026-03-09T16:29:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:16 vm09 bash[22983]: cluster 2026-03-09T16:29:15.161393+0000 mgr.y (mgr.14520) 1312 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:16 vm09 bash[22983]: cluster 2026-03-09T16:29:15.161393+0000 mgr.y (mgr.14520) 1312 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:16 vm09 bash[22983]: audit 2026-03-09T16:29:15.333197+0000 mon.a (mon.0) 4055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:16 vm09 bash[22983]: audit 2026-03-09T16:29:15.333197+0000 mon.a (mon.0) 4055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:16 vm09 bash[22983]: cluster 2026-03-09T16:29:15.340283+0000 mon.a (mon.0) 4056 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in 2026-03-09T16:29:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:16 vm09 bash[22983]: cluster 2026-03-09T16:29:15.340283+0000 mon.a (mon.0) 4056 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in 2026-03-09T16:29:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:16 vm09 bash[22983]: audit 2026-03-09T16:29:15.392079+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:16 vm09 bash[22983]: audit 2026-03-09T16:29:15.392079+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:16 vm09 bash[22983]: audit 2026-03-09T16:29:15.399235+0000 mon.a (mon.0) 4057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:16 vm09 bash[22983]: audit 2026-03-09T16:29:15.399235+0000 mon.a (mon.0) 4057 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:16 vm01 bash[20728]: cluster 2026-03-09T16:29:15.161393+0000 mgr.y (mgr.14520) 1312 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:16 vm01 bash[20728]: cluster 2026-03-09T16:29:15.161393+0000 mgr.y (mgr.14520) 1312 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:16 vm01 bash[20728]: audit 2026-03-09T16:29:15.333197+0000 mon.a (mon.0) 4055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:16 vm01 bash[20728]: audit 2026-03-09T16:29:15.333197+0000 mon.a (mon.0) 4055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:16 vm01 bash[20728]: cluster 2026-03-09T16:29:15.340283+0000 mon.a (mon.0) 4056 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in 2026-03-09T16:29:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:16 vm01 bash[20728]: cluster 2026-03-09T16:29:15.340283+0000 mon.a (mon.0) 4056 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in 2026-03-09T16:29:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:16 vm01 bash[20728]: audit 2026-03-09T16:29:15.392079+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:16 vm01 bash[20728]: audit 2026-03-09T16:29:15.392079+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:16 vm01 bash[20728]: audit 2026-03-09T16:29:15.399235+0000 mon.a (mon.0) 4057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:16 vm01 bash[20728]: audit 2026-03-09T16:29:15.399235+0000 mon.a (mon.0) 4057 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:16 vm01 bash[28152]: cluster 2026-03-09T16:29:15.161393+0000 mgr.y (mgr.14520) 1312 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:16 vm01 bash[28152]: cluster 2026-03-09T16:29:15.161393+0000 mgr.y (mgr.14520) 1312 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:16 vm01 bash[28152]: audit 2026-03-09T16:29:15.333197+0000 mon.a (mon.0) 4055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:16 vm01 bash[28152]: audit 2026-03-09T16:29:15.333197+0000 mon.a (mon.0) 4055 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:16 vm01 bash[28152]: cluster 2026-03-09T16:29:15.340283+0000 mon.a (mon.0) 4056 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in 2026-03-09T16:29:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:16 vm01 bash[28152]: cluster 2026-03-09T16:29:15.340283+0000 mon.a (mon.0) 4056 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in 2026-03-09T16:29:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:16 vm01 bash[28152]: audit 2026-03-09T16:29:15.392079+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:16 vm01 bash[28152]: audit 2026-03-09T16:29:15.392079+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.101:0/2289016386' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:16 vm01 bash[28152]: audit 2026-03-09T16:29:15.399235+0000 mon.a (mon.0) 4057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:16.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:16 vm01 bash[28152]: audit 2026-03-09T16:29:15.399235+0000 mon.a (mon.0) 4057 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T16:29:17.339 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:17.343+0000 7f306b7fe640 1 -- 192.168.123.101:0/2457711976 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v782) ==== 221+0+0 (secure 0 0 0) 0x7f307c065ee0 con 0x7f30841112c0 2026-03-09T16:29:17.351 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:17.355+0000 7f306b7fe640 1 -- 192.168.123.101:0/2457711976 <== mon.0 v2:192.168.123.101:3300/0 8 ==== osd_map(782..782 src has 257..782) ==== 628+0+0 (secure 0 0 0) 0x7f307c06a020 con 0x7f30841112c0 2026-03-09T16:29:17.352 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:17.355+0000 7f306b7fe640 1 -- 192.168.123.101:0/2457711976 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=783}) -- 0x7f3058084950 con 0x7f30841112c0 2026-03-09T16:29:17.401 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:17.403+0000 7f308c6b4640 1 -- 192.168.123.101:0/2457711976 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"} v 0) -- 0x7f304c005c30 con 0x7f30841112c0 2026-03-09T16:29:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:17 vm09 bash[22983]: audit 2026-03-09T16:29:16.336308+0000 mon.a (mon.0) 4058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:17 vm09 bash[22983]: audit 2026-03-09T16:29:16.336308+0000 mon.a (mon.0) 4058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:17 vm09 bash[22983]: cluster 2026-03-09T16:29:16.346547+0000 mon.a (mon.0) 4059 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T16:29:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:17 vm09 bash[22983]: cluster 2026-03-09T16:29:16.346547+0000 mon.a (mon.0) 4059 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T16:29:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:17 vm09 bash[22983]: audit 2026-03-09T16:29:16.535328+0000 mon.a (mon.0) 4060 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:17 vm09 bash[22983]: audit 2026-03-09T16:29:16.535328+0000 mon.a (mon.0) 4060 : audit [INF] from='client.? 
192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:17 vm09 bash[22983]: cluster 2026-03-09T16:29:17.161733+0000 mgr.y (mgr.14520) 1313 : cluster [DBG] pgmap v1757: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:17.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:17 vm09 bash[22983]: cluster 2026-03-09T16:29:17.161733+0000 mgr.y (mgr.14520) 1313 : cluster [DBG] pgmap v1757: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:17 vm01 bash[20728]: audit 2026-03-09T16:29:16.336308+0000 mon.a (mon.0) 4058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:17 vm01 bash[20728]: audit 2026-03-09T16:29:16.336308+0000 mon.a (mon.0) 4058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:17 vm01 bash[20728]: cluster 2026-03-09T16:29:16.346547+0000 mon.a (mon.0) 4059 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:17 vm01 bash[20728]: cluster 2026-03-09T16:29:16.346547+0000 mon.a (mon.0) 4059 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:17 vm01 bash[20728]: audit 2026-03-09T16:29:16.535328+0000 mon.a (mon.0) 4060 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:17 vm01 bash[20728]: audit 2026-03-09T16:29:16.535328+0000 mon.a (mon.0) 4060 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:17 vm01 bash[20728]: cluster 2026-03-09T16:29:17.161733+0000 mgr.y (mgr.14520) 1313 : cluster [DBG] pgmap v1757: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:17 vm01 bash[20728]: cluster 2026-03-09T16:29:17.161733+0000 mgr.y (mgr.14520) 1313 : cluster [DBG] pgmap v1757: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:17 vm01 bash[28152]: audit 2026-03-09T16:29:16.336308+0000 mon.a (mon.0) 4058 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:17 vm01 bash[28152]: audit 2026-03-09T16:29:16.336308+0000 mon.a (mon.0) 4058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:17 vm01 bash[28152]: cluster 2026-03-09T16:29:16.346547+0000 mon.a (mon.0) 4059 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:17 vm01 bash[28152]: cluster 2026-03-09T16:29:16.346547+0000 mon.a (mon.0) 4059 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:17 vm01 bash[28152]: audit 2026-03-09T16:29:16.535328+0000 mon.a (mon.0) 4060 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:17 vm01 bash[28152]: audit 2026-03-09T16:29:16.535328+0000 mon.a (mon.0) 4060 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:17 vm01 bash[28152]: cluster 2026-03-09T16:29:17.161733+0000 mgr.y (mgr.14520) 1313 : cluster [DBG] pgmap v1757: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:17.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:17 vm01 bash[28152]: cluster 2026-03-09T16:29:17.161733+0000 mgr.y (mgr.14520) 1313 : cluster [DBG] pgmap v1757: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:18.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:29:17 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:29:18.357 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.359+0000 7f306b7fe640 1 -- 192.168.123.101:0/2457711976 <== mon.0 v2:192.168.123.101:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 v783) ==== 221+0+0 (secure 0 0 0) 0x7f307c06ad90 con 0x7f30841112c0 2026-03-09T16:29:18.357 INFO:tasks.workunit.client.0.vm01.stderr:set-quota max_objects = 0 for pool 596e5e1f-ecde-406d-b4d0-afd8854e4a60 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 -- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3058077710 msgr2=0x7f3058079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3058077710 0x7f3058079bd0 secure :-1 s=READY 
pgs=4297 cs=0 l=1 rev1=1 crypto rx=0x7f3074005e10 tx=0x7f30740043a0 comp rx=0 tx=0).stop 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 -- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f30841112c0 msgr2=0x7f3084117590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f30841112c0 0x7f3084117590 secure :-1 s=READY pgs=3106 cs=0 l=1 rev1=1 crypto rx=0x7f307c007f60 tx=0x7f307c007f90 comp rx=0 tx=0).stop 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 -- 192.168.123.101:0/2457711976 shutdown_connections 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f3058077710 0x7f3058079bd0 unknown :-1 s=CLOSED pgs=4297 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f30841112c0 0x7f3084117590 unknown :-1 s=CLOSED pgs=3106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f3084074bf0 0x7f308411e730 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 --2- 192.168.123.101:0/2457711976 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3084074270 0x7f308411e1f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 -- 192.168.123.101:0/2457711976 >> 192.168.123.101:0/2457711976 conn(0x7f308406eb60 msgr2=0x7f30840714b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 -- 192.168.123.101:0/2457711976 shutdown_connections 2026-03-09T16:29:18.360 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:18.363+0000 7f308c6b4640 1 -- 192.168.123.101:0/2457711976 wait complete. 2026-03-09T16:29:18.379 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 30 2026-03-09T16:29:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:18 vm09 bash[22983]: audit 2026-03-09T16:29:17.346427+0000 mon.a (mon.0) 4061 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:18 vm09 bash[22983]: audit 2026-03-09T16:29:17.346427+0000 mon.a (mon.0) 4061 : audit [INF] from='client.? 
192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:18 vm09 bash[22983]: cluster 2026-03-09T16:29:17.359904+0000 mon.a (mon.0) 4062 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T16:29:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:18 vm09 bash[22983]: cluster 2026-03-09T16:29:17.359904+0000 mon.a (mon.0) 4062 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T16:29:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:18 vm09 bash[22983]: audit 2026-03-09T16:29:17.409096+0000 mon.a (mon.0) 4063 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:18 vm09 bash[22983]: audit 2026-03-09T16:29:17.409096+0000 mon.a (mon.0) 4063 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:18 vm09 bash[22983]: audit 2026-03-09T16:29:17.782926+0000 mgr.y (mgr.14520) 1314 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:18.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:18 vm09 bash[22983]: audit 2026-03-09T16:29:17.782926+0000 mgr.y (mgr.14520) 1314 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:18 vm01 bash[28152]: audit 2026-03-09T16:29:17.346427+0000 mon.a (mon.0) 4061 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:18 vm01 bash[28152]: audit 2026-03-09T16:29:17.346427+0000 mon.a (mon.0) 4061 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:18 vm01 bash[28152]: cluster 2026-03-09T16:29:17.359904+0000 mon.a (mon.0) 4062 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:18 vm01 bash[28152]: cluster 2026-03-09T16:29:17.359904+0000 mon.a (mon.0) 4062 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:18 vm01 bash[28152]: audit 2026-03-09T16:29:17.409096+0000 mon.a (mon.0) 4063 : audit [INF] from='client.? 
192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:18 vm01 bash[28152]: audit 2026-03-09T16:29:17.409096+0000 mon.a (mon.0) 4063 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:18 vm01 bash[28152]: audit 2026-03-09T16:29:17.782926+0000 mgr.y (mgr.14520) 1314 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:18 vm01 bash[28152]: audit 2026-03-09T16:29:17.782926+0000 mgr.y (mgr.14520) 1314 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:18 vm01 bash[20728]: audit 2026-03-09T16:29:17.346427+0000 mon.a (mon.0) 4061 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:18 vm01 bash[20728]: audit 2026-03-09T16:29:17.346427+0000 mon.a (mon.0) 4061 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:18 vm01 bash[20728]: cluster 2026-03-09T16:29:17.359904+0000 mon.a (mon.0) 4062 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:18 vm01 bash[20728]: cluster 2026-03-09T16:29:17.359904+0000 mon.a (mon.0) 4062 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:18 vm01 bash[20728]: audit 2026-03-09T16:29:17.409096+0000 mon.a (mon.0) 4063 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:18 vm01 bash[20728]: audit 2026-03-09T16:29:17.409096+0000 mon.a (mon.0) 4063 : audit [INF] from='client.? 
192.168.123.101:0/2457711976' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:18 vm01 bash[20728]: audit 2026-03-09T16:29:17.782926+0000 mgr.y (mgr.14520) 1314 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:18.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:18 vm01 bash[20728]: audit 2026-03-09T16:29:17.782926+0000 mgr.y (mgr.14520) 1314 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:19 vm09 bash[22983]: audit 2026-03-09T16:29:18.364704+0000 mon.a (mon.0) 4064 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:19 vm09 bash[22983]: audit 2026-03-09T16:29:18.364704+0000 mon.a (mon.0) 4064 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:19 vm09 bash[22983]: cluster 2026-03-09T16:29:18.375176+0000 mon.a (mon.0) 4065 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T16:29:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:19 vm09 bash[22983]: cluster 2026-03-09T16:29:18.375176+0000 mon.a (mon.0) 4065 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T16:29:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:19 vm09 bash[22983]: cluster 2026-03-09T16:29:19.162628+0000 mgr.y (mgr.14520) 1315 : cluster [DBG] pgmap v1760: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:29:19.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:19 vm09 bash[22983]: cluster 2026-03-09T16:29:19.162628+0000 mgr.y (mgr.14520) 1315 : cluster [DBG] pgmap v1760: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:19 vm01 bash[20728]: audit 2026-03-09T16:29:18.364704+0000 mon.a (mon.0) 4064 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:19 vm01 bash[20728]: audit 2026-03-09T16:29:18.364704+0000 mon.a (mon.0) 4064 : audit [INF] from='client.? 
192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:19 vm01 bash[20728]: cluster 2026-03-09T16:29:18.375176+0000 mon.a (mon.0) 4065 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:19 vm01 bash[20728]: cluster 2026-03-09T16:29:18.375176+0000 mon.a (mon.0) 4065 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:19 vm01 bash[20728]: cluster 2026-03-09T16:29:19.162628+0000 mgr.y (mgr.14520) 1315 : cluster [DBG] pgmap v1760: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:19 vm01 bash[20728]: cluster 2026-03-09T16:29:19.162628+0000 mgr.y (mgr.14520) 1315 : cluster [DBG] pgmap v1760: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:19 vm01 bash[28152]: audit 2026-03-09T16:29:18.364704+0000 mon.a (mon.0) 4064 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:19 vm01 bash[28152]: audit 2026-03-09T16:29:18.364704+0000 mon.a (mon.0) 4064 : audit [INF] from='client.? 192.168.123.101:0/2457711976' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "field": "max_objects", "val": "0"}]': finished 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:19 vm01 bash[28152]: cluster 2026-03-09T16:29:18.375176+0000 mon.a (mon.0) 4065 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:19 vm01 bash[28152]: cluster 2026-03-09T16:29:18.375176+0000 mon.a (mon.0) 4065 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:19 vm01 bash[28152]: cluster 2026-03-09T16:29:19.162628+0000 mgr.y (mgr.14520) 1315 : cluster [DBG] pgmap v1760: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:29:19.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:19 vm01 bash[28152]: cluster 2026-03-09T16:29:19.162628+0000 mgr.y (mgr.14520) 1315 : cluster [DBG] pgmap v1760: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-09T16:29:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:22 vm09 bash[22983]: cluster 2026-03-09T16:29:21.162980+0000 mgr.y (mgr.14520) 1316 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 878 B/s rd, 527 B/s wr, 1 op/s 2026-03-09T16:29:22.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:22 vm09 bash[22983]: cluster 2026-03-09T16:29:21.162980+0000 mgr.y (mgr.14520) 1316 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 878 B/s rd, 527 B/s wr, 1 op/s 
2026-03-09T16:29:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:22 vm01 bash[20728]: cluster 2026-03-09T16:29:21.162980+0000 mgr.y (mgr.14520) 1316 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 878 B/s rd, 527 B/s wr, 1 op/s 2026-03-09T16:29:22.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:22 vm01 bash[20728]: cluster 2026-03-09T16:29:21.162980+0000 mgr.y (mgr.14520) 1316 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 878 B/s rd, 527 B/s wr, 1 op/s 2026-03-09T16:29:22.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:22 vm01 bash[28152]: cluster 2026-03-09T16:29:21.162980+0000 mgr.y (mgr.14520) 1316 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 878 B/s rd, 527 B/s wr, 1 op/s 2026-03-09T16:29:22.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:22 vm01 bash[28152]: cluster 2026-03-09T16:29:21.162980+0000 mgr.y (mgr.14520) 1316 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 878 B/s rd, 527 B/s wr, 1 op/s 2026-03-09T16:29:23.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:22 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:29:22] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:29:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:24 vm09 bash[22983]: cluster 2026-03-09T16:29:23.163339+0000 mgr.y (mgr.14520) 1317 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 450 B/s wr, 0 op/s 2026-03-09T16:29:24.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:24 vm09 bash[22983]: cluster 2026-03-09T16:29:23.163339+0000 mgr.y (mgr.14520) 1317 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 450 B/s wr, 0 op/s 2026-03-09T16:29:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:24 vm01 bash[20728]: cluster 2026-03-09T16:29:23.163339+0000 mgr.y (mgr.14520) 1317 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 450 B/s wr, 0 op/s 2026-03-09T16:29:24.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:24 vm01 bash[20728]: cluster 2026-03-09T16:29:23.163339+0000 mgr.y (mgr.14520) 1317 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 450 B/s wr, 0 op/s 2026-03-09T16:29:24.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:24 vm01 bash[28152]: cluster 2026-03-09T16:29:23.163339+0000 mgr.y (mgr.14520) 1317 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 450 B/s wr, 0 op/s 2026-03-09T16:29:24.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:24 vm01 bash[28152]: cluster 2026-03-09T16:29:23.163339+0000 mgr.y (mgr.14520) 1317 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 450 B/s wr, 0 op/s 2026-03-09T16:29:26.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:26 vm09 bash[22983]: cluster 2026-03-09T16:29:25.163965+0000 mgr.y (mgr.14520) 1318 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-09T16:29:26.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:26 vm09 bash[22983]: cluster 2026-03-09T16:29:25.163965+0000 mgr.y (mgr.14520) 1318 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-09T16:29:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:26 vm01 bash[20728]: cluster 2026-03-09T16:29:25.163965+0000 mgr.y (mgr.14520) 1318 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-09T16:29:26.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:26 vm01 bash[20728]: cluster 2026-03-09T16:29:25.163965+0000 mgr.y (mgr.14520) 1318 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-09T16:29:26.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:26 vm01 bash[28152]: cluster 2026-03-09T16:29:25.163965+0000 mgr.y (mgr.14520) 1318 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-09T16:29:26.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:26 vm01 bash[28152]: cluster 2026-03-09T16:29:25.163965+0000 mgr.y (mgr.14520) 1318 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-09T16:29:28.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:29:27 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:29:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:28 vm09 bash[22983]: cluster 2026-03-09T16:29:27.164298+0000 mgr.y (mgr.14520) 1319 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:29:28.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:28 vm09 bash[22983]: cluster 2026-03-09T16:29:27.164298+0000 mgr.y (mgr.14520) 1319 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:29:28.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:28 vm01 bash[20728]: cluster 2026-03-09T16:29:27.164298+0000 mgr.y (mgr.14520) 1319 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:29:28.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:28 vm01 bash[20728]: cluster 2026-03-09T16:29:27.164298+0000 mgr.y (mgr.14520) 1319 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:29:28.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:28 vm01 bash[28152]: cluster 2026-03-09T16:29:27.164298+0000 mgr.y (mgr.14520) 1319 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:29:28.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:28 vm01 bash[28152]: cluster 2026-03-09T16:29:27.164298+0000 mgr.y (mgr.14520) 1319 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-09T16:29:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:29 vm09 bash[22983]: audit 
2026-03-09T16:29:27.791922+0000 mgr.y (mgr.14520) 1320 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:29.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:29 vm09 bash[22983]: audit 2026-03-09T16:29:27.791922+0000 mgr.y (mgr.14520) 1320 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:29 vm01 bash[20728]: audit 2026-03-09T16:29:27.791922+0000 mgr.y (mgr.14520) 1320 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:29 vm01 bash[20728]: audit 2026-03-09T16:29:27.791922+0000 mgr.y (mgr.14520) 1320 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:29.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:29 vm01 bash[28152]: audit 2026-03-09T16:29:27.791922+0000 mgr.y (mgr.14520) 1320 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:29.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:29 vm01 bash[28152]: audit 2026-03-09T16:29:27.791922+0000 mgr.y (mgr.14520) 1320 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:30 vm09 bash[22983]: cluster 2026-03-09T16:29:29.164948+0000 mgr.y (mgr.14520) 1321 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-09T16:29:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:30 vm09 bash[22983]: cluster 2026-03-09T16:29:29.164948+0000 mgr.y (mgr.14520) 1321 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-09T16:29:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:30 vm09 bash[22983]: audit 2026-03-09T16:29:30.070194+0000 mon.a (mon.0) 4066 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:30.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:30 vm09 bash[22983]: audit 2026-03-09T16:29:30.070194+0000 mon.a (mon.0) 4066 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:30 vm01 bash[28152]: cluster 2026-03-09T16:29:29.164948+0000 mgr.y (mgr.14520) 1321 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-09T16:29:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:30 vm01 bash[28152]: cluster 2026-03-09T16:29:29.164948+0000 mgr.y (mgr.14520) 1321 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-09T16:29:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:30 vm01 bash[28152]: audit 2026-03-09T16:29:30.070194+0000 
mon.a (mon.0) 4066 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:30.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:30 vm01 bash[28152]: audit 2026-03-09T16:29:30.070194+0000 mon.a (mon.0) 4066 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:30 vm01 bash[20728]: cluster 2026-03-09T16:29:29.164948+0000 mgr.y (mgr.14520) 1321 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-09T16:29:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:30 vm01 bash[20728]: cluster 2026-03-09T16:29:29.164948+0000 mgr.y (mgr.14520) 1321 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-09T16:29:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:30 vm01 bash[20728]: audit 2026-03-09T16:29:30.070194+0000 mon.a (mon.0) 4066 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:30.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:30 vm01 bash[20728]: audit 2026-03-09T16:29:30.070194+0000 mon.a (mon.0) 4066 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:32 vm09 bash[22983]: cluster 2026-03-09T16:29:31.165272+0000 mgr.y (mgr.14520) 1322 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:32.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:32 vm09 bash[22983]: cluster 2026-03-09T16:29:31.165272+0000 mgr.y (mgr.14520) 1322 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:32.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:32 vm01 bash[20728]: cluster 2026-03-09T16:29:31.165272+0000 mgr.y (mgr.14520) 1322 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:32.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:32 vm01 bash[20728]: cluster 2026-03-09T16:29:31.165272+0000 mgr.y (mgr.14520) 1322 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:32.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:32 vm01 bash[28152]: cluster 2026-03-09T16:29:31.165272+0000 mgr.y (mgr.14520) 1322 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:32.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:32 vm01 bash[28152]: cluster 2026-03-09T16:29:31.165272+0000 mgr.y (mgr.14520) 1322 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:33.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:32 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:29:32] "GET /metrics HTTP/1.1" 503 1621 
"" "Prometheus/2.51.0" 2026-03-09T16:29:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:34 vm09 bash[22983]: cluster 2026-03-09T16:29:33.165607+0000 mgr.y (mgr.14520) 1323 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:34.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:34 vm09 bash[22983]: cluster 2026-03-09T16:29:33.165607+0000 mgr.y (mgr.14520) 1323 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:34.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:34 vm01 bash[20728]: cluster 2026-03-09T16:29:33.165607+0000 mgr.y (mgr.14520) 1323 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:34.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:34 vm01 bash[20728]: cluster 2026-03-09T16:29:33.165607+0000 mgr.y (mgr.14520) 1323 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:34.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:34 vm01 bash[28152]: cluster 2026-03-09T16:29:33.165607+0000 mgr.y (mgr.14520) 1323 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:34.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:34 vm01 bash[28152]: cluster 2026-03-09T16:29:33.165607+0000 mgr.y (mgr.14520) 1323 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:36 vm09 bash[22983]: cluster 2026-03-09T16:29:35.166315+0000 mgr.y (mgr.14520) 1324 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:36.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:36 vm09 bash[22983]: cluster 2026-03-09T16:29:35.166315+0000 mgr.y (mgr.14520) 1324 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:36.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:36 vm01 bash[20728]: cluster 2026-03-09T16:29:35.166315+0000 mgr.y (mgr.14520) 1324 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:36.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:36 vm01 bash[20728]: cluster 2026-03-09T16:29:35.166315+0000 mgr.y (mgr.14520) 1324 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:36.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:36 vm01 bash[28152]: cluster 2026-03-09T16:29:35.166315+0000 mgr.y (mgr.14520) 1324 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:36.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:36 vm01 bash[28152]: cluster 2026-03-09T16:29:35.166315+0000 mgr.y (mgr.14520) 1324 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:38.132 
INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:29:37 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:29:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:38 vm09 bash[22983]: cluster 2026-03-09T16:29:37.166671+0000 mgr.y (mgr.14520) 1325 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:38.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:38 vm09 bash[22983]: cluster 2026-03-09T16:29:37.166671+0000 mgr.y (mgr.14520) 1325 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:38 vm01 bash[20728]: cluster 2026-03-09T16:29:37.166671+0000 mgr.y (mgr.14520) 1325 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:38.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:38 vm01 bash[20728]: cluster 2026-03-09T16:29:37.166671+0000 mgr.y (mgr.14520) 1325 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:38.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:38 vm01 bash[28152]: cluster 2026-03-09T16:29:37.166671+0000 mgr.y (mgr.14520) 1325 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:38.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:38 vm01 bash[28152]: cluster 2026-03-09T16:29:37.166671+0000 mgr.y (mgr.14520) 1325 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:39 vm09 bash[22983]: audit 2026-03-09T16:29:37.796720+0000 mgr.y (mgr.14520) 1326 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:39.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:39 vm09 bash[22983]: audit 2026-03-09T16:29:37.796720+0000 mgr.y (mgr.14520) 1326 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:39 vm01 bash[20728]: audit 2026-03-09T16:29:37.796720+0000 mgr.y (mgr.14520) 1326 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:39.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:39 vm01 bash[20728]: audit 2026-03-09T16:29:37.796720+0000 mgr.y (mgr.14520) 1326 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:39.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:39 vm01 bash[28152]: audit 2026-03-09T16:29:37.796720+0000 mgr.y (mgr.14520) 1326 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:39.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:39 vm01 bash[28152]: audit 2026-03-09T16:29:37.796720+0000 mgr.y (mgr.14520) 1326 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-09T16:29:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:40 vm09 bash[22983]: cluster 2026-03-09T16:29:39.167429+0000 mgr.y (mgr.14520) 1327 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:40.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:40 vm09 bash[22983]: cluster 2026-03-09T16:29:39.167429+0000 mgr.y (mgr.14520) 1327 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:40.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:40 vm01 bash[20728]: cluster 2026-03-09T16:29:39.167429+0000 mgr.y (mgr.14520) 1327 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:40.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:40 vm01 bash[20728]: cluster 2026-03-09T16:29:39.167429+0000 mgr.y (mgr.14520) 1327 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:40 vm01 bash[28152]: cluster 2026-03-09T16:29:39.167429+0000 mgr.y (mgr.14520) 1327 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:40.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:40 vm01 bash[28152]: cluster 2026-03-09T16:29:39.167429+0000 mgr.y (mgr.14520) 1327 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:41.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:41 vm09 bash[22983]: cluster 2026-03-09T16:29:41.167796+0000 mgr.y (mgr.14520) 1328 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:41.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:41 vm09 bash[22983]: cluster 2026-03-09T16:29:41.167796+0000 mgr.y (mgr.14520) 1328 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:41.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:41 vm01 bash[20728]: cluster 2026-03-09T16:29:41.167796+0000 mgr.y (mgr.14520) 1328 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:41.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:41 vm01 bash[20728]: cluster 2026-03-09T16:29:41.167796+0000 mgr.y (mgr.14520) 1328 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:41.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:41 vm01 bash[28152]: cluster 2026-03-09T16:29:41.167796+0000 mgr.y (mgr.14520) 1328 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:41.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:41 vm01 bash[28152]: cluster 2026-03-09T16:29:41.167796+0000 mgr.y (mgr.14520) 1328 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:43.173 
INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:42 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:29:42] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:29:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:44 vm09 bash[22983]: cluster 2026-03-09T16:29:43.168125+0000 mgr.y (mgr.14520) 1329 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:44.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:44 vm09 bash[22983]: cluster 2026-03-09T16:29:43.168125+0000 mgr.y (mgr.14520) 1329 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:44.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:44 vm01 bash[20728]: cluster 2026-03-09T16:29:43.168125+0000 mgr.y (mgr.14520) 1329 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:44.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:44 vm01 bash[20728]: cluster 2026-03-09T16:29:43.168125+0000 mgr.y (mgr.14520) 1329 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:44.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:44 vm01 bash[28152]: cluster 2026-03-09T16:29:43.168125+0000 mgr.y (mgr.14520) 1329 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:44.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:44 vm01 bash[28152]: cluster 2026-03-09T16:29:43.168125+0000 mgr.y (mgr.14520) 1329 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:45 vm09 bash[22983]: audit 2026-03-09T16:29:45.077032+0000 mon.a (mon.0) 4067 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:45.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:45 vm09 bash[22983]: audit 2026-03-09T16:29:45.077032+0000 mon.a (mon.0) 4067 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:45 vm01 bash[20728]: audit 2026-03-09T16:29:45.077032+0000 mon.a (mon.0) 4067 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:45.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:45 vm01 bash[20728]: audit 2026-03-09T16:29:45.077032+0000 mon.a (mon.0) 4067 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:45.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:45 vm01 bash[28152]: audit 2026-03-09T16:29:45.077032+0000 mon.a (mon.0) 4067 : audit [DBG] from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:45.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:45 vm01 bash[28152]: audit 2026-03-09T16:29:45.077032+0000 mon.a (mon.0) 4067 : audit [DBG] 
from='mgr.14520 192.168.123.101:0/3742478177' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T16:29:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:46 vm09 bash[22983]: cluster 2026-03-09T16:29:45.168646+0000 mgr.y (mgr.14520) 1330 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:46.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:46 vm09 bash[22983]: cluster 2026-03-09T16:29:45.168646+0000 mgr.y (mgr.14520) 1330 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:46 vm01 bash[20728]: cluster 2026-03-09T16:29:45.168646+0000 mgr.y (mgr.14520) 1330 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:46.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:46 vm01 bash[20728]: cluster 2026-03-09T16:29:45.168646+0000 mgr.y (mgr.14520) 1330 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:46.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:46 vm01 bash[28152]: cluster 2026-03-09T16:29:45.168646+0000 mgr.y (mgr.14520) 1330 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:46.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:46 vm01 bash[28152]: cluster 2026-03-09T16:29:45.168646+0000 mgr.y (mgr.14520) 1330 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:48.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:29:47 vm09 bash[48403]: debug there is no tcmu-runner data available 2026-03-09T16:29:48.380 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool delete 596e5e1f-ecde-406d-b4d0-afd8854e4a60 596e5e1f-ecde-406d-b4d0-afd8854e4a60 --yes-i-really-really-mean-it 2026-03-09T16:29:48.443 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 -- 192.168.123.101:0/1789490747 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f87b8109f50 msgr2=0x7f87b8111ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 --2- 192.168.123.101:0/1789490747 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f87b8109f50 0x7f87b8111ad0 secure :-1 s=READY pgs=3027 cs=0 l=1 rev1=1 crypto rx=0x7f87b400b0a0 tx=0x7f87b401caf0 comp rx=0 tx=0).stop 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 -- 192.168.123.101:0/1789490747 shutdown_connections 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 --2- 192.168.123.101:0/1789490747 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f87b8109f50 0x7f87b8111ad0 unknown :-1 s=CLOSED pgs=3027 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 --2- 192.168.123.101:0/1789490747 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f87b81057d0 
0x7f87b8109820 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 --2- 192.168.123.101:0/1789490747 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f87b8104e20 0x7f87b8105200 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 -- 192.168.123.101:0/1789490747 >> 192.168.123.101:0/1789490747 conn(0x7f87b8100880 msgr2=0x7f87b8102ca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 -- 192.168.123.101:0/1789490747 shutdown_connections 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 -- 192.168.123.101:0/1789490747 wait complete. 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 Processor -- start 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 -- start start 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f87b8104e20 0x7f87b819ef80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:48.444 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f87b81057d0 0x7f87b819f4c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f87b8109f50 0x7f87b81a3850 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7f87b81169d0 con 0x7f87b8109f50 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7f87b8116850 con 0x7f87b8104e20 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7f87b8116b50 con 0x7f87b81057d0 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bdfcb640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f87b8109f50 0x7f87b81a3850 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bdfcb640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f87b8109f50 0x7f87b81a3850 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:51728/0 (socket says 192.168.123.101:51728) 
2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bdfcb640 1 -- 192.168.123.101:0/2020256085 learned_addr learned my addr 192.168.123.101:0/2020256085 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bdfcb640 1 -- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f87b81057d0 msgr2=0x7f87b819f4c0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bdfcb640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f87b81057d0 0x7f87b819f4c0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bdfcb640 1 -- 192.168.123.101:0/2020256085 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f87b8104e20 msgr2=0x7f87b819ef80 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bdfcb640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f87b8104e20 0x7f87b819ef80 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bdfcb640 1 -- 192.168.123.101:0/2020256085 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f87b81a3fd0 con 0x7f87b8109f50 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bdfcb640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f87b8109f50 0x7f87b81a3850 secure :-1 s=READY pgs=3107 cs=0 l=1 rev1=1 crypto rx=0x7f87b4098620 tx=0x7f87b40077f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87a67fc640 1 -- 192.168.123.101:0/2020256085 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f87b40079e0 con 0x7f87b8109f50 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87a67fc640 1 -- 192.168.123.101:0/2020256085 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f87b4004110 con 0x7f87b8109f50 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87a67fc640 1 -- 192.168.123.101:0/2020256085 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f87b40ae690 con 0x7f87b8109f50 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 -- 192.168.123.101:0/2020256085 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f87b81a4260 con 0x7f87b8109f50 2026-03-09T16:29:48.445 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.447+0000 7f87bfa55640 1 -- 192.168.123.101:0/2020256085 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f87b81abb00 con 0x7f87b8109f50 2026-03-09T16:29:48.447 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.451+0000 
7f87bfa55640 1 -- 192.168.123.101:0/2020256085 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8780005190 con 0x7f87b8109f50 2026-03-09T16:29:48.447 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.451+0000 7f87a67fc640 1 -- 192.168.123.101:0/2020256085 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f87b4005ce0 con 0x7f87b8109f50 2026-03-09T16:29:48.450 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.455+0000 7f87a67fc640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8794077710 0x7f8794079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:48.450 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.455+0000 7f87a67fc640 1 -- 192.168.123.101:0/2020256085 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(783..783 src has 257..783) ==== 9373+0+0 (secure 0 0 0) 0x7f87b4133ef0 con 0x7f87b8109f50 2026-03-09T16:29:48.450 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.455+0000 7f87a67fc640 1 -- 192.168.123.101:0/2020256085 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=784}) -- 0x7f8794083990 con 0x7f87b8109f50 2026-03-09T16:29:48.450 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.455+0000 7f87a67fc640 1 -- 192.168.123.101:0/2020256085 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f87b40b2020 con 0x7f87b8109f50 2026-03-09T16:29:48.450 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.455+0000 7f87bd7ca640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8794077710 0x7f8794079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:48.450 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.455+0000 7f87bd7ca640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8794077710 0x7f8794079bd0 secure :-1 s=READY pgs=4298 cs=0 l=1 rev1=1 crypto rx=0x7f87ac009a90 tx=0x7f87ac009450 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:29:48.538 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:48.543+0000 7f87bfa55640 1 -- 192.168.123.101:0/2020256085 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true} v 0) -- 0x7f8780005480 con 0x7f87b8109f50 2026-03-09T16:29:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:48 vm09 bash[22983]: cluster 2026-03-09T16:29:47.169020+0000 mgr.y (mgr.14520) 1331 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:48.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:48 vm09 bash[22983]: cluster 2026-03-09T16:29:47.169020+0000 mgr.y (mgr.14520) 1331 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:48.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:48 vm01 
bash[28152]: cluster 2026-03-09T16:29:47.169020+0000 mgr.y (mgr.14520) 1331 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:48.674 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:48 vm01 bash[28152]: cluster 2026-03-09T16:29:47.169020+0000 mgr.y (mgr.14520) 1331 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:48.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:48 vm01 bash[20728]: cluster 2026-03-09T16:29:47.169020+0000 mgr.y (mgr.14520) 1331 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:48.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:48 vm01 bash[20728]: cluster 2026-03-09T16:29:47.169020+0000 mgr.y (mgr.14520) 1331 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T16:29:49.255 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.259+0000 7f87a67fc640 1 -- 192.168.123.101:0/2020256085 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]=0 pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' removed v784) ==== 248+0+0 (secure 0 0 0) 0x7f87b4016800 con 0x7f87b8109f50 2026-03-09T16:29:49.258 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.263+0000 7f87a67fc640 1 -- 192.168.123.101:0/2020256085 <== mon.0 v2:192.168.123.101:3300/0 8 ==== osd_map(784..784 src has 257..784) ==== 296+0+0 (secure 0 0 0) 0x7f87b40f80c0 con 0x7f87b8109f50 2026-03-09T16:29:49.258 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.263+0000 7f87a67fc640 1 -- 192.168.123.101:0/2020256085 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=785}) -- 0x7f8794084760 con 0x7f87b8109f50 2026-03-09T16:29:49.315 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.319+0000 7f87bfa55640 1 -- 192.168.123.101:0/2020256085 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true} v 0) -- 0x7f87800047c0 con 0x7f87b8109f50 2026-03-09T16:29:49.316 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.319+0000 7f87a67fc640 1 -- 192.168.123.101:0/2020256085 <== mon.0 v2:192.168.123.101:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]=0 pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' does not exist v784) ==== 255+0+0 (secure 0 0 0) 0x7f87b41000d0 con 0x7f87b8109f50 2026-03-09T16:29:49.316 INFO:tasks.workunit.client.0.vm01.stderr:pool '596e5e1f-ecde-406d-b4d0-afd8854e4a60' does not exist 2026-03-09T16:29:49.318 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 7f87bfa55640 1 -- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8794077710 msgr2=0x7f8794079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:49.318 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 
7f87bfa55640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8794077710 0x7f8794079bd0 secure :-1 s=READY pgs=4298 cs=0 l=1 rev1=1 crypto rx=0x7f87ac009a90 tx=0x7f87ac009450 comp rx=0 tx=0).stop 2026-03-09T16:29:49.318 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 7f87bfa55640 1 -- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f87b8109f50 msgr2=0x7f87b81a3850 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:49.318 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 7f87bfa55640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f87b8109f50 0x7f87b81a3850 secure :-1 s=READY pgs=3107 cs=0 l=1 rev1=1 crypto rx=0x7f87b4098620 tx=0x7f87b40077f0 comp rx=0 tx=0).stop 2026-03-09T16:29:49.318 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 7f87bfa55640 1 -- 192.168.123.101:0/2020256085 shutdown_connections 2026-03-09T16:29:49.318 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 7f87bfa55640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7f8794077710 0x7f8794079bd0 unknown :-1 s=CLOSED pgs=4298 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:49.319 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 7f87bfa55640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7f87b8109f50 0x7f87b81a3850 unknown :-1 s=CLOSED pgs=3107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:49.319 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 7f87bfa55640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7f87b81057d0 0x7f87b819f4c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:49.319 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 7f87bfa55640 1 --2- 192.168.123.101:0/2020256085 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f87b8104e20 0x7f87b819ef80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:49.319 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 7f87bfa55640 1 -- 192.168.123.101:0/2020256085 >> 192.168.123.101:0/2020256085 conn(0x7f87b8100880 msgr2=0x7f87b8100ef0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:29:49.319 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 7f87bfa55640 1 -- 192.168.123.101:0/2020256085 shutdown_connections 2026-03-09T16:29:49.319 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.323+0000 7f87bfa55640 1 -- 192.168.123.101:0/2020256085 wait complete. 
2026-03-09T16:29:49.331 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool delete bdba544a-024a-4b6d-a6dd-2ee6648240fd bdba544a-024a-4b6d-a6dd-2ee6648240fd --yes-i-really-really-mean-it 2026-03-09T16:29:49.392 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.395+0000 7fabb4b6a640 1 -- 192.168.123.101:0/284915553 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fabb0101820 msgr2=0x7fabb0101c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:49.392 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.395+0000 7fabb4b6a640 1 --2- 192.168.123.101:0/284915553 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fabb0101820 0x7fabb0101c80 secure :-1 s=READY pgs=3028 cs=0 l=1 rev1=1 crypto rx=0x7faba4009a60 tx=0x7faba401c990 comp rx=0 tx=0).stop 2026-03-09T16:29:49.393 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.395+0000 7fabb4b6a640 1 -- 192.168.123.101:0/284915553 shutdown_connections 2026-03-09T16:29:49.393 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.395+0000 7fabb4b6a640 1 --2- 192.168.123.101:0/284915553 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fabb01021c0 0x7fabb010e6f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:49.393 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.395+0000 7fabb4b6a640 1 --2- 192.168.123.101:0/284915553 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fabb0101820 0x7fabb0101c80 unknown :-1 s=CLOSED pgs=3028 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:49.393 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.395+0000 7fabb4b6a640 1 --2- 192.168.123.101:0/284915553 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fabb0107820 0x7fabb0107c00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:49.393 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.395+0000 7fabb4b6a640 1 -- 192.168.123.101:0/284915553 >> 192.168.123.101:0/284915553 conn(0x7fabb00fd540 msgr2=0x7fabb00ff960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:29:49.393 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.395+0000 7fabb4b6a640 1 -- 192.168.123.101:0/284915553 shutdown_connections 2026-03-09T16:29:49.393 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.395+0000 7fabb4b6a640 1 -- 192.168.123.101:0/284915553 wait complete. 
2026-03-09T16:29:49.393 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.395+0000 7fabb4b6a640 1 Processor -- start 2026-03-09T16:29:49.393 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabb4b6a640 1 -- start start 2026-03-09T16:29:49.396 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabb4b6a640 1 --2- >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fabb0101820 0x7fabb010f000 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:49.396 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabb4b6a640 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fabb01021c0 0x7fabb010f540 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:49.396 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabb4b6a640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fabb0107820 0x7fabb010fa80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:49.396 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabb4b6a640 1 -- --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_getmap magic: 0 -- 0x7fabb0114890 con 0x7fabb0107820 2026-03-09T16:29:49.396 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabb4b6a640 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 -- 0x7fabb0114710 con 0x7fabb01021c0 2026-03-09T16:29:49.396 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabb4b6a640 1 -- --> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] -- mon_getmap magic: 0 -- 0x7fabb0114a10 con 0x7fabb0101820 2026-03-09T16:29:49.396 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabaed76640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fabb0107820 0x7fabb010fa80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:49.396 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabaed76640 1 --2- >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fabb0107820 0x7fabb010fa80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.101:3300/0 says I am v2:192.168.123.101:51742/0 (socket says 192.168.123.101:51742) 2026-03-09T16:29:49.396 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabaed76640 1 -- 192.168.123.101:0/3102930177 learned_addr learned my addr 192.168.123.101:0/3102930177 (peer_addr_for_me v2:192.168.123.101:0/0) 2026-03-09T16:29:49.396 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabadd74640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fabb01021c0 0x7fabb010f540 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:49.397 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabaed76640 1 -- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fabb0101820 msgr2=0x7fabb010f000 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:49.397 
INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabae575640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fabb0101820 0x7fabb010f000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:49.397 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabaed76640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fabb0101820 0x7fabb010f000 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:49.397 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabaed76640 1 -- 192.168.123.101:0/3102930177 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fabb01021c0 msgr2=0x7fabb010f540 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:49.397 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabaed76640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fabb01021c0 0x7fabb010f540 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:49.397 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabaed76640 1 -- 192.168.123.101:0/3102930177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fabb01ab590 con 0x7fabb0107820 2026-03-09T16:29:49.397 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabadd74640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fabb01021c0 0x7fabb010f540 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T16:29:49.397 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fabaed76640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fabb0107820 0x7fabb010fa80 secure :-1 s=READY pgs=3108 cs=0 l=1 rev1=1 crypto rx=0x7faba000ef30 tx=0x7faba000c550 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:29:49.397 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fab977fe640 1 -- 192.168.123.101:0/3102930177 <== mon.0 v2:192.168.123.101:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7faba0019070 con 0x7fabb0107820 2026-03-09T16:29:49.397 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.399+0000 7fab977fe640 1 -- 192.168.123.101:0/3102930177 <== mon.0 v2:192.168.123.101:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7faba00092d0 con 0x7fabb0107820 2026-03-09T16:29:49.399 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.403+0000 7fab977fe640 1 -- 192.168.123.101:0/3102930177 <== mon.0 v2:192.168.123.101:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7faba0004780 con 0x7fabb0107820 2026-03-09T16:29:49.399 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.403+0000 7fabae575640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fabb0101820 0x7fabb010f000 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T16:29:49.399 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.403+0000 7fabb4b6a640 1 -- 192.168.123.101:0/3102930177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fabb01ab820 con 0x7fabb0107820 2026-03-09T16:29:49.399 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.403+0000 7fabb4b6a640 1 -- 192.168.123.101:0/3102930177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fabb01abd30 con 0x7fabb0107820 2026-03-09T16:29:49.401 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.403+0000 7fabb4b6a640 1 -- 192.168.123.101:0/3102930177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fab7c005190 con 0x7fabb0107820 2026-03-09T16:29:49.401 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.403+0000 7fab977fe640 1 -- 192.168.123.101:0/3102930177 <== mon.0 v2:192.168.123.101:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7faba0005ce0 con 0x7fabb0107820 2026-03-09T16:29:49.401 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.403+0000 7fab977fe640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fab800776d0 0x7fab80079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T16:29:49.401 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.403+0000 7fab977fe640 1 -- 192.168.123.101:0/3102930177 <== mon.0 v2:192.168.123.101:3300/0 5 ==== osd_map(784..784 src has 257..784) ==== 8985+0+0 (secure 0 0 0) 0x7faba0099b10 con 0x7fabb0107820 2026-03-09T16:29:49.401 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.403+0000 7fab977fe640 1 -- 192.168.123.101:0/3102930177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=785}) -- 0x7fab800830d0 con 0x7fabb0107820 2026-03-09T16:29:49.402 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.407+0000 7fab977fe640 1 -- 192.168.123.101:0/3102930177 <== mon.0 v2:192.168.123.101:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7faba0010040 con 0x7fabb0107820 2026-03-09T16:29:49.402 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.407+0000 7fabae575640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fab800776d0 0x7fab80079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T16:29:49.402 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.407+0000 7fabae575640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fab800776d0 0x7fab80079b90 secure :-1 s=READY pgs=4299 cs=0 l=1 rev1=1 crypto rx=0x7fab98006fd0 tx=0x7fab98008040 comp rx=0 tx=0).ready entity=mgr.14520 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T16:29:49.492 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:49.495+0000 7fabb4b6a640 1 -- 192.168.123.101:0/3102930177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true} v 0) -- 0x7fab7c005480 con 0x7fabb0107820 2026-03-09T16:29:49.632 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:49 vm09 bash[22983]: audit 2026-03-09T16:29:47.807396+0000 mgr.y (mgr.14520) 1332 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:49 vm09 bash[22983]: audit 2026-03-09T16:29:47.807396+0000 mgr.y (mgr.14520) 1332 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:49 vm09 bash[22983]: audit 2026-03-09T16:29:48.545753+0000 mon.a (mon.0) 4068 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:49.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:49 vm09 bash[22983]: audit 2026-03-09T16:29:48.545753+0000 mon.a (mon.0) 4068 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:49.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:49 vm01 bash[28152]: audit 2026-03-09T16:29:47.807396+0000 mgr.y (mgr.14520) 1332 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:49.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:49 vm01 bash[28152]: audit 2026-03-09T16:29:47.807396+0000 mgr.y (mgr.14520) 1332 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:49.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:49 vm01 bash[28152]: audit 2026-03-09T16:29:48.545753+0000 mon.a (mon.0) 4068 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:49.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:49 vm01 bash[28152]: audit 2026-03-09T16:29:48.545753+0000 mon.a (mon.0) 4068 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:49 vm01 bash[20728]: audit 2026-03-09T16:29:47.807396+0000 mgr.y (mgr.14520) 1332 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:49 vm01 bash[20728]: audit 2026-03-09T16:29:47.807396+0000 mgr.y (mgr.14520) 1332 : audit [DBG] from='client.14496 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T16:29:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:49 vm01 bash[20728]: audit 2026-03-09T16:29:48.545753+0000 mon.a (mon.0) 4068 : audit [INF] from='client.? 
192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:49.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:49 vm01 bash[20728]: audit 2026-03-09T16:29:48.545753+0000 mon.a (mon.0) 4068 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.274 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.279+0000 7fab977fe640 1 -- 192.168.123.101:0/3102930177 <== mon.0 v2:192.168.123.101:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]=0 pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' removed v785) ==== 248+0+0 (secure 0 0 0) 0x7faba0065e70 con 0x7fabb0107820 2026-03-09T16:29:50.287 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.291+0000 7fab977fe640 1 -- 192.168.123.101:0/3102930177 <== mon.0 v2:192.168.123.101:3300/0 8 ==== osd_map(785..785 src has 257..785) ==== 296+0+0 (secure 0 0 0) 0x7faba005de60 con 0x7fabb0107820 2026-03-09T16:29:50.287 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.291+0000 7fab977fe640 1 -- 192.168.123.101:0/3102930177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_subscribe({osdmap=786}) -- 0x7fab80083bd0 con 0x7fabb0107820 2026-03-09T16:29:50.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.343+0000 7fabb4b6a640 1 -- 192.168.123.101:0/3102930177 --> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true} v 0) -- 0x7fab7c003270 con 0x7fabb0107820 2026-03-09T16:29:50.342 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fab977fe640 1 -- 192.168.123.101:0/3102930177 <== mon.0 v2:192.168.123.101:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]=0 pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' does not exist v785) ==== 255+0+0 (secure 0 0 0) 0x7faba006ad20 con 0x7fabb0107820 2026-03-09T16:29:50.342 INFO:tasks.workunit.client.0.vm01.stderr:pool 'bdba544a-024a-4b6d-a6dd-2ee6648240fd' does not exist 2026-03-09T16:29:50.344 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 -- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fab800776d0 msgr2=0x7fab80079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:50.344 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fab800776d0 0x7fab80079b90 secure :-1 s=READY pgs=4299 cs=0 l=1 rev1=1 crypto rx=0x7fab98006fd0 tx=0x7fab98008040 comp rx=0 tx=0).stop 2026-03-09T16:29:50.345 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 -- 192.168.123.101:0/3102930177 >> 
[v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fabb0107820 msgr2=0x7fabb010fa80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T16:29:50.345 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fabb0107820 0x7fabb010fa80 secure :-1 s=READY pgs=3108 cs=0 l=1 rev1=1 crypto rx=0x7faba000ef30 tx=0x7faba000c550 comp rx=0 tx=0).stop 2026-03-09T16:29:50.345 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 -- 192.168.123.101:0/3102930177 shutdown_connections 2026-03-09T16:29:50.345 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:6800/123914266,v1:192.168.123.101:6801/123914266] conn(0x7fab800776d0 0x7fab80079b90 unknown :-1 s=CLOSED pgs=4299 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:50.345 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] conn(0x7fabb0107820 0x7fabb010fa80 unknown :-1 s=CLOSED pgs=3108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:50.345 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fabb01021c0 0x7fabb010f540 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:50.345 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 --2- 192.168.123.101:0/3102930177 >> [v2:192.168.123.101:3301/0,v1:192.168.123.101:6790/0] conn(0x7fabb0101820 0x7fabb010f000 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T16:29:50.345 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 -- 192.168.123.101:0/3102930177 >> 192.168.123.101:0/3102930177 conn(0x7fabb00fd540 msgr2=0x7fabb0105330 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T16:29:50.345 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 -- 192.168.123.101:0/3102930177 shutdown_connections 2026-03-09T16:29:50.345 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-09T16:29:50.347+0000 7fabb4b6a640 1 -- 192.168.123.101:0/3102930177 wait complete. 2026-03-09T16:29:50.359 INFO:tasks.workunit.client.0.vm01.stdout:OK 2026-03-09T16:29:50.359 INFO:tasks.workunit.client.0.vm01.stderr:+ echo OK 2026-03-09T16:29:50.359 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-09T16:29:50.360 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-09T16:29:50.408 INFO:tasks.workunit:Stopping ['rados/test.sh', 'rados/test_pool_quota.sh'] on client.0... 
2026-03-09T16:29:50.409 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-09T16:29:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:50 vm09 bash[22983]: cluster 2026-03-09T16:29:49.169766+0000 mgr.y (mgr.14520) 1333 : cluster [DBG] pgmap v1775: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:50 vm09 bash[22983]: cluster 2026-03-09T16:29:49.169766+0000 mgr.y (mgr.14520) 1333 : cluster [DBG] pgmap v1775: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:50 vm09 bash[22983]: audit 2026-03-09T16:29:49.262478+0000 mon.a (mon.0) 4069 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:50 vm09 bash[22983]: audit 2026-03-09T16:29:49.262478+0000 mon.a (mon.0) 4069 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:50 vm09 bash[22983]: cluster 2026-03-09T16:29:49.268702+0000 mon.a (mon.0) 4070 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T16:29:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:50 vm09 bash[22983]: cluster 2026-03-09T16:29:49.268702+0000 mon.a (mon.0) 4070 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T16:29:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:50 vm09 bash[22983]: audit 2026-03-09T16:29:49.323721+0000 mon.a (mon.0) 4071 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:50 vm09 bash[22983]: audit 2026-03-09T16:29:49.323721+0000 mon.a (mon.0) 4071 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:50 vm09 bash[22983]: audit 2026-03-09T16:29:49.500378+0000 mon.a (mon.0) 4072 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:50 vm09 bash[22983]: audit 2026-03-09T16:29:49.500378+0000 mon.a (mon.0) 4072 : audit [INF] from='client.? 
192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:50 vm01 bash[28152]: cluster 2026-03-09T16:29:49.169766+0000 mgr.y (mgr.14520) 1333 : cluster [DBG] pgmap v1775: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:50 vm01 bash[28152]: cluster 2026-03-09T16:29:49.169766+0000 mgr.y (mgr.14520) 1333 : cluster [DBG] pgmap v1775: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:50 vm01 bash[28152]: audit 2026-03-09T16:29:49.262478+0000 mon.a (mon.0) 4069 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:50 vm01 bash[28152]: audit 2026-03-09T16:29:49.262478+0000 mon.a (mon.0) 4069 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:50 vm01 bash[28152]: cluster 2026-03-09T16:29:49.268702+0000 mon.a (mon.0) 4070 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:50 vm01 bash[28152]: cluster 2026-03-09T16:29:49.268702+0000 mon.a (mon.0) 4070 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:50 vm01 bash[28152]: audit 2026-03-09T16:29:49.323721+0000 mon.a (mon.0) 4071 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:50 vm01 bash[28152]: audit 2026-03-09T16:29:49.323721+0000 mon.a (mon.0) 4071 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:50 vm01 bash[28152]: audit 2026-03-09T16:29:49.500378+0000 mon.a (mon.0) 4072 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:50 vm01 bash[28152]: audit 2026-03-09T16:29:49.500378+0000 mon.a (mon.0) 4072 : audit [INF] from='client.? 
192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:50 vm01 bash[20728]: cluster 2026-03-09T16:29:49.169766+0000 mgr.y (mgr.14520) 1333 : cluster [DBG] pgmap v1775: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:50 vm01 bash[20728]: cluster 2026-03-09T16:29:49.169766+0000 mgr.y (mgr.14520) 1333 : cluster [DBG] pgmap v1775: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:50 vm01 bash[20728]: audit 2026-03-09T16:29:49.262478+0000 mon.a (mon.0) 4069 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:50.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:50 vm01 bash[20728]: audit 2026-03-09T16:29:49.262478+0000 mon.a (mon.0) 4069 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:50.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:50 vm01 bash[20728]: cluster 2026-03-09T16:29:49.268702+0000 mon.a (mon.0) 4070 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T16:29:50.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:50 vm01 bash[20728]: cluster 2026-03-09T16:29:49.268702+0000 mon.a (mon.0) 4070 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-09T16:29:50.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:50 vm01 bash[20728]: audit 2026-03-09T16:29:49.323721+0000 mon.a (mon.0) 4071 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:50 vm01 bash[20728]: audit 2026-03-09T16:29:49.323721+0000 mon.a (mon.0) 4071 : audit [INF] from='client.? 192.168.123.101:0/2020256085' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "pool2": "596e5e1f-ecde-406d-b4d0-afd8854e4a60", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:50 vm01 bash[20728]: audit 2026-03-09T16:29:49.500378+0000 mon.a (mon.0) 4072 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:50 vm01 bash[20728]: audit 2026-03-09T16:29:49.500378+0000 mon.a (mon.0) 4072 : audit [INF] from='client.? 
192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:50.810 DEBUG:teuthology.parallel:result is None 2026-03-09T16:29:50.810 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T16:29:50.817 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T16:29:50.817 DEBUG:teuthology.orchestra.run.vm01:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T16:29:50.865 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T16:29:50.865 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-09T16:29:50.867 INFO:tasks.cephadm:Teardown begin 2026-03-09T16:29:50.867 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T16:29:50.919 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T16:29:50.930 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-09T16:29:50.930 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 -- ceph mgr module disable cephadm 2026-03-09T16:29:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:51 vm09 bash[22983]: cluster 2026-03-09T16:29:50.266055+0000 mon.a (mon.0) 4073 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:29:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:51 vm09 bash[22983]: cluster 2026-03-09T16:29:50.266055+0000 mon.a (mon.0) 4073 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:29:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:51 vm09 bash[22983]: audit 2026-03-09T16:29:50.281729+0000 mon.a (mon.0) 4074 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:51 vm09 bash[22983]: audit 2026-03-09T16:29:50.281729+0000 mon.a (mon.0) 4074 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:51 vm09 bash[22983]: cluster 2026-03-09T16:29:50.287424+0000 mon.a (mon.0) 4075 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T16:29:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:51 vm09 bash[22983]: cluster 2026-03-09T16:29:50.287424+0000 mon.a (mon.0) 4075 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T16:29:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:51 vm09 bash[22983]: audit 2026-03-09T16:29:50.349406+0000 mon.a (mon.0) 4076 : audit [INF] from='client.? 
192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:51.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:51 vm09 bash[22983]: audit 2026-03-09T16:29:50.349406+0000 mon.a (mon.0) 4076 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:51 vm01 bash[28152]: cluster 2026-03-09T16:29:50.266055+0000 mon.a (mon.0) 4073 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:29:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:51 vm01 bash[28152]: cluster 2026-03-09T16:29:50.266055+0000 mon.a (mon.0) 4073 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:29:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:51 vm01 bash[28152]: audit 2026-03-09T16:29:50.281729+0000 mon.a (mon.0) 4074 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:51 vm01 bash[28152]: audit 2026-03-09T16:29:50.281729+0000 mon.a (mon.0) 4074 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:51 vm01 bash[28152]: cluster 2026-03-09T16:29:50.287424+0000 mon.a (mon.0) 4075 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T16:29:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:51 vm01 bash[28152]: cluster 2026-03-09T16:29:50.287424+0000 mon.a (mon.0) 4075 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T16:29:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:51 vm01 bash[28152]: audit 2026-03-09T16:29:50.349406+0000 mon.a (mon.0) 4076 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:51.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:51 vm01 bash[28152]: audit 2026-03-09T16:29:50.349406+0000 mon.a (mon.0) 4076 : audit [INF] from='client.? 
192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:51.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:51 vm01 bash[20728]: cluster 2026-03-09T16:29:50.266055+0000 mon.a (mon.0) 4073 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:29:51.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:51 vm01 bash[20728]: cluster 2026-03-09T16:29:50.266055+0000 mon.a (mon.0) 4073 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T16:29:51.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:51 vm01 bash[20728]: audit 2026-03-09T16:29:50.281729+0000 mon.a (mon.0) 4074 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:51.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:51 vm01 bash[20728]: audit 2026-03-09T16:29:50.281729+0000 mon.a (mon.0) 4074 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T16:29:51.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:51 vm01 bash[20728]: cluster 2026-03-09T16:29:50.287424+0000 mon.a (mon.0) 4075 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T16:29:51.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:51 vm01 bash[20728]: cluster 2026-03-09T16:29:50.287424+0000 mon.a (mon.0) 4075 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-09T16:29:51.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:51 vm01 bash[20728]: audit 2026-03-09T16:29:50.349406+0000 mon.a (mon.0) 4076 : audit [INF] from='client.? 192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:51.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:51 vm01 bash[20728]: audit 2026-03-09T16:29:50.349406+0000 mon.a (mon.0) 4076 : audit [INF] from='client.? 
192.168.123.101:0/3102930177' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "pool2": "bdba544a-024a-4b6d-a6dd-2ee6648240fd", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T16:29:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:52 vm09 bash[22983]: cluster 2026-03-09T16:29:51.170204+0000 mgr.y (mgr.14520) 1334 : cluster [DBG] pgmap v1778: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:52.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:52 vm09 bash[22983]: cluster 2026-03-09T16:29:51.170204+0000 mgr.y (mgr.14520) 1334 : cluster [DBG] pgmap v1778: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:52.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:52 vm01 bash[28152]: cluster 2026-03-09T16:29:51.170204+0000 mgr.y (mgr.14520) 1334 : cluster [DBG] pgmap v1778: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:52.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:52 vm01 bash[28152]: cluster 2026-03-09T16:29:51.170204+0000 mgr.y (mgr.14520) 1334 : cluster [DBG] pgmap v1778: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:52 vm01 bash[20728]: cluster 2026-03-09T16:29:51.170204+0000 mgr.y (mgr.14520) 1334 : cluster [DBG] pgmap v1778: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:52.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:52 vm01 bash[20728]: cluster 2026-03-09T16:29:51.170204+0000 mgr.y (mgr.14520) 1334 : cluster [DBG] pgmap v1778: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T16:29:53.173 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:52 vm01 bash[21002]: ::ffff:192.168.123.109 - - [09/Mar/2026:16:29:52] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T16:29:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:53 vm09 bash[22983]: cluster 2026-03-09T16:29:53.170559+0000 mgr.y (mgr.14520) 1335 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:53.632 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 16:29:53 vm09 bash[22983]: cluster 2026-03-09T16:29:53.170559+0000 mgr.y (mgr.14520) 1335 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:53.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:53 vm01 bash[20728]: cluster 2026-03-09T16:29:53.170559+0000 mgr.y (mgr.14520) 1335 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:53.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:53 vm01 bash[20728]: cluster 2026-03-09T16:29:53.170559+0000 mgr.y (mgr.14520) 1335 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:53.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:53 vm01 bash[28152]: cluster 2026-03-09T16:29:53.170559+0000 mgr.y (mgr.14520) 1335 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB 
data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:53.673 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:53 vm01 bash[28152]: cluster 2026-03-09T16:29:53.170559+0000 mgr.y (mgr.14520) 1335 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T16:29:55.594 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/mon.c/config 2026-03-09T16:29:55.750 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T16:29:55.755+0000 7f2afa5bd640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T16:29:55.750 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T16:29:55.755+0000 7f2afa5bd640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T16:29:55.750 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T16:29:55.755+0000 7f2afa5bd640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T16:29:55.750 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T16:29:55.755+0000 7f2afa5bd640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T16:29:55.750 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T16:29:55.755+0000 7f2afa5bd640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T16:29:55.750 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T16:29:55.755+0000 7f2afa5bd640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T16:29:55.750 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T16:29:55.755+0000 7f2afa5bd640 -1 monclient: keyring not found 2026-03-09T16:29:55.750 INFO:teuthology.orchestra.run.vm01.stderr:[errno 21] error connecting to the cluster 2026-03-09T16:29:55.792 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T16:29:55.792 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-09T16:29:55.792 DEBUG:teuthology.orchestra.run.vm01:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T16:29:55.795 DEBUG:teuthology.orchestra.run.vm09:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T16:29:55.798 INFO:tasks.cephadm:Stopping all daemons... 2026-03-09T16:29:55.798 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-09T16:29:55.798 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.a 2026-03-09T16:29:55.884 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:55 vm01 systemd[1]: Stopping Ceph mon.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 
2026-03-09T16:29:56.047 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:55 vm01 bash[20728]: debug 2026-03-09T16:29:55.887+0000 7fa0eb434640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T16:29:56.047 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 16:29:55 vm01 bash[20728]: debug 2026-03-09T16:29:55.887+0000 7fa0eb434640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-09T16:29:56.047 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:55 vm01 bash[21002]: [09/Mar/2026:16:29:55] ENGINE Bus STOPPING 2026-03-09T16:29:56.081 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.a.service' 2026-03-09T16:29:56.092 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:29:56.092 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-09T16:29:56.092 INFO:tasks.cephadm.mon.b:Stopping mon.c... 2026-03-09T16:29:56.092 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.c 2026-03-09T16:29:56.332 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:56 vm01 systemd[1]: Stopping Ceph mon.c for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T16:29:56.333 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:56 vm01 bash[28152]: debug 2026-03-09T16:29:56.191+0000 7f14c1fde640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T16:29:56.333 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:56 vm01 bash[28152]: debug 2026-03-09T16:29:56.191+0000 7f14c1fde640 -1 mon.c@2(peon) e3 *** Got Signal Terminated *** 2026-03-09T16:29:56.333 INFO:journalctl@ceph.mon.c.vm01.stdout:Mar 09 16:29:56 vm01 bash[132048]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-mon-c 2026-03-09T16:29:56.367 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.c.service' 2026-03-09T16:29:56.378 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:29:56.378 INFO:tasks.cephadm.mon.b:Stopped mon.c 2026-03-09T16:29:56.378 INFO:tasks.cephadm.mon.b:Stopping mon.b... 
2026-03-09T16:29:56.378 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.b 2026-03-09T16:29:56.423 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: [09/Mar/2026:16:29:56] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T16:29:56.423 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: [09/Mar/2026:16:29:56] ENGINE Bus STOPPED 2026-03-09T16:29:56.423 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: [09/Mar/2026:16:29:56] ENGINE Bus STARTING 2026-03-09T16:29:56.619 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mon.b.service' 2026-03-09T16:29:56.643 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:29:56.643 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-09T16:29:56.643 INFO:tasks.cephadm.mgr.y:Stopping mgr.y... 2026-03-09T16:29:56.643 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.y 2026-03-09T16:29:56.649 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: [09/Mar/2026:16:29:56] ENGINE Serving on http://:::9283 2026-03-09T16:29:56.649 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: [09/Mar/2026:16:29:56] ENGINE Bus STARTED 2026-03-09T16:29:56.649 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: [09/Mar/2026:16:29:56] ENGINE Bus STOPPING 2026-03-09T16:29:56.649 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: [09/Mar/2026:16:29:56] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T16:29:56.649 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: [09/Mar/2026:16:29:56] ENGINE Bus STOPPED 2026-03-09T16:29:56.649 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: [09/Mar/2026:16:29:56] ENGINE Bus STARTING 2026-03-09T16:29:56.649 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: [09/Mar/2026:16:29:56] ENGINE Serving on http://:::9283 2026-03-09T16:29:56.649 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: [09/Mar/2026:16:29:56] ENGINE Bus STARTED 2026-03-09T16:29:56.745 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 systemd[1]: Stopping Ceph mgr.y for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T16:29:56.746 INFO:journalctl@ceph.mgr.y.vm01.stdout:Mar 09 16:29:56 vm01 bash[21002]: debug 2026-03-09T16:29:56.695+0000 7f5fd26ec640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mgr -n mgr.y -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T16:29:56.793 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.y.service' 2026-03-09T16:29:56.845 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:29:56.845 INFO:tasks.cephadm.mgr.y:Stopped mgr.y 2026-03-09T16:29:56.845 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 2026-03-09T16:29:56.845 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.x 2026-03-09T16:29:56.916 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 16:29:56 vm09 systemd[1]: Stopping Ceph mgr.x for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 
2026-03-09T16:29:56.980 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.x.service' 2026-03-09T16:29:56.980 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 16:29:56 vm09 bash[56634]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-mgr-x 2026-03-09T16:29:56.980 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 16:29:56 vm09 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.x.service: Main process exited, code=exited, status=143/n/a 2026-03-09T16:29:56.980 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 16:29:56 vm09 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@mgr.x.service: Failed with result 'exit-code'. 2026-03-09T16:29:56.991 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:29:56.991 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-09T16:29:56.991 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-09T16:29:56.991 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.0 2026-03-09T16:29:57.423 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 16:29:57 vm01 systemd[1]: Stopping Ceph osd.0 for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T16:29:57.423 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 16:29:57 vm01 bash[31061]: debug 2026-03-09T16:29:57.047+0000 7f9fba271640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T16:29:57.423 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 16:29:57 vm01 bash[31061]: debug 2026-03-09T16:29:57.047+0000 7f9fba271640 -1 osd.0 785 *** Got signal Terminated *** 2026-03-09T16:29:57.423 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 16:29:57 vm01 bash[31061]: debug 2026-03-09T16:29:57.047+0000 7f9fba271640 -1 osd.0 785 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T16:30:02.401 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 16:30:02 vm01 bash[132247]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-osd-0 2026-03-09T16:30:02.435 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.0.service' 2026-03-09T16:30:02.445 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:30:02.445 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T16:30:02.445 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-09T16:30:02.445 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.1 2026-03-09T16:30:02.673 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 16:30:02 vm01 systemd[1]: Stopping Ceph osd.1 for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 
2026-03-09T16:30:02.673 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 16:30:02 vm01 bash[36842]: debug 2026-03-09T16:30:02.531+0000 7f43de054640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T16:30:02.673 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 16:30:02 vm01 bash[36842]: debug 2026-03-09T16:30:02.531+0000 7f43de054640 -1 osd.1 785 *** Got signal Terminated *** 2026-03-09T16:30:02.673 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 16:30:02 vm01 bash[36842]: debug 2026-03-09T16:30:02.531+0000 7f43de054640 -1 osd.1 785 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T16:30:07.867 INFO:journalctl@ceph.osd.1.vm01.stdout:Mar 09 16:30:07 vm01 bash[132431]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-osd-1 2026-03-09T16:30:07.906 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.1.service' 2026-03-09T16:30:07.917 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:30:07.917 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-09T16:30:07.917 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-09T16:30:07.917 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.2 2026-03-09T16:30:08.173 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 16:30:07 vm01 systemd[1]: Stopping Ceph osd.2 for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T16:30:08.173 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 16:30:08 vm01 bash[42882]: debug 2026-03-09T16:30:08.007+0000 7fc7f79c3640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T16:30:08.173 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 16:30:08 vm01 bash[42882]: debug 2026-03-09T16:30:08.007+0000 7fc7f79c3640 -1 osd.2 785 *** Got signal Terminated *** 2026-03-09T16:30:08.173 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 16:30:08 vm01 bash[42882]: debug 2026-03-09T16:30:08.007+0000 7fc7f79c3640 -1 osd.2 785 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T16:30:13.356 INFO:journalctl@ceph.osd.2.vm01.stdout:Mar 09 16:30:13 vm01 bash[132608]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-osd-2 2026-03-09T16:30:13.406 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.2.service' 2026-03-09T16:30:13.417 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:30:13.417 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-09T16:30:13.417 INFO:tasks.cephadm.osd.3:Stopping osd.3... 2026-03-09T16:30:13.417 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.3 2026-03-09T16:30:13.673 INFO:journalctl@ceph.osd.3.vm01.stdout:Mar 09 16:30:13 vm01 systemd[1]: Stopping Ceph osd.3 for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 
2026-03-09T16:30:13.673 INFO:journalctl@ceph.osd.3.vm01.stdout:Mar 09 16:30:13 vm01 bash[49004]: debug 2026-03-09T16:30:13.507+0000 7fcc0d383640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T16:30:13.673 INFO:journalctl@ceph.osd.3.vm01.stdout:Mar 09 16:30:13 vm01 bash[49004]: debug 2026-03-09T16:30:13.507+0000 7fcc0d383640 -1 osd.3 785 *** Got signal Terminated *** 2026-03-09T16:30:13.673 INFO:journalctl@ceph.osd.3.vm01.stdout:Mar 09 16:30:13 vm01 bash[49004]: debug 2026-03-09T16:30:13.507+0000 7fcc0d383640 -1 osd.3 785 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T16:30:14.382 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 16:30:13 vm09 bash[51261]: ts=2026-03-09T16:30:13.978Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.101:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.101:8765: connect: connection refused" 2026-03-09T16:30:14.382 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 16:30:13 vm09 bash[51261]: ts=2026-03-09T16:30:13.978Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.101:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.101:8765: connect: connection refused" 2026-03-09T16:30:14.382 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 16:30:13 vm09 bash[51261]: ts=2026-03-09T16:30:13.978Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.101:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.101:8765: connect: connection refused" 2026-03-09T16:30:14.382 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 16:30:13 vm09 bash[51261]: ts=2026-03-09T16:30:13.978Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.101:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.101:8765: connect: connection refused" 2026-03-09T16:30:14.382 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 16:30:13 vm09 bash[51261]: ts=2026-03-09T16:30:13.978Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.101:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.101:8765: connect: connection refused" 2026-03-09T16:30:14.382 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 16:30:13 vm09 bash[51261]: ts=2026-03-09T16:30:13.981Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.101:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.101:8765: connect: connection refused" 2026-03-09T16:30:18.825 INFO:journalctl@ceph.osd.3.vm01.stdout:Mar 09 16:30:18 vm01 bash[132789]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-osd-3 2026-03-09T16:30:18.858 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u 
ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.3.service' 2026-03-09T16:30:18.867 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:30:18.867 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-09T16:30:18.867 INFO:tasks.cephadm.osd.4:Stopping osd.4... 2026-03-09T16:30:18.867 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.4 2026-03-09T16:30:19.132 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 16:30:18 vm09 systemd[1]: Stopping Ceph osd.4 for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T16:30:19.132 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 16:30:18 vm09 bash[26344]: debug 2026-03-09T16:30:18.910+0000 7f4f58145640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T16:30:19.132 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 16:30:18 vm09 bash[26344]: debug 2026-03-09T16:30:18.910+0000 7f4f58145640 -1 osd.4 785 *** Got signal Terminated *** 2026-03-09T16:30:19.132 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 16:30:18 vm09 bash[26344]: debug 2026-03-09T16:30:18.910+0000 7f4f58145640 -1 osd.4 785 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T16:30:22.882 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:22 vm09 bash[37995]: debug 2026-03-09T16:30:22.510+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:23.382 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:23 vm09 bash[32269]: debug 2026-03-09T16:30:23.106+0000 7efce1d6b640 -1 osd.5 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:29:57.035142+0000 front 2026-03-09T16:29:57.034984+0000 (oldest deadline 2026-03-09T16:30:22.934670+0000) 2026-03-09T16:30:23.882 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:23 vm09 bash[37995]: debug 2026-03-09T16:30:23.534+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:24.233 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 16:30:24 vm09 bash[56720]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-osd-4 2026-03-09T16:30:24.233 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:24 vm09 bash[32269]: debug 2026-03-09T16:30:24.126+0000 7efce1d6b640 -1 osd.5 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:29:57.035142+0000 front 2026-03-09T16:29:57.034984+0000 (oldest deadline 2026-03-09T16:30:22.934670+0000) 2026-03-09T16:30:24.272 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.4.service' 2026-03-09T16:30:24.283 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:30:24.283 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-09T16:30:24.283 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-09T16:30:24.283 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.5 2026-03-09T16:30:24.570 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:24 vm09 systemd[1]: Stopping Ceph osd.5 for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 
2026-03-09T16:30:24.570 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:24 vm09 bash[32269]: debug 2026-03-09T16:30:24.362+0000 7efce5f53640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T16:30:24.570 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:24 vm09 bash[32269]: debug 2026-03-09T16:30:24.362+0000 7efce5f53640 -1 osd.5 785 *** Got signal Terminated *** 2026-03-09T16:30:24.570 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:24 vm09 bash[32269]: debug 2026-03-09T16:30:24.362+0000 7efce5f53640 -1 osd.5 785 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T16:30:24.882 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:24 vm09 bash[37995]: debug 2026-03-09T16:30:24.570+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:25.606 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:25 vm09 bash[32269]: debug 2026-03-09T16:30:25.142+0000 7efce1d6b640 -1 osd.5 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:29:57.035142+0000 front 2026-03-09T16:29:57.034984+0000 (oldest deadline 2026-03-09T16:30:22.934670+0000) 2026-03-09T16:30:25.882 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:25 vm09 bash[37995]: debug 2026-03-09T16:30:25.602+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:25.882 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:25 vm09 bash[44204]: debug 2026-03-09T16:30:25.722+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:26.561 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:26 vm09 bash[32269]: debug 2026-03-09T16:30:26.178+0000 7efce1d6b640 -1 osd.5 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:29:57.035142+0000 front 2026-03-09T16:29:57.034984+0000 (oldest deadline 2026-03-09T16:30:22.934670+0000) 2026-03-09T16:30:26.882 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:26 vm09 bash[37995]: debug 2026-03-09T16:30:26.558+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:26.882 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:26 vm09 bash[44204]: debug 2026-03-09T16:30:26.674+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:27.526 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:27 vm09 bash[32269]: debug 2026-03-09T16:30:27.202+0000 7efce1d6b640 -1 osd.5 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:29:57.035142+0000 front 2026-03-09T16:29:57.034984+0000 (oldest deadline 2026-03-09T16:30:22.934670+0000) 
2026-03-09T16:30:27.882 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:27 vm09 bash[44204]: debug 2026-03-09T16:30:27.682+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:27.882 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:27 vm09 bash[37995]: debug 2026-03-09T16:30:27.526+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:28.553 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:28 vm09 bash[32269]: debug 2026-03-09T16:30:28.230+0000 7efce1d6b640 -1 osd.5 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:29:57.035142+0000 front 2026-03-09T16:29:57.034984+0000 (oldest deadline 2026-03-09T16:30:22.934670+0000) 2026-03-09T16:30:28.882 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:28 vm09 bash[44204]: debug 2026-03-09T16:30:28.646+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:28.882 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:28 vm09 bash[37995]: debug 2026-03-09T16:30:28.550+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:29.507 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:29 vm09 bash[32269]: debug 2026-03-09T16:30:29.222+0000 7efce1d6b640 -1 osd.5 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:29:57.035142+0000 front 2026-03-09T16:29:57.034984+0000 (oldest deadline 2026-03-09T16:30:22.934670+0000) 2026-03-09T16:30:29.507 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 16:30:29 vm09 bash[56902]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-osd-5 2026-03-09T16:30:29.733 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.5.service' 2026-03-09T16:30:29.744 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:30:29.744 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-09T16:30:29.744 INFO:tasks.cephadm.osd.6:Stopping osd.6... 
2026-03-09T16:30:29.744 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.6 2026-03-09T16:30:29.791 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:29 vm09 bash[37995]: debug 2026-03-09T16:30:29.514+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:29.791 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:29 vm09 bash[44204]: debug 2026-03-09T16:30:29.674+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:30.132 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:29 vm09 systemd[1]: Stopping Ceph osd.6 for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T16:30:30.132 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:29 vm09 bash[37995]: debug 2026-03-09T16:30:29.826+0000 7f5904397640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T16:30:30.132 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:29 vm09 bash[37995]: debug 2026-03-09T16:30:29.826+0000 7f5904397640 -1 osd.6 785 *** Got signal Terminated *** 2026-03-09T16:30:30.132 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:29 vm09 bash[37995]: debug 2026-03-09T16:30:29.826+0000 7f5904397640 -1 osd.6 785 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T16:30:30.882 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:30 vm09 bash[37995]: debug 2026-03-09T16:30:30.490+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:30.882 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:30 vm09 bash[44204]: debug 2026-03-09T16:30:30.706+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:31.742 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:31 vm09 bash[37995]: debug 2026-03-09T16:30:31.482+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:32.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:31 vm09 bash[44204]: debug 2026-03-09T16:30:31.742+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:32.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:31 vm09 bash[44204]: debug 2026-03-09T16:30:31.742+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6814 osd.1 since back 2026-03-09T16:30:05.561425+0000 front 2026-03-09T16:30:05.561445+0000 (oldest deadline 2026-03-09T16:30:30.861360+0000) 2026-03-09T16:30:32.882 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 
09 16:30:32 vm09 bash[37995]: debug 2026-03-09T16:30:32.526+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:32.882 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:32 vm09 bash[44204]: debug 2026-03-09T16:30:32.742+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:32.882 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:32 vm09 bash[44204]: debug 2026-03-09T16:30:32.742+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6814 osd.1 since back 2026-03-09T16:30:05.561425+0000 front 2026-03-09T16:30:05.561445+0000 (oldest deadline 2026-03-09T16:30:30.861360+0000) 2026-03-09T16:30:33.882 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:33 vm09 bash[44204]: debug 2026-03-09T16:30:33.758+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:33.882 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:33 vm09 bash[44204]: debug 2026-03-09T16:30:33.758+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6814 osd.1 since back 2026-03-09T16:30:05.561425+0000 front 2026-03-09T16:30:05.561445+0000 (oldest deadline 2026-03-09T16:30:30.861360+0000) 2026-03-09T16:30:33.882 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:33 vm09 bash[37995]: debug 2026-03-09T16:30:33.522+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:33.882 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:33 vm09 bash[37995]: debug 2026-03-09T16:30:33.522+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6814 osd.1 since back 2026-03-09T16:30:06.839131+0000 front 2026-03-09T16:30:06.839314+0000 (oldest deadline 2026-03-09T16:30:32.739025+0000) 2026-03-09T16:30:34.871 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:34 vm09 bash[44204]: debug 2026-03-09T16:30:34.762+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:34.871 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:34 vm09 bash[44204]: debug 2026-03-09T16:30:34.762+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6814 osd.1 since back 2026-03-09T16:30:05.561425+0000 front 2026-03-09T16:30:05.561445+0000 (oldest deadline 2026-03-09T16:30:30.861360+0000) 2026-03-09T16:30:34.871 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:34 vm09 bash[44204]: debug 2026-03-09T16:30:34.762+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6822 osd.2 since back 2026-03-09T16:30:10.862164+0000 front 2026-03-09T16:30:10.861886+0000 (oldest deadline 2026-03-09T16:30:33.761614+0000) 2026-03-09T16:30:34.871 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:34 vm09 bash[37995]: debug 2026-03-09T16:30:34.554+0000 7f59009b0640 -1 
osd.6 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:01.039019+0000 front 2026-03-09T16:30:01.039114+0000 (oldest deadline 2026-03-09T16:30:22.138501+0000) 2026-03-09T16:30:34.871 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:34 vm09 bash[37995]: debug 2026-03-09T16:30:34.554+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6814 osd.1 since back 2026-03-09T16:30:06.839131+0000 front 2026-03-09T16:30:06.839314+0000 (oldest deadline 2026-03-09T16:30:32.739025+0000) 2026-03-09T16:30:34.871 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:34 vm09 bash[37995]: debug 2026-03-09T16:30:34.554+0000 7f59009b0640 -1 osd.6 785 heartbeat_check: no reply from 192.168.123.101:6822 osd.2 since back 2026-03-09T16:30:12.739610+0000 front 2026-03-09T16:30:12.739592+0000 (oldest deadline 2026-03-09T16:30:33.839270+0000) 2026-03-09T16:30:35.132 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 16:30:34 vm09 bash[57085]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-osd-6 2026-03-09T16:30:35.189 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.6.service' 2026-03-09T16:30:35.239 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:30:35.239 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-09T16:30:35.239 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-09T16:30:35.240 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.7 2026-03-09T16:30:35.632 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:35 vm09 systemd[1]: Stopping Ceph osd.7 for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T16:30:35.632 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:35 vm09 bash[44204]: debug 2026-03-09T16:30:35.322+0000 7f2846ce9640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T16:30:35.632 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:35 vm09 bash[44204]: debug 2026-03-09T16:30:35.322+0000 7f2846ce9640 -1 osd.7 785 *** Got signal Terminated *** 2026-03-09T16:30:35.632 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:35 vm09 bash[44204]: debug 2026-03-09T16:30:35.322+0000 7f2846ce9640 -1 osd.7 785 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T16:30:36.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:35 vm09 bash[44204]: debug 2026-03-09T16:30:35.730+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:36.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:35 vm09 bash[44204]: debug 2026-03-09T16:30:35.730+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6814 osd.1 since back 2026-03-09T16:30:05.561425+0000 front 2026-03-09T16:30:05.561445+0000 (oldest deadline 2026-03-09T16:30:30.861360+0000) 2026-03-09T16:30:36.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:35 vm09 bash[44204]: debug 2026-03-09T16:30:35.730+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6822 osd.2 since back 2026-03-09T16:30:10.862164+0000 front 2026-03-09T16:30:10.861886+0000 (oldest deadline 2026-03-09T16:30:33.761614+0000) 2026-03-09T16:30:37.132 
INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:36 vm09 bash[44204]: debug 2026-03-09T16:30:36.778+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:37.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:36 vm09 bash[44204]: debug 2026-03-09T16:30:36.778+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6814 osd.1 since back 2026-03-09T16:30:05.561425+0000 front 2026-03-09T16:30:05.561445+0000 (oldest deadline 2026-03-09T16:30:30.861360+0000) 2026-03-09T16:30:37.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:36 vm09 bash[44204]: debug 2026-03-09T16:30:36.778+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6822 osd.2 since back 2026-03-09T16:30:10.862164+0000 front 2026-03-09T16:30:10.861886+0000 (oldest deadline 2026-03-09T16:30:33.761614+0000) 2026-03-09T16:30:38.133 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:37 vm09 bash[44204]: debug 2026-03-09T16:30:37.798+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:38.133 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:37 vm09 bash[44204]: debug 2026-03-09T16:30:37.798+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6814 osd.1 since back 2026-03-09T16:30:05.561425+0000 front 2026-03-09T16:30:05.561445+0000 (oldest deadline 2026-03-09T16:30:30.861360+0000) 2026-03-09T16:30:38.133 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:37 vm09 bash[44204]: debug 2026-03-09T16:30:37.798+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6822 osd.2 since back 2026-03-09T16:30:10.862164+0000 front 2026-03-09T16:30:10.861886+0000 (oldest deadline 2026-03-09T16:30:33.761614+0000) 2026-03-09T16:30:39.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:38 vm09 bash[44204]: debug 2026-03-09T16:30:38.818+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:39.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:38 vm09 bash[44204]: debug 2026-03-09T16:30:38.818+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6814 osd.1 since back 2026-03-09T16:30:05.561425+0000 front 2026-03-09T16:30:05.561445+0000 (oldest deadline 2026-03-09T16:30:30.861360+0000) 2026-03-09T16:30:39.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:38 vm09 bash[44204]: debug 2026-03-09T16:30:38.818+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6822 osd.2 since back 2026-03-09T16:30:10.862164+0000 front 2026-03-09T16:30:10.861886+0000 (oldest deadline 2026-03-09T16:30:33.761614+0000) 2026-03-09T16:30:40.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:39 vm09 bash[44204]: debug 2026-03-09T16:30:39.830+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6806 osd.0 since back 2026-03-09T16:30:00.261754+0000 front 2026-03-09T16:30:00.261536+0000 (oldest deadline 2026-03-09T16:30:25.561101+0000) 2026-03-09T16:30:40.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:39 vm09 bash[44204]: debug 
2026-03-09T16:30:39.830+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6814 osd.1 since back 2026-03-09T16:30:05.561425+0000 front 2026-03-09T16:30:05.561445+0000 (oldest deadline 2026-03-09T16:30:30.861360+0000) 2026-03-09T16:30:40.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:39 vm09 bash[44204]: debug 2026-03-09T16:30:39.830+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6822 osd.2 since back 2026-03-09T16:30:10.862164+0000 front 2026-03-09T16:30:10.861886+0000 (oldest deadline 2026-03-09T16:30:33.761614+0000) 2026-03-09T16:30:40.132 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:39 vm09 bash[44204]: debug 2026-03-09T16:30:39.830+0000 7f2842b01640 -1 osd.7 785 heartbeat_check: no reply from 192.168.123.101:6830 osd.3 since back 2026-03-09T16:30:13.762058+0000 front 2026-03-09T16:30:13.762148+0000 (oldest deadline 2026-03-09T16:30:39.661841+0000) 2026-03-09T16:30:40.632 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 16:30:40 vm09 bash[57266]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-osd-7 2026-03-09T16:30:40.687 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@osd.7.service' 2026-03-09T16:30:40.696 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:30:40.696 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-09T16:30:40.696 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a... 2026-03-09T16:30:40.697 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@rgw.foo.a 2026-03-09T16:30:41.173 INFO:journalctl@ceph.rgw.foo.a.vm01.stdout:Mar 09 16:30:40 vm01 systemd[1]: Stopping Ceph rgw.foo.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T16:30:41.173 INFO:journalctl@ceph.rgw.foo.a.vm01.stdout:Mar 09 16:30:40 vm01 bash[53549]: debug 2026-03-09T16:30:40.743+0000 7ffb34f31640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T16:30:41.173 INFO:journalctl@ceph.rgw.foo.a.vm01.stdout:Mar 09 16:30:40 vm01 bash[53549]: debug 2026-03-09T16:30:40.743+0000 7ffb387a0980 -1 shutting down 2026-03-09T16:30:50.814 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@rgw.foo.a.service' 2026-03-09T16:30:50.826 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:30:50.826 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a 2026-03-09T16:30:50.826 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 
2026-03-09T16:30:50.827 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@prometheus.a 2026-03-09T16:30:50.932 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@prometheus.a.service' 2026-03-09T16:30:50.942 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T16:30:50.942 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-09T16:30:50.942 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm rm-cluster --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 --force --keep-logs 2026-03-09T16:30:51.023 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T16:30:55.888 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 16:30:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:30:55.888 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 16:30:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:30:56.156 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 16:30:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:30:56.156 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:30:56.156 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 16:30:55 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:30:56.157 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T16:30:56.424 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: Stopping Ceph alertmanager.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T16:30:56.424 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 16:30:56 vm01 bash[56700]: ts=2026-03-09T16:30:56.267Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-09T16:30:56.424 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 16:30:56 vm01 bash[133207]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-alertmanager-a 2026-03-09T16:30:56.424 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@alertmanager.a.service: Deactivated successfully. 2026-03-09T16:30:56.424 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: Stopped Ceph alertmanager.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T16:30:56.748 INFO:journalctl@ceph.alertmanager.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:30:56.748 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:30:56.748 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: Stopping Ceph node-exporter.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T16:30:56.748 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 16:30:56 vm01 bash[133337]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-node-exporter-a 2026-03-09T16:30:56.748 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-09T16:30:56.748 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-09T16:30:56.748 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: Stopped Ceph node-exporter.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T16:30:57.008 INFO:journalctl@ceph.node-exporter.a.vm01.stdout:Mar 09 16:30:56 vm01 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T16:30:58.239 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm rm-cluster --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 --force --keep-logs 2026-03-09T16:30:58.320 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 397fadc0-1bcf-11f1-8481-edc1430c2c03 2026-03-09T16:31:03.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:02 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:03.132 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:02 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:03.132 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:02 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:03.477 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:03.477 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:03.477 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:03.477 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T16:31:03.477 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:03.477 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:03.771 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:03.771 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:03.771 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:04.132 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:04.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:04.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: Stopping Ceph iscsi.iscsi.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 
2026-03-09T16:31:04.132 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:03 vm09 bash[48403]: debug Shutdown received 2026-03-09T16:31:04.132 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:03 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:14.163 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:13 vm09 bash[57757]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-iscsi-iscsi-a 2026-03-09T16:31:14.163 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:13 vm09 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a 2026-03-09T16:31:14.163 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:13 vm09 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@iscsi.iscsi.a.service: Failed with result 'exit-code'. 2026-03-09T16:31:14.163 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:13 vm09 systemd[1]: Stopped Ceph iscsi.iscsi.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03. 2026-03-09T16:31:14.163 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:14.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:14.430 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:14.430 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:14.430 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:14.430 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:14.430 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:14.430 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:14.430 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: Stopping Ceph grafana.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03... 2026-03-09T16:31:14.430 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T16:31:14.687 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T16:31:14.687 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:14 vm09 bash[50619]: logger=server t=2026-03-09T16:31:14.424798546Z level=info msg="Shutdown started" reason="System signal: terminated"
2026-03-09T16:31:14.687 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:14 vm09 bash[50619]: logger=grafana-apiserver t=2026-03-09T16:31:14.424924592Z level=info msg="StorageObjectCountTracker pruner is exiting"
2026-03-09T16:31:14.687 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:14 vm09 bash[50619]: logger=tracing t=2026-03-09T16:31:14.424946834Z level=info msg="Closing tracing"
2026-03-09T16:31:14.687 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:14 vm09 bash[50619]: logger=ticker t=2026-03-09T16:31:14.425095242Z level=info msg=stopped last_tick=2026-03-09T16:31:10Z
2026-03-09T16:31:14.687 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:14 vm09 bash[57921]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-grafana-a
2026-03-09T16:31:14.687 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@grafana.a.service: Deactivated successfully.
2026-03-09T16:31:14.687 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: Stopped Ceph grafana.a for 397fadc0-1bcf-11f1-8481-edc1430c2c03.
2026-03-09T16:31:14.687 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T16:31:14.963 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T16:31:14.963 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:14 vm09 systemd[1]: Stopping Ceph node-exporter.b for 397fadc0-1bcf-11f1-8481-edc1430c2c03...
2026-03-09T16:31:15.224 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:15 vm09 bash[58086]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03-node-exporter-b
2026-03-09T16:31:15.224 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:15 vm09 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@node-exporter.b.service: Main process exited, code=exited, status=143/n/a
2026-03-09T16:31:15.224 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:15 vm09 systemd[1]: ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@node-exporter.b.service: Failed with result 'exit-code'.
2026-03-09T16:31:15.224 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:15 vm09 systemd[1]: Stopped Ceph node-exporter.b for 397fadc0-1bcf-11f1-8481-edc1430c2c03.
2026-03-09T16:31:15.577 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 16:31:15 vm09 systemd[1]: /etc/systemd/system/ceph-397fadc0-1bcf-11f1-8481-edc1430c2c03@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T16:31:15.896 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T16:31:15.903 INFO:teuthology.orchestra.run.vm01.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory
2026-03-09T16:31:15.904 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T16:31:15.904 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T16:31:15.911 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-09T16:31:15.912 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/533/remote/vm01/crash
2026-03-09T16:31:15.912 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/crash -- .
2026-03-09T16:31:15.953 INFO:teuthology.orchestra.run.vm01.stderr:tar: /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/crash: Cannot open: No such file or directory
2026-03-09T16:31:15.954 INFO:teuthology.orchestra.run.vm01.stderr:tar: Error is not recoverable: exiting now
2026-03-09T16:31:15.954 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/533/remote/vm09/crash
2026-03-09T16:31:15.954 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/crash -- .
2026-03-09T16:31:15.962 INFO:teuthology.orchestra.run.vm09.stderr:tar: /var/lib/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/crash: Cannot open: No such file or directory
2026-03-09T16:31:15.962 INFO:teuthology.orchestra.run.vm09.stderr:tar: Error is not recoverable: exiting now
2026-03-09T16:31:15.962 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-09T16:31:15.963 DEBUG:teuthology.orchestra.run.vm01:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'reached quota' | egrep -v 'but it is still running' | egrep -v 'overall HEALTH_' | egrep -v '\(POOL_FULL\)' | egrep -v '\(SMALLER_PGP_NUM\)' | egrep -v '\(CACHE_POOL_NO_HIT_SET\)' | egrep -v '\(CACHE_POOL_NEAR_FULL\)' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v '\(PG_AVAILABILITY\)' | egrep -v '\(PG_DEGRADED\)' | egrep -v CEPHADM_STRAY_DAEMON | head -n 1
2026-03-09T16:31:16.013 INFO:tasks.cephadm:Compressing logs...
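[editor's sketch] The "Checking cluster log for badness" step above filters the archived ceph.log for [ERR]/[WRN]/[SEC] entries that match the job's log-only-match patterns and none of its log-ignorelist patterns. A minimal Python sketch of the same filter, for re-running it locally against an archived log; it is not the teuthology implementation, and the log path in the example is hypothetical:

    import re

    # Patterns copied from this job's overrides (log-only-match / log-ignorelist).
    LOG_ONLY_MATCH = [r"CEPHADM_"]
    IGNORELIST = [
        r"\(MDS_ALL_DOWN\)", r"\(MDS_UP_LESS_THAN_MAX\)", r"reached quota",
        r"but it is still running", r"overall HEALTH_", r"\(POOL_FULL\)",
        r"\(SMALLER_PGP_NUM\)", r"\(CACHE_POOL_NO_HIT_SET\)",
        r"\(CACHE_POOL_NEAR_FULL\)", r"\(POOL_APP_NOT_ENABLED\)",
        r"\(PG_AVAILABILITY\)", r"\(PG_DEGRADED\)", r"CEPHADM_STRAY_DAEMON",
    ]

    def first_bad_line(path):
        """Return the first ERR/WRN/SEC line that matches log-only-match but
        no ignorelist pattern, or None if the cluster log looks clean."""
        with open(path, errors="replace") as f:
            for line in f:
                if not re.search(r"\[ERR\]|\[WRN\]|\[SEC\]", line):
                    continue
                if not any(re.search(p, line) for p in LOG_ONLY_MATCH):
                    continue
                if any(re.search(p, line) for p in IGNORELIST):
                    continue
                return line.rstrip()
        return None

    if __name__ == "__main__":
        # Hypothetical path; point it at the ceph.log pulled from the archive.
        print(first_bad_line("ceph.log"))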
2026-03-09T16:31:16.013 DEBUG:teuthology.orchestra.run.vm01:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T16:31:16.054 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T16:31:16.061 INFO:teuthology.orchestra.run.vm01.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T16:31:16.061 INFO:teuthology.orchestra.run.vm01.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T16:31:16.061 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.3.log 2026-03-09T16:31:16.062 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.log 2026-03-09T16:31:16.062 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mon.c.log 2026-03-09T16:31:16.062 INFO:teuthology.orchestra.run.vm09.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T16:31:16.064 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T16:31:16.064 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mgr.x.log 2026-03-09T16:31:16.064 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.log 2026-03-09T16:31:16.066 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.log: 92.6% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T16:31:16.066 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.1.log 2026-03-09T16:31:16.066 INFO:teuthology.orchestra.run.vm01.stderr: 93.4% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.log.gz 2026-03-09T16:31:16.067 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mgr.y.log 2026-03-09T16:31:16.071 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mgr.x.log: 93.1% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mgr.x.log.gz 2026-03-09T16:31:16.071 INFO:teuthology.orchestra.run.vm09.stderr: 90.7% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T16:31:16.071 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mon.b.log 2026-03-09T16:31:16.072 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.5.log 2026-03-09T16:31:16.072 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mon.b.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.7.log 2026-03-09T16:31:16.072 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.6.log 
2026-03-09T16:31:16.075 INFO:teuthology.orchestra.run.vm09.stderr: 88.1% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.log.gz 2026-03-09T16:31:16.076 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.audit.log 2026-03-09T16:31:16.077 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.1.log: /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mon.a.log 2026-03-09T16:31:16.086 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mgr.y.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.2.log 2026-03-09T16:31:16.086 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.7.log: /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-volume.log 2026-03-09T16:31:16.102 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.audit.log 2026-03-09T16:31:16.106 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.audit.log: 92.5% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.audit.log.gz 2026-03-09T16:31:16.106 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-volume.log 2026-03-09T16:31:16.110 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.cephadm.log 2026-03-09T16:31:16.118 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-client.rgw.foo.a.log 2026-03-09T16:31:16.122 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.4.log 2026-03-09T16:31:16.126 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.cephadm.log: 79.7% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.cephadm.log.gz 2026-03-09T16:31:16.126 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/tcmu-runner.log 2026-03-09T16:31:16.134 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.cephadm.log 2026-03-09T16:31:16.138 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.4.log: /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/tcmu-runner.log: 73.0% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/tcmu-runner.log.gz 2026-03-09T16:31:16.142 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-client.rgw.foo.a.log: gzip -5 --verbose -- 
/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.0.log 2026-03-09T16:31:16.142 INFO:teuthology.orchestra.run.vm09.stderr: 95.8% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-volume.log.gz 2026-03-09T16:31:16.149 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.cephadm.log: 95.3% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.audit.log.gz 2026-03-09T16:31:16.150 INFO:teuthology.orchestra.run.vm01.stderr: 88.3% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph.cephadm.log.gz 2026-03-09T16:31:16.177 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.0.log: 95.8% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-volume.log.gz 2026-03-09T16:31:16.253 INFO:teuthology.orchestra.run.vm01.stderr: 94.6% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-client.rgw.foo.a.log.gz 2026-03-09T16:31:16.989 INFO:teuthology.orchestra.run.vm01.stderr: 91.5% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mgr.y.log.gz 2026-03-09T16:31:17.812 INFO:teuthology.orchestra.run.vm09.stderr: 93.0% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mon.b.log.gz 2026-03-09T16:31:18.251 INFO:teuthology.orchestra.run.vm01.stderr: 92.7% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mon.c.log.gz 2026-03-09T16:31:20.566 INFO:teuthology.orchestra.run.vm01.stderr: 91.9% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-mon.a.log.gz 2026-03-09T16:31:26.222 INFO:teuthology.orchestra.run.vm09.stderr: 94.7% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.6.log.gz 2026-03-09T16:31:26.600 INFO:teuthology.orchestra.run.vm09.stderr: 94.7% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.4.log.gz 2026-03-09T16:31:26.717 INFO:teuthology.orchestra.run.vm09.stderr: 94.7% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.7.log.gz 2026-03-09T16:31:26.780 INFO:teuthology.orchestra.run.vm09.stderr: 94.7% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.5.log.gz 2026-03-09T16:31:26.781 INFO:teuthology.orchestra.run.vm09.stderr: 2026-03-09T16:31:26.781 INFO:teuthology.orchestra.run.vm09.stderr:real 0m10.724s 2026-03-09T16:31:26.781 INFO:teuthology.orchestra.run.vm09.stderr:user 0m20.031s 2026-03-09T16:31:26.781 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m1.290s 2026-03-09T16:31:27.395 INFO:teuthology.orchestra.run.vm01.stderr: 94.6% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.2.log.gz 2026-03-09T16:31:27.414 INFO:teuthology.orchestra.run.vm01.stderr: 94.7% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.1.log.gz 2026-03-09T16:31:27.671 INFO:teuthology.orchestra.run.vm01.stderr: 94.7% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.3.log.gz 2026-03-09T16:31:27.829 INFO:teuthology.orchestra.run.vm01.stderr: 94.8% -- replaced with /var/log/ceph/397fadc0-1bcf-11f1-8481-edc1430c2c03/ceph-osd.0.log.gz 2026-03-09T16:31:27.830 INFO:teuthology.orchestra.run.vm01.stderr: 2026-03-09T16:31:27.830 INFO:teuthology.orchestra.run.vm01.stderr:real 0m11.774s 2026-03-09T16:31:27.830 INFO:teuthology.orchestra.run.vm01.stderr:user 0m21.696s 2026-03-09T16:31:27.830 INFO:teuthology.orchestra.run.vm01.stderr:sys 0m1.668s 
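[editor's sketch] The compression step above runs find | xargs to gzip every *.log under /var/log/ceph in parallel at level 5. A small Python equivalent, useful when post-processing a copy of the logs off the test node; this is only an illustration under that assumption, not teuthology's code, and the root path in the usage line is an example:

    import gzip, os, shutil
    from concurrent.futures import ProcessPoolExecutor

    def gzip_file(path, level=5):
        """Compress path to path.gz and remove the original, like `gzip -5`."""
        with open(path, "rb") as src, gzip.open(path + ".gz", "wb", compresslevel=level) as dst:
            shutil.copyfileobj(src, dst)
        os.remove(path)
        return path + ".gz"

    def compress_logs(root):
        # Collect every *.log under root, then compress one file per worker.
        targets = [os.path.join(d, name)
                   for d, _, files in os.walk(root)
                   for name in files if name.endswith(".log")]
        with ProcessPoolExecutor() as pool:
            return list(pool.map(gzip_file, targets))

    if __name__ == "__main__":
        compress_logs("./logs")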
2026-03-09T16:31:27.830 INFO:tasks.cephadm:Archiving logs...
2026-03-09T16:31:27.830 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/533/remote/vm01/log
2026-03-09T16:31:27.830 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-09T16:31:28.703 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/533/remote/vm09/log
2026-03-09T16:31:28.704 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-09T16:31:29.455 INFO:tasks.cephadm:Removing cluster...
2026-03-09T16:31:29.455 DEBUG:teuthology.orchestra.run.vm01:> sudo cephadm rm-cluster --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 --force
2026-03-09T16:31:29.548 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: 397fadc0-1bcf-11f1-8481-edc1430c2c03
2026-03-09T16:31:30.810 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm rm-cluster --fsid 397fadc0-1bcf-11f1-8481-edc1430c2c03 --force
2026-03-09T16:31:30.900 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 397fadc0-1bcf-11f1-8481-edc1430c2c03
2026-03-09T16:31:32.201 INFO:tasks.cephadm:Teardown complete
2026-03-09T16:31:32.201 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-09T16:31:32.203 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-09T16:31:32.204 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-09T16:31:32.205 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-09T16:31:32.220 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-09T16:31:32.220 DEBUG:teuthology.orchestra.run.vm01:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-09T16:31:32.225 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-09T16:31:32.225 DEBUG:teuthology.orchestra.run.vm09:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-09T16:31:32.293 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T16:31:32.295 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T16:31:32.493 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:32.494 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:32.506 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:32.507 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:31:32.699 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:32.699 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T16:31:32.700 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T16:31:32.700 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:32.723 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T16:31:32.724 INFO:teuthology.orchestra.run.vm01.stdout: ceph* 2026-03-09T16:31:32.742 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:32.742 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T16:31:32.743 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T16:31:32.743 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:32.761 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T16:31:32.763 INFO:teuthology.orchestra.run.vm09.stdout: ceph* 2026-03-09T16:31:33.027 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T16:31:33.027 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-09T16:31:33.209 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T16:31:33.209 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-09T16:31:33.244 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-09T16:31:33.247 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:33.254 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 
80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-09T16:31:33.256 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:34.517 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:34.553 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:31:34.576 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:34.612 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:34.761 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:34.762 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:31:34.826 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:34.826 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:34.960 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:34.960 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T16:31:34.960 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T16:31:34.961 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:34.971 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T16:31:34.972 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm* cephadm* 2026-03-09T16:31:35.004 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:35.004 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T16:31:35.004 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T16:31:35.004 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:35.014 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T16:31:35.014 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm* cephadm* 2026-03-09T16:31:35.143 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T16:31:35.143 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 1775 kB disk space will be freed. 2026-03-09T16:31:35.181 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T16:31:35.182 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 1775 kB disk space will be freed. 2026-03-09T16:31:35.188 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 
60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.) 2026-03-09T16:31:35.191 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:35.210 INFO:teuthology.orchestra.run.vm09.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:35.223 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.) 2026-03-09T16:31:35.226 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:35.241 INFO:teuthology.orchestra.run.vm09.stdout:Looking for files to backup/remove ... 2026-03-09T16:31:35.242 INFO:teuthology.orchestra.run.vm09.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*. 2026-03-09T16:31:35.244 INFO:teuthology.orchestra.run.vm09.stdout:Removing user `cephadm' ... 2026-03-09T16:31:35.244 INFO:teuthology.orchestra.run.vm09.stdout:Warning: group `nogroup' has no more members. 2026-03-09T16:31:35.248 INFO:teuthology.orchestra.run.vm01.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:35.255 INFO:teuthology.orchestra.run.vm09.stdout:Done. 2026-03-09T16:31:35.279 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T16:31:35.280 INFO:teuthology.orchestra.run.vm01.stdout:Looking for files to backup/remove ... 2026-03-09T16:31:35.281 INFO:teuthology.orchestra.run.vm01.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*. 2026-03-09T16:31:35.284 INFO:teuthology.orchestra.run.vm01.stdout:Removing user `cephadm' ... 2026-03-09T16:31:35.284 INFO:teuthology.orchestra.run.vm01.stdout:Warning: group `nogroup' has no more members. 2026-03-09T16:31:35.296 INFO:teuthology.orchestra.run.vm01.stdout:Done. 2026-03-09T16:31:35.321 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T16:31:35.385 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T16:31:35.387 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T16:31:35.420 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T16:31:35.422 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:36.561 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:36.596 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:36.597 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:36.633 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:31:36.817 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:36.818 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:36.860 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:36.861 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:31:36.987 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:36.987 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T16:31:36.987 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T16:31:36.987 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:36.999 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T16:31:37.000 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds* 2026-03-09T16:31:37.059 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:37.060 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T16:31:37.060 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T16:31:37.060 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:37.071 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T16:31:37.071 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds* 2026-03-09T16:31:37.185 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T16:31:37.185 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 7437 kB disk space will be freed. 2026-03-09T16:31:37.227 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 
15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T16:31:37.229 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:37.250 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T16:31:37.250 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 7437 kB disk space will be freed. 2026-03-09T16:31:37.294 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T16:31:37.296 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:37.654 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T16:31:37.752 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T16:31:37.754 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:37.761 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T16:31:37.861 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 
2026-03-09T16:31:37.864 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:39.296 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:39.336 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:39.435 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:39.473 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:31:39.540 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:39.540 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:39.654 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:39.655 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:31:39.728 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:39.728 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0 2026-03-09T16:31:39.729 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc 2026-03-09T16:31:39.729 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot 2026-03-09T16:31:39.729 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:39.729 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:39.729 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:39.729 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:39.729 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify 2026-03-09T16:31:39.729 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-09T16:31:39.729 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T16:31:39.729 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T16:31:39.730 INFO:teuthology.orchestra.run.vm01.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket 2026-03-09T16:31:39.730 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T16:31:39.730 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev 2026-03-09T16:31:39.730 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T16:31:39.746 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T16:31:39.746 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local* 2026-03-09T16:31:39.747 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-k8sevents* 2026-03-09T16:31:39.857 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout: sg3-utils-udev 2026-03-09T16:31:39.858 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:39.872 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T16:31:39.872 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local* 2026-03-09T16:31:39.873 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-k8sevents* 2026-03-09T16:31:39.938 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded. 2026-03-09T16:31:39.938 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 165 MB disk space will be freed. 2026-03-09T16:31:39.977 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 
95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T16:31:39.980 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:39.996 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:40.025 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:40.048 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded. 2026-03-09T16:31:40.048 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 165 MB disk space will be freed. 2026-03-09T16:31:40.072 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:40.087 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T16:31:40.089 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:40.101 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:40.128 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:40.168 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:40.588 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117937 files and directories currently installed.) 2026-03-09T16:31:40.591 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:40.653 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 
117937 files and directories currently installed.) 2026-03-09T16:31:40.656 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:42.174 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:42.209 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:42.223 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:42.259 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:31:42.426 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:42.427 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:42.467 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:42.468 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:31:42.583 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:42.584 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:42.584 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T16:31:42.585 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T16:31:42.602 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T16:31:42.603 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw* 2026-03-09T16:31:42.643 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:42.643 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:42.643 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T16:31:42.644 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:42.654 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T16:31:42.655 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw* 2026-03-09T16:31:42.793 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T16:31:42.793 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 472 MB disk space will be freed. 2026-03-09T16:31:42.836 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T16:31:42.836 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 472 MB disk space will be freed. 2026-03-09T16:31:42.841 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 
117937 files and directories currently installed.)
2026-03-09T16:31:42.844 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:42.879 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-09T16:31:42.881 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:42.916 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:42.942 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:43.354 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:43.386 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:43.792 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:43.795 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:44.240 INFO:teuthology.orchestra.run.vm09.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:44.260 INFO:teuthology.orchestra.run.vm01.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:44.653 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:44.676 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:44.677 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:44.709 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:45.109 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T16:31:45.147 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T16:31:45.152 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T16:31:45.191 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T16:31:45.217 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ...
117455 files and directories currently installed.)
2026-03-09T16:31:45.219 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:45.261 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-09T16:31:45.262 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:45.876 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:45.892 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:46.316 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:46.338 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:46.743 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:46.811 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:47.184 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:47.294 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:48.692 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T16:31:48.727 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T16:31:48.791 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T16:31:48.824 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T16:31:48.928 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T16:31:48.929 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T16:31:49.034 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T16:31:49.035 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T16:31:49.080 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T16:31:49.081 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T16:31:49.094 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED:
2026-03-09T16:31:49.095 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse*
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T16:31:49.192 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T16:31:49.193 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T16:31:49.193 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T16:31:49.193 INFO:teuthology.orchestra.run.vm01.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T16:31:49.193 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T16:31:49.193 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T16:31:49.193 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T16:31:49.203 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED:
2026-03-09T16:31:49.204 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse*
2026-03-09T16:31:49.275 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T16:31:49.275 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-09T16:31:49.314 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ...
117443 files and directories currently installed.)
2026-03-09T16:31:49.316 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:49.373 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T16:31:49.373 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-09T16:31:49.412 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-09T16:31:49.415 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:49.752 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T16:31:49.842 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-09T16:31:49.844 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:49.864 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T16:31:49.961 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-09T16:31:49.963 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:51.332 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T16:31:51.365 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T16:31:51.441 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:51.481 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:51.566 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:51.566 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:31:51.695 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:51.696 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:51.705 INFO:teuthology.orchestra.run.vm09.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-09T16:31:51.705 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:51.705 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:51.705 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T16:31:51.706 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:51.731 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:51.731 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:51.763 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 
2026-03-09T16:31:51.845 INFO:teuthology.orchestra.run.vm01.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-09T16:31:51.845 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:51.845 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:51.845 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T16:31:51.846 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:51.864 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:51.864 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:51.896 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:51.974 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:51.974 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:31:52.100 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:52.101 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 
2026-03-09T16:31:52.112 INFO:teuthology.orchestra.run.vm09.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-09T16:31:52.112 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:52.112 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:52.112 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T16:31:52.112 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T16:31:52.113 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:52.138 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:52.138 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:52.170 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 
2026-03-09T16:31:52.252 INFO:teuthology.orchestra.run.vm01.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-09T16:31:52.252 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:52.252 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:52.252 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T16:31:52.253 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:52.271 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:52.271 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:52.303 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:52.374 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:52.374 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 
2026-03-09T16:31:52.507 INFO:teuthology.orchestra.run.vm09.stdout:Package 'radosgw' is not installed, so not removed 2026-03-09T16:31:52.507 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:52.507 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:52.507 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T16:31:52.508 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:52.509 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:52.510 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:52.531 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:52.531 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:52.563 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 
2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout:Package 'radosgw' is not installed, so not removed 2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T16:31:52.655 INFO:teuthology.orchestra.run.vm01.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T16:31:52.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T16:31:52.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T16:31:52.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T16:31:52.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T16:31:52.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T16:31:52.656 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T16:31:52.656 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:52.674 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:52.675 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:52.707 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:52.767 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:52.768 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 
2026-03-09T16:31:52.911 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:52.911 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:52.911 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:52.911 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:52.911 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip 2026-03-09T16:31:52.912 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:52.917 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:52.918 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 
2026-03-09T16:31:52.925 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T16:31:52.925 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-09T16:31:53.063 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:53.063 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:53.063 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:53.063 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T16:31:53.064 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:53.076 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T16:31:53.076 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-09T16:31:53.117 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-09T16:31:53.117 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-09T16:31:53.155 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 
117434 files and directories currently installed.)
2026-03-09T16:31:53.157 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:53.168 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:53.178 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:53.252 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-09T16:31:53.252 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-09T16:31:53.299 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-09T16:31:53.301 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:53.315 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:53.328 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T16:31:54.279 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T16:31:54.313 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T16:31:54.455 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T16:31:54.489 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T16:31:54.509 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T16:31:54.509 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T16:31:54.674 INFO:teuthology.orchestra.run.vm09.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-09T16:31:54.674 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:54.674 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:54.674 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:54.674 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:54.674 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:54.674 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip 2026-03-09T16:31:54.675 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:54.695 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:54.695 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:54.708 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:54.709 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:54.727 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 
2026-03-09T16:31:54.862 INFO:teuthology.orchestra.run.vm01.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-09T16:31:54.862 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:54.862 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:54.862 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:54.862 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T16:31:54.863 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:54.884 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:54.885 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:54.919 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:54.934 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:54.934 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 
2026-03-09T16:31:55.058 INFO:teuthology.orchestra.run.vm09.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-09T16:31:55.058 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:55.058 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:55.058 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:55.058 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:55.058 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:55.058 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:55.058 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:55.058 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip 2026-03-09T16:31:55.059 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:55.075 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:55.076 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:55.107 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:31:55.128 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:55.129 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 
2026-03-09T16:31:55.278 INFO:teuthology.orchestra.run.vm01.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T16:31:55.279 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:55.299 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:55.300 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:55.308 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:55.308 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:31:55.333 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 
2026-03-09T16:31:55.458 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip 2026-03-09T16:31:55.459 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:55.467 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T16:31:55.467 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd* 2026-03-09T16:31:55.525 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:55.526 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:55.635 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T16:31:55.635 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-09T16:31:55.676 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 
40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-09T16:31:55.679 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:55.710 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:55.710 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:55.710 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:55.710 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:55.710 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T16:31:55.711 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:55.723 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T16:31:55.723 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd* 2026-03-09T16:31:55.901 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 
2026-03-09T16:31:55.901 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-09T16:31:55.950 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-09T16:31:55.953 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:56.809 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:56.842 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:31:57.036 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:57.036 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:31:57.088 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:57.128 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:57.179 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:57.179 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:57.179 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:57.180 
INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:57.180 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip 2026-03-09T16:31:57.181 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:57.191 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T16:31:57.192 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-dev* libcephfs2* 2026-03-09T16:31:57.334 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:57.335 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:57.354 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T16:31:57.354 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-09T16:31:57.395 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-09T16:31:57.397 INFO:teuthology.orchestra.run.vm09.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:57.408 INFO:teuthology.orchestra.run.vm09.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:57.432 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 
2026-03-09T16:31:57.479 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:57.479 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:57.479 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:57.479 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:57.479 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:57.479 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:57.480 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:57.480 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:57.480 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:57.480 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:57.482 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:57.482 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:57.482 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:57.482 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:57.482 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:57.482 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:57.482 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:57.482 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:57.482 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T16:31:57.482 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:57.497 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T16:31:57.498 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-dev* libcephfs2* 2026-03-09T16:31:57.679 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T16:31:57.680 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-09T16:31:57.719 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 
70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-09T16:31:57.721 INFO:teuthology.orchestra.run.vm01.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:57.733 INFO:teuthology.orchestra.run.vm01.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:57.757 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T16:31:58.513 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:58.547 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:31:58.738 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:58.739 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:31:58.859 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:58.871 INFO:teuthology.orchestra.run.vm09.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-09T16:31:58.871 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:58.871 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:58.871 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:58.871 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip 2026-03-09T16:31:58.872 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:58.891 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:58.891 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:58.894 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:58.923 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:31:59.098 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:59.099 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:59.112 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:31:59.113 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 
2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:59.260 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T16:31:59.261 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T16:31:59.261 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:59.283 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:31:59.283 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:31:59.305 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:59.305 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:59.305 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T16:31:59.305 INFO:teuthology.orchestra.run.vm09.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T16:31:59.305 INFO:teuthology.orchestra.run.vm09.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout: smartmontools socat unzip xmlstarlet 
zip 2026-03-09T16:31:59.306 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:59.316 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:31:59.321 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T16:31:59.321 INFO:teuthology.orchestra.run.vm09.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-09T16:31:59.322 INFO:teuthology.orchestra.run.vm09.stdout: qemu-block-extra* rbd-fuse* 2026-03-09T16:31:59.501 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T16:31:59.501 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-09T16:31:59.512 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:31:59.513 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:31:59.544 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-09T16:31:59.547 INFO:teuthology.orchestra.run.vm09.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:59.559 INFO:teuthology.orchestra.run.vm09.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:59.571 INFO:teuthology.orchestra.run.vm09.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:59.582 INFO:teuthology.orchestra.run.vm09.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 
2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T16:31:59.656 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:31:59.667 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T16:31:59.667 INFO:teuthology.orchestra.run.vm01.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-09T16:31:59.667 INFO:teuthology.orchestra.run.vm01.stdout: qemu-block-extra* rbd-fuse* 2026-03-09T16:31:59.828 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T16:31:59.828 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 51.6 MB disk space will be freed. 
2026-03-09T16:31:59.866 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-09T16:31:59.869 INFO:teuthology.orchestra.run.vm01.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:59.881 INFO:teuthology.orchestra.run.vm01.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:59.894 INFO:teuthology.orchestra.run.vm01.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:31:59.905 INFO:teuthology.orchestra.run.vm01.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T16:32:00.008 INFO:teuthology.orchestra.run.vm09.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:00.019 INFO:teuthology.orchestra.run.vm09.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:00.031 INFO:teuthology.orchestra.run.vm09.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:00.055 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T16:32:00.087 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T16:32:00.156 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T16:32:00.158 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T16:32:00.321 INFO:teuthology.orchestra.run.vm01.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:00.332 INFO:teuthology.orchestra.run.vm01.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:00.344 INFO:teuthology.orchestra.run.vm01.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:00.370 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T16:32:00.403 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T16:32:00.471 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 
55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T16:32:00.473 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T16:32:01.634 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:32:01.667 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:32:01.869 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:32:01.870 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:32:01.976 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:32:02.012 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:32:02.019 INFO:teuthology.orchestra.run.vm09.stdout:Package 'librbd1' is not installed, so not removed 2026-03-09T16:32:02.019 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:32:02.019 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:32:02.019 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T16:32:02.019 INFO:teuthology.orchestra.run.vm09.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T16:32:02.019 INFO:teuthology.orchestra.run.vm09.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T16:32:02.019 INFO:teuthology.orchestra.run.vm09.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T16:32:02.019 INFO:teuthology.orchestra.run.vm09.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T16:32:02.019 INFO:teuthology.orchestra.run.vm09.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:32:02.019 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 
2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T16:32:02.020 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:32:02.042 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:32:02.042 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:32:02.074 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:32:02.223 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:32:02.224 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:32:02.277 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:32:02.278 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:32:02.392 INFO:teuthology.orchestra.run.vm01.stdout:Package 'librbd1' is not installed, so not removed 2026-03-09T16:32:02.392 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:32:02.392 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:32:02.392 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T16:32:02.392 INFO:teuthology.orchestra.run.vm01.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T16:32:02.392 INFO:teuthology.orchestra.run.vm01.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T16:32:02.392 INFO:teuthology.orchestra.run.vm01.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:32:02.393 
INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T16:32:02.393 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:32:02.419 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:32:02.419 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:32:02.451 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:32:02.478 INFO:teuthology.orchestra.run.vm09.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:32:02.479 
INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:32:02.479 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T16:32:02.480 INFO:teuthology.orchestra.run.vm09.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T16:32:02.480 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:32:02.501 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:32:02.501 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:32:02.502 DEBUG:teuthology.orchestra.run.vm09:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-09T16:32:02.561 DEBUG:teuthology.orchestra.run.vm09:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-09T16:32:02.636 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:32:02.661 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:32:02.662 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy 
python3-pastescript 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T16:32:02.789 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T16:32:02.806 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T16:32:02.806 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:32:02.808 DEBUG:teuthology.orchestra.run.vm01:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-09T16:32:02.838 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T16:32:02.839 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T16:32:02.862 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-09T16:32:02.939 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 
2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T16:32:02.964 INFO:teuthology.orchestra.run.vm09.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T16:32:03.128 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-09T16:32:03.128 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 107 MB disk space will be freed. 2026-03-09T16:32:03.132 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T16:32:03.133 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T16:32:03.166 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 
45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T16:32:03.168 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:03.184 INFO:teuthology.orchestra.run.vm09.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T16:32:03.195 INFO:teuthology.orchestra.run.vm09.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-09T16:32:03.205 INFO:teuthology.orchestra.run.vm09.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T16:32:03.217 INFO:teuthology.orchestra.run.vm09.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T16:32:03.227 INFO:teuthology.orchestra.run.vm09.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T16:32:03.238 INFO:teuthology.orchestra.run.vm09.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T16:32:03.249 INFO:teuthology.orchestra.run.vm09.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T16:32:03.260 INFO:teuthology.orchestra.run.vm09.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T16:32:03.276 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T16:32:03.276 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T16:32:03.276 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T16:32:03.276 INFO:teuthology.orchestra.run.vm01.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T16:32:03.276 INFO:teuthology.orchestra.run.vm01.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T16:32:03.276 INFO:teuthology.orchestra.run.vm01.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric 
python3-simplejson 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T16:32:03.277 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T16:32:03.279 INFO:teuthology.orchestra.run.vm09.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T16:32:03.290 INFO:teuthology.orchestra.run.vm09.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T16:32:03.300 INFO:teuthology.orchestra.run.vm09.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T16:32:03.310 INFO:teuthology.orchestra.run.vm09.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T16:32:03.320 INFO:teuthology.orchestra.run.vm09.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T16:32:03.330 INFO:teuthology.orchestra.run.vm09.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T16:32:03.340 INFO:teuthology.orchestra.run.vm09.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T16:32:03.350 INFO:teuthology.orchestra.run.vm09.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T16:32:03.360 INFO:teuthology.orchestra.run.vm09.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T16:32:03.371 INFO:teuthology.orchestra.run.vm09.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T16:32:03.404 INFO:teuthology.orchestra.run.vm09.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T16:32:03.415 INFO:teuthology.orchestra.run.vm09.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T16:32:03.425 INFO:teuthology.orchestra.run.vm09.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T16:32:03.438 INFO:teuthology.orchestra.run.vm09.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T16:32:03.442 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-09T16:32:03.442 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 107 MB disk space will be freed. 2026-03-09T16:32:03.449 INFO:teuthology.orchestra.run.vm09.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T16:32:03.459 INFO:teuthology.orchestra.run.vm09.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T16:32:03.469 INFO:teuthology.orchestra.run.vm09.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T16:32:03.479 INFO:teuthology.orchestra.run.vm09.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T16:32:03.482 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 
100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T16:32:03.484 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:03.490 INFO:teuthology.orchestra.run.vm09.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-09T16:32:03.497 INFO:teuthology.orchestra.run.vm09.stdout:update-initramfs: deferring update (trigger activated) 2026-03-09T16:32:03.504 INFO:teuthology.orchestra.run.vm01.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T16:32:03.508 INFO:teuthology.orchestra.run.vm09.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-09T16:32:03.515 INFO:teuthology.orchestra.run.vm01.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-09T16:32:03.526 INFO:teuthology.orchestra.run.vm09.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-09T16:32:03.526 INFO:teuthology.orchestra.run.vm01.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T16:32:03.537 INFO:teuthology.orchestra.run.vm09.stdout:Removing lua-any (27ubuntu1) ... 2026-03-09T16:32:03.537 INFO:teuthology.orchestra.run.vm01.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T16:32:03.547 INFO:teuthology.orchestra.run.vm09.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-09T16:32:03.551 INFO:teuthology.orchestra.run.vm01.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T16:32:03.558 INFO:teuthology.orchestra.run.vm09.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T16:32:03.562 INFO:teuthology.orchestra.run.vm01.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T16:32:03.571 INFO:teuthology.orchestra.run.vm09.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-09T16:32:03.574 INFO:teuthology.orchestra.run.vm01.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T16:32:03.585 INFO:teuthology.orchestra.run.vm01.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T16:32:03.587 INFO:teuthology.orchestra.run.vm09.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T16:32:03.605 INFO:teuthology.orchestra.run.vm01.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T16:32:03.617 INFO:teuthology.orchestra.run.vm01.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T16:32:03.630 INFO:teuthology.orchestra.run.vm01.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T16:32:03.639 INFO:teuthology.orchestra.run.vm01.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T16:32:03.650 INFO:teuthology.orchestra.run.vm01.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T16:32:03.661 INFO:teuthology.orchestra.run.vm01.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T16:32:03.672 INFO:teuthology.orchestra.run.vm01.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T16:32:03.684 INFO:teuthology.orchestra.run.vm01.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T16:32:03.695 INFO:teuthology.orchestra.run.vm01.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T16:32:03.706 INFO:teuthology.orchestra.run.vm01.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T16:32:03.740 INFO:teuthology.orchestra.run.vm01.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T16:32:03.751 INFO:teuthology.orchestra.run.vm01.stdout:Removing libnbd0 (1.10.5-1) ... 
2026-03-09T16:32:03.764 INFO:teuthology.orchestra.run.vm01.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T16:32:03.775 INFO:teuthology.orchestra.run.vm01.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T16:32:03.786 INFO:teuthology.orchestra.run.vm01.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T16:32:03.797 INFO:teuthology.orchestra.run.vm01.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T16:32:03.808 INFO:teuthology.orchestra.run.vm01.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T16:32:03.819 INFO:teuthology.orchestra.run.vm01.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T16:32:03.831 INFO:teuthology.orchestra.run.vm01.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-09T16:32:03.839 INFO:teuthology.orchestra.run.vm01.stdout:update-initramfs: deferring update (trigger activated) 2026-03-09T16:32:03.851 INFO:teuthology.orchestra.run.vm01.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-09T16:32:03.868 INFO:teuthology.orchestra.run.vm01.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-09T16:32:03.881 INFO:teuthology.orchestra.run.vm01.stdout:Removing lua-any (27ubuntu1) ... 2026-03-09T16:32:03.891 INFO:teuthology.orchestra.run.vm01.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-09T16:32:03.903 INFO:teuthology.orchestra.run.vm01.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T16:32:03.917 INFO:teuthology.orchestra.run.vm01.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-09T16:32:03.935 INFO:teuthology.orchestra.run.vm01.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T16:32:04.017 INFO:teuthology.orchestra.run.vm09.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T16:32:04.050 INFO:teuthology.orchestra.run.vm09.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T16:32:04.076 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T16:32:04.139 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-09T16:32:04.188 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-09T16:32:04.239 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-09T16:32:04.288 INFO:teuthology.orchestra.run.vm09.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T16:32:04.299 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T16:32:04.354 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T16:32:04.382 INFO:teuthology.orchestra.run.vm01.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T16:32:04.415 INFO:teuthology.orchestra.run.vm01.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T16:32:04.439 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T16:32:04.501 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-09T16:32:04.548 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-09T16:32:04.597 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-09T16:32:04.610 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-google-auth (1.5.1-3) ... 
2026-03-09T16:32:04.644 INFO:teuthology.orchestra.run.vm01.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T16:32:04.655 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T16:32:04.660 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-09T16:32:04.705 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:04.708 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T16:32:04.751 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:04.801 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-09T16:32:04.862 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T16:32:04.912 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-09T16:32:04.961 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-09T16:32:04.961 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-09T16:32:05.008 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-09T16:32:05.013 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-09T16:32:05.056 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-09T16:32:05.061 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:05.104 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-09T16:32:05.109 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T16:32:05.151 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-09T16:32:05.161 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-09T16:32:05.198 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T16:32:05.218 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T16:32:05.267 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-09T16:32:05.315 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-09T16:32:05.318 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T16:32:05.363 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-09T16:32:05.379 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-09T16:32:05.412 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-09T16:32:05.429 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T16:32:05.466 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-09T16:32:05.482 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-09T16:32:05.516 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 
2026-03-09T16:32:05.532 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T16:32:05.566 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T16:32:05.591 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-09T16:32:05.640 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-09T16:32:05.691 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-09T16:32:05.704 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T16:32:05.741 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T16:32:05.763 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-09T16:32:05.793 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-09T16:32:05.814 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T16:32:05.844 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T16:32:05.866 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-09T16:32:05.896 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-rsa (4.8-1) ... 2026-03-09T16:32:05.916 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T16:32:05.948 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-09T16:32:05.976 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-09T16:32:05.997 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-09T16:32:06.024 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-09T16:32:06.049 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-09T16:32:06.076 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-09T16:32:06.097 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T16:32:06.122 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T16:32:06.124 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T16:32:06.169 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-09T16:32:06.175 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-09T16:32:06.214 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T16:32:06.225 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T16:32:06.266 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T16:32:06.276 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-rsa (4.8-1) ... 2026-03-09T16:32:06.317 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T16:32:06.329 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-09T16:32:06.368 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-websocket (1.2.3-1) ... 
2026-03-09T16:32:06.378 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-09T16:32:06.422 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T16:32:06.436 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-09T16:32:06.474 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-09T16:32:06.487 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T16:32:06.513 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T16:32:06.522 INFO:teuthology.orchestra.run.vm09.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-09T16:32:06.543 INFO:teuthology.orchestra.run.vm09.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T16:32:06.562 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-09T16:32:06.610 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T16:32:06.657 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T16:32:06.705 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T16:32:06.753 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-09T16:32:06.802 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T16:32:06.857 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-09T16:32:06.903 INFO:teuthology.orchestra.run.vm01.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-09T16:32:06.923 INFO:teuthology.orchestra.run.vm01.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T16:32:06.988 INFO:teuthology.orchestra.run.vm09.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-09T16:32:06.999 INFO:teuthology.orchestra.run.vm09.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-09T16:32:07.018 INFO:teuthology.orchestra.run.vm09.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-09T16:32:07.035 INFO:teuthology.orchestra.run.vm09.stdout:Removing zip (3.0-12build2) ... 2026-03-09T16:32:07.065 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T16:32:07.076 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T16:32:07.119 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T16:32:07.126 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ... 2026-03-09T16:32:07.145 INFO:teuthology.orchestra.run.vm09.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm 2026-03-09T16:32:07.353 INFO:teuthology.orchestra.run.vm01.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-09T16:32:07.366 INFO:teuthology.orchestra.run.vm01.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-09T16:32:07.387 INFO:teuthology.orchestra.run.vm01.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-09T16:32:07.405 INFO:teuthology.orchestra.run.vm01.stdout:Removing zip (3.0-12build2) ... 2026-03-09T16:32:07.431 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 
2026-03-09T16:32:07.442 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T16:32:07.488 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T16:32:07.494 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ... 2026-03-09T16:32:07.513 INFO:teuthology.orchestra.run.vm01.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm 2026-03-09T16:32:08.648 INFO:teuthology.orchestra.run.vm09.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays. 2026-03-09T16:32:08.649 INFO:teuthology.orchestra.run.vm09.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file. 2026-03-09T16:32:09.057 INFO:teuthology.orchestra.run.vm01.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays. 2026-03-09T16:32:09.058 INFO:teuthology.orchestra.run.vm01.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file. 2026-03-09T16:32:10.746 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:32:10.749 DEBUG:teuthology.parallel:result is None 2026-03-09T16:32:11.172 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T16:32:11.175 DEBUG:teuthology.parallel:result is None 2026-03-09T16:32:11.175 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm01.local 2026-03-09T16:32:11.175 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm09.local 2026-03-09T16:32:11.175 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-09T16:32:11.175 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-09T16:32:11.184 DEBUG:teuthology.orchestra.run.vm09:> sudo apt-get update 2026-03-09T16:32:11.226 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-get update 2026-03-09T16:32:11.373 INFO:teuthology.orchestra.run.vm09.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T16:32:11.375 INFO:teuthology.orchestra.run.vm09.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T16:32:11.383 INFO:teuthology.orchestra.run.vm09.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T16:32:11.410 INFO:teuthology.orchestra.run.vm01.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T16:32:11.410 INFO:teuthology.orchestra.run.vm01.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T16:32:11.418 INFO:teuthology.orchestra.run.vm01.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T16:32:11.717 INFO:teuthology.orchestra.run.vm09.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T16:32:11.778 INFO:teuthology.orchestra.run.vm01.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T16:32:12.710 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T16:32:12.724 DEBUG:teuthology.parallel:result is None 2026-03-09T16:32:12.848 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T16:32:12.862 DEBUG:teuthology.parallel:result is None 2026-03-09T16:32:12.863 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-09T16:32:12.865 INFO:teuthology.task.clock:Checking final clock skew... 
2026-03-09T16:32:12.865 DEBUG:teuthology.orchestra.run.vm01:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T16:32:12.866 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout:==============================================================================
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout:+static.215.156. 35.73.197.144 2 u 74 128 377 23.569 -0.292 0.031
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout:*node-1.infogral 168.239.11.197 2 u 55 128 377 23.640 -0.208 0.274
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout:+mail2.light-spe 237.17.204.95 2 u - 128 377 28.795 -0.238 0.130
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout:+81.3.27.46 (ntp 131.188.3.220 2 u 75 128 377 27.460 -0.328 0.186
2026-03-09T16:32:13.049 INFO:teuthology.orchestra.run.vm09.stdout:+ctb01.martinmoe 87.63.200.138 2 u 77 128 377 31.633 -0.344 1.660
2026-03-09T16:32:13.138 INFO:teuthology.orchestra.run.vm01.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T16:32:13.138 INFO:teuthology.orchestra.run.vm01.stdout:==============================================================================
2026-03-09T16:32:13.138 INFO:teuthology.orchestra.run.vm01.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T16:32:13.138 INFO:teuthology.orchestra.run.vm01.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T16:32:13.138 INFO:teuthology.orchestra.run.vm01.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T16:32:13.138 INFO:teuthology.orchestra.run.vm01.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T16:32:13.139 INFO:teuthology.orchestra.run.vm01.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T16:32:13.139 INFO:teuthology.orchestra.run.vm01.stdout:+static.215.156. 35.73.197.144 2 u 68 128 377 23.557 -7.122 0.732
2026-03-09T16:32:13.139 INFO:teuthology.orchestra.run.vm01.stdout:+ntp1.m-online.n 212.18.1.106 2 u 9 128 377 43.192 -6.109 0.848
2026-03-09T16:32:13.139 INFO:teuthology.orchestra.run.vm01.stdout:-ns.gunnarhofman 237.17.204.95 2 u 65 128 377 24.927 -7.574 0.950
2026-03-09T16:32:13.139 INFO:teuthology.orchestra.run.vm01.stdout:+141.144.246.224 146.131.121.246 2 u 16 128 377 29.158 -7.990 1.773
2026-03-09T16:32:13.139 INFO:teuthology.orchestra.run.vm01.stdout:+77.90.0.148 (14 131.188.3.220 2 u 10 128 377 22.869 -6.567 0.900
2026-03-09T16:32:13.139 INFO:teuthology.orchestra.run.vm01.stdout:+81.3.27.46 (ntp 131.188.3.220 2 u 13 128 377 27.516 -5.754 1.329
2026-03-09T16:32:13.139 INFO:teuthology.orchestra.run.vm01.stdout:+node-1.infogral 168.239.11.197 2 u 28 128 377 23.534 -5.591 1.394
2026-03-09T16:32:13.139 INFO:teuthology.orchestra.run.vm01.stdout:*47.ip-51-75-67. 185.248.188.98 2 u 14 128 377 21.198 -6.974 0.723
2026-03-09T16:32:13.139 INFO:teuthology.orchestra.run.vm01.stdout:+ctb01.martinmoe 87.63.200.138 2 u 9 128 377 31.628 -6.387 1.752
2026-03-09T16:32:13.139 INFO:teuthology.orchestra.run.vm01.stdout:-mail2.light-spe 237.17.204.95 2 u 30 128 377 28.788 -7.211 0.657
2026-03-09T16:32:13.139 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-09T16:32:13.141 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-09T16:32:13.141 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-09T16:32:13.144 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-09T16:32:13.146 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-09T16:32:13.148 INFO:teuthology.task.internal:Duration was 2937.644470 seconds
2026-03-09T16:32:13.148 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-09T16:32:13.150 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-09T16:32:13.150 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T16:32:13.151 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T16:32:13.178 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-09T16:32:13.178 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm01.local 2026-03-09T16:32:13.178 DEBUG:teuthology.orchestra.run.vm01:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-09T16:32:13.224 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local 2026-03-09T16:32:13.224 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-09T16:32:13.237 INFO:teuthology.task.internal.syslog:Gathering journactl... 2026-03-09T16:32:13.237 DEBUG:teuthology.orchestra.run.vm01:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T16:32:13.266 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T16:32:13.476 INFO:teuthology.task.internal.syslog:Compressing syslogs... 
2026-03-09T16:32:13.476 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T16:32:13.477 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T16:32:13.483 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T16:32:13.484 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T16:32:13.484 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T16:32:13.484 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-09T16:32:13.484 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-09T16:32:13.486 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T16:32:13.486 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T16:32:13.487 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-09T16:32:13.487 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T16:32:13.487 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-09T16:32:13.515 INFO:teuthology.orchestra.run.vm09.stderr: 93.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-09T16:32:13.520 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 95.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-09T16:32:13.521 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo 2026-03-09T16:32:13.524 INFO:teuthology.task.internal:Restoring /etc/sudoers... 
2026-03-09T16:32:13.524 DEBUG:teuthology.orchestra.run.vm01:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-09T16:32:13.573 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-09T16:32:13.582 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump 2026-03-09T16:32:13.584 DEBUG:teuthology.orchestra.run.vm01:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-09T16:32:13.614 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-09T16:32:13.622 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = core 2026-03-09T16:32:13.633 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core 2026-03-09T16:32:13.641 DEBUG:teuthology.orchestra.run.vm01:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-09T16:32:13.676 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T16:32:13.676 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-09T16:32:13.684 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T16:32:13.684 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive 2026-03-09T16:32:13.687 INFO:teuthology.task.internal:Transferring archived files... 2026-03-09T16:32:13.687 DEBUG:teuthology.misc:Transferring archived files from vm01:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/533/remote/vm01 2026-03-09T16:32:13.687 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-09T16:32:13.727 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/533/remote/vm09 2026-03-09T16:32:13.728 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-09T16:32:13.740 INFO:teuthology.task.internal:Removing archive directory... 2026-03-09T16:32:13.740 DEBUG:teuthology.orchestra.run.vm01:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-09T16:32:13.770 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-09T16:32:13.785 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload 2026-03-09T16:32:13.787 INFO:teuthology.task.internal:Not uploading archives. 2026-03-09T16:32:13.787 DEBUG:teuthology.run_tasks:Unwinding manager internal.base 2026-03-09T16:32:13.790 INFO:teuthology.task.internal:Tidying up after the test... 
2026-03-09T16:32:13.790 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T16:32:13.819 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T16:32:13.821 INFO:teuthology.orchestra.run.vm01.stdout: 258076 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 16:32 /home/ubuntu/cephtest
2026-03-09T16:32:13.830 INFO:teuthology.orchestra.run.vm09.stdout: 258077 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 16:32 /home/ubuntu/cephtest
2026-03-09T16:32:13.831 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T16:32:13.837 INFO:teuthology.run:Summary data:
description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests}
duration: 2937.6444702148438
flavor: default
owner: kyr
success: true
2026-03-09T16:32:13.837 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T16:32:13.868 INFO:teuthology.run:pass